From nginx-forum at nginx.us Tue Apr 1 00:13:20 2014 From: nginx-forum at nginx.us (jakubp) Date: Mon, 31 Mar 2014 20:13:20 -0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: References: <201303281832.22834.vbart@nginx.com> Message-ID: Hi I am struggling with the very same issue at the moment... If I read the right the code correctly all that nginx cares about is cache size, keys zone size is not checked at all (except when more space needs to be allocated). ngx_http_file_cache_manager(void *data) { // if (size < cache->max_size) { return next; } wait = ngx_http_file_cache_forced_expire(cache); Are there any plans to monitor keys zone size and remove a chunk of LRU keys if it's close to being full? Rgds Jakub Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237829,248885#msg-248885 From ben at indietorrent.org Tue Apr 1 01:38:07 2014 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 31 Mar 2014 21:38:07 -0400 Subject: Defining a default server for when vhost does not exist for requested hostname (including blank hostname), for http and https In-Reply-To: <20140328175847.GU34696@mdounin.ru> References: <533587DB.6050805@indietorrent.org> <20140328154547.GQ34696@mdounin.ru> <5335A885.1080303@indietorrent.org> <20140328175847.GU34696@mdounin.ru> Message-ID: <533A187F.3030504@indietorrent.org> On 3/28/2014 1:58 PM, Maxim Dounin wrote: > Nobody care enough to submit a patch. > Likely due to the fact that SNI isn't considered to be an option > for serious SSL-enabled sites anyway due to still limited > client-side support, see here for details: > > http://en.wikipedia.org/wiki/Server_Name_Indication#Client_side Makes sense! And I can live with the SSL warning that results when I employ the method I described a few posts back. Thanks again for your help, everyone! -Ben From lists at ruby-forum.com Tue Apr 1 06:29:48 2014 From: lists at ruby-forum.com (Mapper Uno) Date: Tue, 01 Apr 2014 08:29:48 +0200 Subject: Accessing HTTP request headers in nginx module In-Reply-To: <2fb7eec30ec5bd2d55c012d70ed8b7cc@ruby-forum.com> References: <66541aa62d7dd3d123637da70212d406@ruby-forum.com> <19b977753b5ddd0b3ce3821f0bf470c1@ruby-forum.com> <20140328130710.GM34696@mdounin.ru> <2fb7eec30ec5bd2d55c012d70ed8b7cc@ruby-forum.com> Message-ID: I could finally get hold of all the http header fields. A nice link that describes how to access the headers http://wiki.nginx.org/HeadersManagement Thanks for all the replies. Mapper Uno wrote in post #1141373: > Thanks Maxim for your reply. Since I am newbie, please excuse my > questions. I am still unable to retrieve the variable. > > All I have in the handler routine is: ngx_http_request_t *r > I can see that r->headers_in.headers is a list, but then > when you say $http_operation, it is confusing me. > > Could you please explain > > Maxim Dounin wrote in post #1141328: >> Hello! >> >> On Fri, Mar 28, 2014 at 01:06:00AM +0100, Mapper Uno wrote: >> >>> indicates that these are not "custom" headers. With reference to my >>> above example, how can I access my custom header "OPERATION" in module >>> handler ? >> >> Please make sure to read not only the first sentence. Note the >> "Also there are other variables" in the same paragraph. The >> $http_* variables provide access to all headers, including any >> custom ones. 
And it is documented as: >> >> $http_name >> arbitrary request header field; the last part of a variable name >> is the field name converted to lower case with dashes replaced by >> underscores >> >> Therefore, you may either use the $http_operation variable to >> access the header you are looking for. Or you may take a look at >> the source code to find out how it's implemented. Take a look at >> the src/http/ngx_http_variables.c, functions >> ngx_http_variable_unknown_header_in() and >> ngx_http_variable_unknown_header() (first one says the header >> should be searched in r->headers_in.headers list, second one does >> actual search). >> >> -- >> Maxim Dounin >> http://nginx.org/ -- Posted via http://www.ruby-forum.com/. From pablo.platt at gmail.com Tue Apr 1 07:18:58 2014 From: pablo.platt at gmail.com (pablo platt) Date: Tue, 1 Apr 2014 10:18:58 +0300 Subject: Tunnel TLS similar to Websockets Message-ID: Hi, Is it possible to tunnel TLS on one host without terminating it, like Websockets are tunneled? http://nginx.org/en/docs/http/websocket.html location / { proxy_pass http://backend; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } If I understand correctly, both Websockets and TLS send an HTTP/1.1 CONNECT request and then an Upgrade header. Can I use proxy_pass in a server block? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ngw at nofeed.org Tue Apr 1 10:34:21 2014 From: ngw at nofeed.org (Nicholas Wieland) Date: Tue, 1 Apr 2014 12:34:21 +0200 Subject: nginx doesn't seem to register configuration changes Message-ID: Hi *, I'm using nginx as a reverse proxy for some puma backends via unix socket. The problem I'm having right now is that even after several reloads nginx doesn't seem to use the changes I made to the configuration. I honestly have no idea what to try as this is definitely very weird. The problem appears to be the upstream directive: nginx keeps using the old url to the old socket, even though I changed it. deployer at demo:~$ uname -a Linux demo.ec.thefool.it 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux deployer at demo:~$ sudo nginx -v nginx version: nginx/1.4.1 My conf: https://gist.github.com/ngw/91312b5602816cfb2632 The error: 2014/04/01 12:10:43 [crit] 30954#0: *1 connect() to unix:///home/deployer/apps/conversationflow/puma.sock failed (2: No such file or directory) while connecting to upstream, client: 93.51.167.60, server: demo.ec.thefool.it, request: "GET / HTTP/1.1", upstream: "http://unix:///home/deployer/apps/conversationflow/puma.sock:/", host: "demo.ec.thefool.it" This is the only error I get. As you can see puma.sock is in the wrong place, the correct one is the one I configured (obviously). I've also tried to change the socket path to something I made up, and nginx registers the change and behaves accordingly. If I change the socket path to the real one, here we go and it doesn't use it... Any suggestion? -- ngw -- Nicholas Wieland Sent with Airmail -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 1 11:01:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Apr 2014 15:01:34 +0400 Subject: nginx doesn't seem to register configuration changes In-Reply-To: References: Message-ID: <20140401110134.GT34696@mdounin.ru> Hello!
On Tue, Apr 01, 2014 at 12:34:21PM +0200, Nicholas Wieland wrote: > Hi *, I?m using nginx as reverse proxy for some puma backends > via unix socket. > The problem I?m having right now is that even after several > reloads nginx doesn?t seem to use the changes I did to the > configuration. I honestly have no idea what to try as this is > definitely very weird. The problem appears to be the upstream > directive, nginx keeps using the old url to the old socket, even > though I changed it. > > deployer at demo:~$ uname -a > Linux demo.ec.thefool.it 3.2.0-37-virtual #58-Ubuntu SMP Thu Jan 24 15:48:03 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux > deployer at demo:~$ sudo nginx -v > nginx version: nginx/1.4.1 > > My conf: > https://gist.github.com/ngw/91312b5602816cfb2632 > > The error: > 014/04/01 12:10:43 [crit] 30954#0: *1 connect() to unix:///home/deployer/apps/conversationflow/puma.sock failed (2: No such file or directory) while connecting to upstream, client: 93.51.167.60, server: demo.ec.thefool.it, request: "GET / HTTP/1.1", upstream: "http://unix:///home/deployer/apps/conversationflow/puma.sock:/", host: ?demo.ec.thefool.it" > > This is the only error I get. > As you can see puma.sock is in the wrong place, the correct one > is the one I configured (obviously). > I?ve also tried to change the socket path to something I made > up, and nginx registers the change and behaves accordingly. If I > change the socket path to the real one, here we go and it > doesn?t use it? > Any suggestion? Take a look at global error log as defined in your nginx.conf, likely it has an explanation. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Apr 1 11:04:15 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Apr 2014 15:04:15 +0400 Subject: [nginx] Vary header is repeated twice in response In-Reply-To: References: Message-ID: <20140401110415.GU34696@mdounin.ru> Hello! On Tue, Apr 01, 2014 at 02:54:56PM +0400, Yury Kirpichev wrote: > Hi, > > I've got an issue that "Vary" header is repeated twice in response when > "gzip_vary on" is specified in config file; > > My configuration is the following: > Two instances of nginx are running on different hosts (A and B) > There is > location /smth/ { > proxy_pass http://B/smth; > } > > and > gzip_vary on; > in config for host A. > > B adds "Vary: Accept-Encoding" in response for http://B/smth > > And then if http://A/smth request is performed "Vary" header is returned > twice in response. > < Connection: keep-alive > < Vary: Accept-Encoding > < Vary: Accept-Encoding > < date: Tue, 01 Apr 2014 10:02:27 GMT > < expires: Tue, 01 Apr 2014 10:07:27 GMT > < server: nginx/1.4.4 > > > Could you please help me to resolve this problem? Is it known issue or it > is normal behaviour or may be something wrong on my side? This is certainly not relevant to nginx-devel@, please use nginx@ mailing list for such questions (Cc'd). -- Maxim Dounin http://nginx.org/ From ngw at nofeed.org Tue Apr 1 11:11:47 2014 From: ngw at nofeed.org (Nicholas Wieland) Date: Tue, 1 Apr 2014 13:11:47 +0200 Subject: nginx doesn't seem to register configuration changes In-Reply-To: <20140401110134.GT34696@mdounin.ru> References: <20140401110134.GT34696@mdounin.ru> Message-ID: On April 1, 2014 at 1:01:46 PM, Maxim Dounin (mdounin at mdounin.ru) wrote: > Any suggestion? Take a look at global error log as defined in your nginx.conf, likely it has an explanation. Not really, the error I posted is actually the only error I get. 
My best guess is that for some reason nginx doesn't like the path I pass and uses the previous one, but this is a very wild guess. -- ngw -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 1 11:42:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Apr 2014 15:42:34 +0400 Subject: nginx doesn't seem to register configuration changes In-Reply-To: References: <20140401110134.GT34696@mdounin.ru> Message-ID: <20140401114234.GW34696@mdounin.ru> Hello! On Tue, Apr 01, 2014 at 01:11:47PM +0200, Nicholas Wieland wrote: > On April 1, 2014 at 1:01:46 PM, Maxim Dounin (mdounin at mdounin.ru) wrote: > > > Any suggestion? > > > Take a look at global error log as defined in your nginx.conf, likely > > it has an explanation. > > > Not really, the error I posted is actually the only error I get. > My best guess is that for some reasons nginx doesn't like the > path I pass and uses the previous one, but this is a very wild > guess. The error you've posted is expected to appear in your per-server error log, as defined in the include file you've shared with us. This is not the log you should look at - no global errors will be logged to it; you should look at the global log, as defined in your nginx.conf (not in an include file, but in nginx.conf itself). If you don't see any errors there, make sure to configure at least the "notice" logging level and test whether you see the relevant notice messages during a configuration reload. If you don't, you are probably looking at the wrong file. See here for documentation: http://nginx.org/r/error_log -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Tue Apr 1 11:48:00 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 1 Apr 2014 13:48:00 +0200 Subject: Binary upgrade from package Message-ID: Hello, I am wondering what steps the upgrade of an existing nginx process through the official packages follows. I am using official Debian packages. More precisely, when the new binary is installed through the package system, what signal(s) is (are) sent to migrate from the old to the new binary? Is it a simple stop/start? Or does the package upgrade follow the steps of a simple on-the-fly upgrade? By that, I mean the following steps: 1) Configuration check; if unsuccessful, the upgrade fails 2) USR2 to the old master 3) Start of the new binary; if that fails, HUP to the old master process and the upgrade fails 4) QUIT to the old master Of course, no edge case such as the rtsig method would be supported, nor any check that the new master process is working properly (spawning children, answering requests), since that is maybe beyond the scope of a simple automated package upgrade script, and some of the checks might require human eyes and expertise. That would ensure no downtime when official upgrades (including security updates) are done. Is all that already implemented? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 1 12:32:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Apr 2014 16:32:50 +0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: References: <201303281832.22834.vbart@nginx.com> Message-ID: <20140401123249.GX34696@mdounin.ru> Hello! On Mon, Mar 31, 2014 at 08:13:20PM -0400, jakubp wrote: > Hi > > I am struggling with the very same issue at the moment...
> > If I read the right the code correctly all that nginx cares about is cache > size, keys zone size is not checked at all (except when more space needs to > be allocated). > > ngx_http_file_cache_manager(void *data) > { > // > if (size < cache->max_size) { > return next; > } > > wait = ngx_http_file_cache_forced_expire(cache); > > Are there any plans to monitor keys zone size and remove a chunk of LRU keys > if it's close to being full? Isn't max_size and inactive work for you? -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Tue Apr 1 17:59:45 2014 From: lists at ruby-forum.com (Sudara Williams) Date: Tue, 01 Apr 2014 19:59:45 +0200 Subject: Proxying large downloads from s3 Message-ID: Hi guys! We are proxying files from s3 through our app and had a couple questions on ideal config. When I arrived on the scene, the following config was in place: location /download/ { internal; proxy_pass https://s3.amazonaws.com; proxy_buffering off; proxy_buffers 2 4m; proxy_buffer_size 4m; proxy_busy_buffers_size 4m; } I was called in because the server started running out of memory. (It seemed fine for a long time, probably just didn't max out for a while) After looking up config, it seemed like 4m was VERY large proxy_buffer size. I was uncertain about whether proxy_buffering should be off or not. https://coderwall.com/p/rlguog recommends it off, but most of the mailing list replies say unless you are using Comet or something, there's no need for it to be off. We removed all the extra proxy config and we ended up with something like this: location /download/ { internal; proxy_pass https://s3.amazonaws.com; proxy_buffering off; chunked_transfer_encoding off; } We also tried with proxy_buffering on. In both cases it seems like we are seeing truncated responses. This especially happens on large files ??the file will just be "done" downloading early, and the zip file will be corrupt. We are also seeing errors like this, but uncertain if it's related. 2014/03/20 00:02:36 [error] 15519#0: *24132 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 207.154.10.35, server: localhost, request: "GET /download/?uuid=bdba5a17f58c8c808b6cc7fd97cf752a7b80ce1cc6eba6f0b56e411ef7cf3135 HTTP/1.1", upstream: "https://192.168.99.1:443/utilities/s3_manifest_for_nginx_to_zip_and_stream/?uuid=bdbaecc7fd97cf752a7b80ce1cc6eoaoef7cf3135", host: "streaming.somesite.com" I was suspicious of chunked_transfer_encoding being off, so config now looks like this: location /pro-core.com/ { internal; proxy_pass https://s3.amazonaws.com; proxy_max_temp_file_size 256m; proxy_read_timeout 300; } We are now waiting to find out if our large downloads are still being interrupted. But we would appreciate any advice about proxying large files from s3. Should proxy_buffering be on or off? Thanks :) Sudara -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Tue Apr 1 18:23:27 2014 From: nginx-forum at nginx.us (skyice) Date: Tue, 01 Apr 2014 14:23:27 -0400 Subject: 404 error with rewriting rule Message-ID: <1e1c1525a5b29195a6d8a8fa040c2a80.NginxMailingListEnglish@forum.nginx.org> Hello, With this rule : rewrite ^/([^/])(/.*) $2?locale=$1&$query_string last; I get a 404 error when I try go on http://example.org/en/test.php instead of http://example.org/test.php?locale=en Anyone have any idea to solve the problem ? Thanks. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248905,248905#msg-248905 From nginx-forum at nginx.us Tue Apr 1 19:03:00 2014 From: nginx-forum at nginx.us (hpatoio) Date: Tue, 01 Apr 2014 15:03:00 -0400 Subject: Cache permissions problem Message-ID: <10e722367bf6442b114e7c3c460880dc.NginxMailingListEnglish@forum.nginx.org> Hello. I have a custom build of nginx (1.4.6) with the ngx_cache_purge module (2.1). In the config file I have a cache zone declared like this: proxy_cache_path /tmp/nginx_cache keys_zone=MYCACHE:10m; I have not specified any "user" (http://wiki.nginx.org/CoreModule#user) in my config. When I run nginx with my user, the cache directory is created with permissions drwx------ and is owned by my user and my group. The problem is that nothing is written in the cache directory. If I run nginx with sudo everything works. Where am I wrong? Hints? Thanks -- Simone Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248906,248906#msg-248906 From francis at daoine.org Tue Apr 1 21:31:40 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 1 Apr 2014 22:31:40 +0100 Subject: 404 error with rewriting rule In-Reply-To: <1e1c1525a5b29195a6d8a8fa040c2a80.NginxMailingListEnglish@forum.nginx.org> References: <1e1c1525a5b29195a6d8a8fa040c2a80.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140401213140.GC16942@daoine.org> On Tue, Apr 01, 2014 at 02:23:27PM -0400, skyice wrote: Hi there, > With this rule : > > rewrite ^/([^/])(/.*) $2?locale=$1&$query_string last; > > I get a 404 error when I try go on http://example.org/en/test.php instead of > http://example.org/test.php?locale=en Does this rewrite regex match this request? What does the debug log say? f -- Francis Daly francis at daoine.org From crirus at gmail.com Wed Apr 2 06:30:52 2014 From: crirus at gmail.com (Cristian Rusu) Date: Wed, 2 Apr 2014 09:30:52 +0300 Subject: Transfering url arguments to different location Message-ID: Hello I have a setup on nginx to count downloads: location / { post_action /afterdownload; Here I have a value in $arg_key location /afterdownload { I need $arg_key here Any way to send it to the /afterdownload section? Thank you --------------------------------------------------------------- Cristian Rusu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jayadev at ymail.com Wed Apr 2 08:01:07 2014 From: jayadev at ymail.com (Jayadev C) Date: Wed, 2 Apr 2014 01:01:07 -0700 (PDT) Subject: Keepalive not working with upstream tcp server Message-ID: <1396425667.98298.YahooMailNeo@web163501.mail.gq1.yahoo.com> I am trying to use nginx to proxy my requests to a custom tcp backend server that I have. I am following the same model as the default memcached module within the nginx code base (1.5.10); the relevant config file is attached. I tried most of the keepalive options but I still see new connections getting created to the upstream server (rather, I do see the connection getting closed by nginx in strace). Do I need to specifically compile nginx with the http upstream keepalive module? I thought it was enabled by default, but the code doesn't seem to go through it when looking in gdb. Is there any other specific module or setting I need to use for the upstream tcp persistent connection use case? Thanks in advance, Jayadev

--- nginx.conf ------
http {
    default_type  application/octet-stream;
    keepalive_timeout  100;
    proxy_http_version 1.1;
    upstream my_backend {
        server 127.0.0.1:1111;
        keepalive 100;
    }
    server {
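        # note: 'my_pass' below is a directive from my own custom module (modeled on the stock memcached module), not a standard nginx directive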
        listen       4080;
        keepalive_timeout  100;
        keepalive_requests 100000;
        proxy_http_version 1.1;
        proxy_set_header Connection keepalive;
        location / {
            root   html;
            index  index.html index.htm;
            my_pass my_backend;
        }
    }
-------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Apr 2 09:11:17 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Apr 2014 10:11:17 +0100 Subject: Transfering url arguments to different location In-Reply-To: References: Message-ID: On 2 Apr 2014 07:31, "Cristian Rusu" wrote: > > Hello > > I have a setup on nginx to count downloads. > > > location / {. > post_action /afterdownload; > > Here I have a value in $arg_key > > location /afterdownload { > > I need $arg_key here > > Any way to send it to /afterdownload section? I believe it's available as a variable in that section already. You can add it as a custom header or path component however you're most comfortable. But why not reference it in the post_action statement? (I don't know that post_action /can/ take variables, so this suggestion might be null and void in the face of the documentation ... :-)) As an aside, IIRC people @nginx have stated publicly that post_action is a hack, and that its behaviour should not be relied on. I'm without Internets right now so can't find you the quote, but it was sufficient to put me off using it for an audit function a while back. Plus it pollutes your access logs with the final URI and/or response code served, not the first. HTH, J -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Wed Apr 2 09:11:17 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Apr 2014 10:11:17 +0100 Subject: 404 error with rewriting rule In-Reply-To: <1e1c1525a5b29195a6d8a8fa040c2a80.NginxMailingListEnglish@forum.nginx.org> References: <1e1c1525a5b29195a6d8a8fa040c2a80.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 1 Apr 2014 19:23, "skyice" wrote: > > Hello, > > With this rule : > > rewrite ^/([^/])(/.*) $2?locale=$1&$query_string last; Your first capture is looking for exactly 1 character. Do you really mean that? J -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 2 09:30:55 2014 From: nginx-forum at nginx.us (jakubp) Date: Wed, 02 Apr 2014 05:30:55 -0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: <20140401123249.GX34696@mdounin.ru> References: <20140401123249.GX34696@mdounin.ru> Message-ID: <500363df3ad3fef7ba8584fa9d807824.NginxMailingListEnglish@forum.nginx.org> Hi Maxim Let me explain the use case. I am using the cache module to serve a very large library. Some files are very popular, but a lot of them are not popular at all. To deal with this long tail I use proxy_cache_min_uses to cache a file only after it has been requested several times. So what I think happens is that disk is not a limiting factor (at least not enough), but the keys zone grows very quickly. Thanks, Kuba Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237829,248913#msg-248913 From al-nginx at none.at Wed Apr 2 11:03:41 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 02 Apr 2014 13:03:41 +0200 Subject: What is better location regex or map regex? Message-ID: <714ab9f5a394029de9d1b2910f3e8090@none.at> Hi.
I try to transform the pligg htaccess rules to nginx. https://github.com/Pligg/pligg-cms/blob/master/htaccess.default There is one from 2010 http://www.edwardawebb.com/web-development/running-pligg-nginx-rewrite-rules this transformation have some optimization potential, imho ;-). I would use location / { rewrite $uri $dest last; try_files $uri /index.php; } instead of if (!-e $request_filename){ .... } and map $uri $dest { '~^/advanced-search/?$' '/advancedsearch.php'; '~^/profile/?' '/profile.php'; ... '~^/([^/]+)/?$' '/index.php?category=$1'; '~^/([^/]+)/page/([^/]+)/?$' '/index.php?category=$1&page=$2'; } or location ~ ^/advanced-search/?$ { rewrite ^/advanced-search/?$ /advancedsearch.php last; } location ~ ^/profile/? { rewrite ^/profile/? /profile.php last; } ... What is your suggestion for the fastest and smallest solution. thanks aleks From batuhangoksu at gmail.com Wed Apr 2 11:24:01 2014 From: batuhangoksu at gmail.com (=?UTF-8?Q?Batuhan_G=C3=B6ksu?=) Date: Wed, 2 Apr 2014 14:24:01 +0300 Subject: nginx upload module compile the problem. Message-ID: hi friends, nginx version 1.4.7 upload module version 2.0.12 -- Sincerely, Batuhan G?ksu -------------- next part -------------- An HTML attachment was scrubbed... URL: From batuhangoksu at gmail.com Wed Apr 2 11:24:46 2014 From: batuhangoksu at gmail.com (=?UTF-8?Q?Batuhan_G=C3=B6ksu?=) Date: Wed, 2 Apr 2014 14:24:46 +0300 Subject: nginx upload module compile the problem. In-Reply-To: References: Message-ID: hi friends, nginx version 1.4.7 upload module version 2.0.12 --- I want to compile along with nginx upload module. I'm getting the following error during compilation How can I solve this problem /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:1028:13: warning: 'MD5_Update' is deprecated: first deprecated in OS X 10.7 [-Wdeprecated-declarations] MD5Update(&u->md5_ctx->md5, buf, len); ^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:19:21: note: expanded from macro 'MD5Update' #define MD5Update MD5_Update ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/openssl/md5.h:114:5: note: 'MD5_Update' declared here int MD5_Update(MD5_CTX *c, const void *data, size_t len) DEPRECATED_IN_MAC_OS_X_VERSION_10_7... ^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:1031:13: warning: 'SHA1_Update' is deprecated: first deprecated in OS X 10.7 [-Wdeprecated-declarations] SHA1_Update(&u->sha1_ctx->sha1, buf, len); ^ /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/openssl/sha.h:122:5: note: 'SHA1_Update' declared here int SHA1_Update(SHA_CTX *c, const void *data, size_t len) DEPRECATED_IN_MAC_OS_X_VERSION_10_... 
^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:1869:17: error: no member named 'to_write' in 'ngx_http_request_body_t' rb->to_write = rb->bufs; ~~ ^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:1928:9: error: no member named 'to_write' in 'ngx_http_request_body_t' rb->to_write = rb->bufs; ~~ ^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:2010:59: error: no member named 'to_write' in 'ngx_http_request_body_t' rc = ngx_http_process_request_body(r, rb->to_write); ~~ ^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:2026:21: error: no member named 'to_write' in 'ngx_http_request_body_t' rb->to_write = rb->bufs->next ? rb->bufs->next : rb->bufs; ~~ ^ /Users/batuhangoksu/Desktop/nginx_upload_module-2.0.12/ngx_http_upload_module.c:2118:47: error: no member named 'to_write' in 'ngx_http_request_body_t' rc = ngx_http_process_request_body(r, rb->to_write); ~~ ^ 6 warnings and 5 errors generated. make[1]: *** [objs/addon/nginx_upload_module-2.0.12/ngx_http_upload_module.o] Error 1 make: *** [build] Error 2 On Wed, Apr 2, 2014 at 2:24 PM, Batuhan G?ksu wrote: > hi friends, > > nginx version 1.4.7 > upload module version 2.0.12 > > > > -- > Sincerely, > Batuhan G?ksu > -- Sincerely, Batuhan G?ksu -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 2 12:20:41 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Apr 2014 16:20:41 +0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: <500363df3ad3fef7ba8584fa9d807824.NginxMailingListEnglish@forum.nginx.org> References: <20140401123249.GX34696@mdounin.ru> <500363df3ad3fef7ba8584fa9d807824.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140402122041.GA34696@mdounin.ru> Hello! On Wed, Apr 02, 2014 at 05:30:55AM -0400, jakubp wrote: > Hi Maxim > > Let me explain the use case. > I am using cache module to serve very large library. Some files are very > popular but a ot of them are not popular at all though. To deal with this > long tail I use proxy_cache_min_uses to cache only after it was requested > several times. So what I think happens is that disk is not a limiting factor > (at least not enough) but keys zone grows very quickly. What currently can be used for such a use case is "inactive=" parameter of the proxy_cache_path directive (see http://nginx.org/r/proxy_cache_path). It ensures that items not recently requested are removed from the cache, including ones created with proxy_cache_min_uses. Have you tried tuning it? -- Maxim Dounin http://nginx.org/ From carsten.germer at intolabs.net Wed Apr 2 12:48:33 2014 From: carsten.germer at intolabs.net (Carsten Germer) Date: Wed, 2 Apr 2014 14:48:33 +0200 Subject: problem with echo_before when proxying a server which sends gzipped content Message-ID: Hi everyone, currently I'm, trying to configure NGINX as a proxy for JSON from the iTunes API. It's for a small game, iTunes is slow sometimes and the data for the game is mostly the same for a good length of time, anyway. The JSON from iTunes is to be padded with the original requests callback parameters. For this there are many good posts out on the net, but I can't seem to get the basic echo_* to work. I boiled my configuration of nginx down to the point where I just use echo_before or echo_after and proxy_pass. If I append something with echo_after it works fine in browsers and in jQuery. 
If I prepend anything with echo_before the answer can't be read by browsers, "curl --compressed" throws "curl: (23) Error while processing content unencoding: invalid block type". If I configure Firefox with "about:config" to "network.http.accept-encoding:true" it fixes fixes display in Firefox. When I look in the network tab of chrome console I see that requesting ".../echo-after/" closes the request after 2Xms. Requesting ".../echo-before" also gets 200 ok but never arrives fully, is shown as "pending" indefinitely. My best bet is, that it has something to do with gzip-compressed answer from iTunes but I can't find any solution or even hint for my level of understanding of the inner workings of nginx. One more info: I tried many different combinations of "gzip on|off" and other directives, basically I indiscriminately tried everything I found mentioned somewhere. But, nothing changed the behavior much as far as I could see, so I stripped it out of the configuration again for readability. # Proxy iTunes for bug hunting # test by accessing http://json.musiguess.com/itunes/[raw|echo-after|echo-before]/ location /itunes/raw/ { # suppress proxying for testing completely proxy_cache off; # getting a json containing N current top albums from itunes proxy_pass http://itunes.apple.com/de/rss/topalbums/limit=2/explicit=true/json; } location /itunes/echo-after/ { proxy_cache off; proxy_pass http://itunes.apple.com/de/rss/topalbums/limit=2/explicit=true/json; # echo something after body echo_after_body ");"; } location /itunes/echo-before/ { proxy_cache off; # echo something before body echo_before_body -n "abc("; proxy_pass http://itunes.apple.com/de/rss/topalbums/limit=2/explicit=true/json; } location /itunes/echo-beforeNafter/ { proxy_cache off; echo_before_body -n "abc("; proxy_pass http://itunes.apple.com/de/rss/topalbums/limit=2/explicit=true/json; echo_after_body ");"; } You are very welcome to access http://json.musiguess.com/itunes/... if it may help you assess my problem. ~# nginx -V nginx version: nginx/1.5.10 built by gcc 4.7.2 (Debian 4.7.2-5) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --add-module=/var/cache/nginx/nginx-1.5.10/src/echo --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security' --with-ld-opt=-Wl,-z,relro --with-ipv6 Echo v0.51 Any help is greatly appreciated! Cheers /Carsten --- Carsten Germer Creative Director intolabs GmbH http://www.intolabs.net/ From reallfqq-nginx at yahoo.fr Wed Apr 2 13:13:35 2014 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Wed, 2 Apr 2014 15:13:35 +0200 Subject: Transfering url arguments to different location In-Reply-To: References: Message-ID: On Wed, Apr 2, 2014 at 11:11 AM, Jonathan Matthews wrote: > As an aside, IIRC people @nginx have stated publicly that post_action is a > hack, and that its behaviour should not be relied on. I'm without Internets > right now so can't find you the quote, but it was sufficient to put me off > using it for an audit function a while back. Plus it pollutes your access > logs with the final URI and/or response code served, not the first. > ?If someone had details about reasons not to use post_action (either from the referenced IIRC discussion or from other sources), I would be very interested in them?. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed Apr 2 13:22:03 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 2 Apr 2014 15:22:03 +0200 Subject: What is better location regex or map regex? In-Reply-To: <714ab9f5a394029de9d1b2910f3e8090@none.at> References: <714ab9f5a394029de9d1b2910f3e8090@none.at> Message-ID: I would use: - rewrite directives only at server level, no need for location here - a single regex with OR logic to match both the advancedsearch and the profile URI since the rewriting grammar is the same - a single regex for both 'index.php' rewritings, since the grammar of category+page is the same as the grammar for page with an optionally added component To sum up: 2 rewrite directives at server level. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Wed Apr 2 13:25:42 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Wed, 2 Apr 2014 18:55:42 +0530 Subject: how to use keepalive with Nginx revers proxy? Message-ID: Hi, Can some one provide me an example to set keep alive connection between Nginx(reverse proxy) and backend server? I can not use upstream module as my backend IP is dynamic based on variable. So I can not use keepalive directive of upstream. I have used below directive in location block. proxy_pass http://$IP ; Thanks, Makaikol -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Wed Apr 2 13:42:54 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Wed, 2 Apr 2014 19:12:54 +0530 Subject: How to send proxy cache status to backend server? In-Reply-To: <20140327124947.GW34696@mdounin.ru> References: <20140317025022.GQ34696@mdounin.ru> <20140319141532.GY34696@mdounin.ru> <20140320132636.GE34696@mdounin.ru> <20140327113257.GT34696@mdounin.ru> <20140327124947.GW34696@mdounin.ru> Message-ID: Hi, In my configuration I have caching layer of Nginx and a separate proxy layer which works as reverse proxy to original upstream backend. As discussed previously we can pass cache status from caching layer to upstream but is there anything else which I can get from cache file (when cache is expired) and pass to upstream? I want to set some condition to my proxy layer to select original upstream based on that. Thanks, Makailol On Thu, Mar 27, 2014 at 6:19 PM, Maxim Dounin wrote: > Hello! > > On Thu, Mar 27, 2014 at 05:08:31PM +0530, Makailol Charls wrote: > > > Hi, > > > > Would it be possible to add this as new feature? > > > > Is there some other alternative ? Actually based on this header value I > > want to select named based location. > > Response headers of expires cached responses are not read from a > cache file. 
If you really want this to happen, you may try to > implement this, but I don't it's looks like a generally usable > feature. In most if not all cases it will be just a waste of > resources. > > > > > > > Thanks, > > Makailol > > > > > > On Thu, Mar 27, 2014 at 5:02 PM, Maxim Dounin > wrote: > > > > > Hello! > > > > > > On Thu, Mar 27, 2014 at 03:01:22PM +0530, Makailol Charls wrote: > > > > > > > Hi Maxim, > > > > > > > > Apart from passing cache status to backend, would it be possible to > send > > > > some other headers which are stored in cache? > > > > > > > > For example, If backed sets header "Foo : Bar" , which is stored in > > > cache. > > > > Now when cache is expired , request will be sent to backend. At that > time > > > > can we send the value of Foo header stored in cache to upstream > backend? > > > > > > > > I tried to achieve this with below code but it could not work. > > > > proxy_set_header Foo $upstream_http_Foo; > > > > > > > > Would you suggest me how to achieve this or what am I doing wrong > here. > > > > > > This is not something possible. > > > > > > > > > > > Thanks, > > > > Makailol > > > > > > > > > > > > > > > > > > > > On Thu, Mar 20, 2014 at 6:56 PM, Maxim Dounin > > > wrote: > > > > > > > > > Hello! > > > > > > > > > > On Thu, Mar 20, 2014 at 09:38:40AM +0530, Makailol Charls wrote: > > > > > > > > > > > Hi, > > > > > > > > > > > > Is there some way to achieve this? I want to pass requests to > backend > > > > > based > > > > > > on cache status condition. > > > > > > > > > > This is not something easily possible, as cache status is only > > > > > known after we started processing proxy_pass and already know > > > > > which backend will be used. (Note that by default proxy_cache_key > > > > > uses $proxy_host, which wouldn't be known otherwise.) > > > > > > > > > > If you want to check BYPASS as in your previous message, I would > > > > > recommend checking relevant conditions from proxy_cache_bypass > > > > > separately. As a more generic though less effective aproach, an > > > > > additional proxy layer may be used. > > > > > > > > > > -- > > > > > Maxim Dounin > > > > > http://nginx.org/ > > > > > > > > > > _______________________________________________ > > > > > nginx mailing list > > > > > nginx at nginx.org > > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 2 13:46:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Apr 2014 17:46:59 +0400 Subject: problem with echo_before when proxying a server which sends gzipped content In-Reply-To: References: Message-ID: <20140402134659.GD34696@mdounin.ru> Hello! 
On Wed, Apr 02, 2014 at 02:48:33PM +0200, Carsten Germer wrote: > Hi everyone, > currently I'm, trying to configure NGINX as a proxy for JSON > from the iTunes API. > It's for a small game, iTunes is slow sometimes and the data for > the game is mostly the same for a good length of time, anyway. > The JSON from iTunes is to be padded with the original requests > callback parameters. For this there are many good posts out on > the net, but I can't seem to get the basic echo_* to work. > > I boiled my configuration of nginx down to the point where I > just use echo_before or echo_after and proxy_pass. > > If I append something with echo_after it works fine in browsers > and in jQuery. > If I prepend anything with echo_before the answer can't be read > by browsers, "curl --compressed" throws "curl: (23) Error while > processing content unencoding: invalid block type". > > If I configure Firefox with "about:config" to > "network.http.accept-encoding:true" it fixes fixes display in > Firefox. > > When I look in the network tab of chrome console I see that > requesting ".../echo-after/" closes the request after 2Xms. > Requesting ".../echo-before" also gets 200 ok but never arrives > fully, is shown as "pending" indefinitely. > > My best bet is, that it has something to do with gzip-compressed > answer from iTunes but I can't find any solution or even hint > for my level of understanding of the inner workings of nginx. Something like proxy_set_header Accept-Encoding ""; in relevant location should help. BTW, you may try add_before_body / add_after_body as available in standard addition filter module instead, see here: http://nginx.org/en/docs/http/ngx_http_addition_module.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 2 13:59:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Apr 2014 17:59:53 +0400 Subject: how to use keepalive with Nginx revers proxy? In-Reply-To: References: Message-ID: <20140402135953.GE34696@mdounin.ru> Hello! On Wed, Apr 02, 2014 at 06:55:42PM +0530, Makailol Charls wrote: > Hi, > > Can some one provide me an example to set keep alive connection between > Nginx(reverse proxy) and backend server? > > I can not use upstream module as my backend IP is dynamic based on > variable. So I can not use keepalive directive of upstream. > > I have used below directive in location block. > proxy_pass http://$IP ; Use of keepalive connections require upstream{} block to be defined, see here for examples: http://nginx.org/r/keepalive As long as list of backend ip addresses is limited, you may define appropriate upstream{} blocks for each backend, and use upstream's name in a variable, e.g.: upstream backend1 { server 192.168.0.1; keepalive 2; } ... map $IP $backend { 192.168.0.1 backend1; ... } location / { proxy_pass http://$backend; proxy_http_version 1.1; proxy_set_header Connection ""; } -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 2 14:35:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Apr 2014 18:35:59 +0400 Subject: Proxying large downloads from s3 In-Reply-To: References: Message-ID: <20140402143558.GF34696@mdounin.ru> Hello! On Tue, Apr 01, 2014 at 07:59:45PM +0200, Sudara Williams wrote: [...] > interrupted. But we would appreciate any advice about proxying large > files from s3. Should proxy_buffering be on or off? There is no need to switch off proxy_buffering unless you are doing streaming and/or long polling. 
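For a plain download proxy like this, a minimal location along the following lines should be enough, with buffering simply left at its default (this is only a sketch based on the location you posted):

location /download/ {
    internal;
    proxy_pass https://s3.amazonaws.com;
    # proxy_buffering defaults to "on"; there is no need to set it explicitly
}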
In most cases, proxy_buffering as seen in various configs is misused to disable disk buffering. This is wrong, proxy_max_temp_file_size should be used to control disk buffering. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 2 14:53:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Apr 2014 18:53:08 +0400 Subject: Keepalive not working with upstream tcp server In-Reply-To: <1396425667.98298.YahooMailNeo@web163501.mail.gq1.yahoo.com> References: <1396425667.98298.YahooMailNeo@web163501.mail.gq1.yahoo.com> Message-ID: <20140402145308.GG34696@mdounin.ru> Hello! On Wed, Apr 02, 2014 at 01:01:07AM -0700, Jayadev C wrote: > I am trying to use nginx to proxy my requests to a custom tcp > backend server that I have. I am following the same model as the > default memcached module with in the nginx code base (1.5.10) , > the relevant config file attached.? Tried with most of the > keepalive options but I still see new connections getting > created to upstream server. (rather I do see the connection > getting closed by nginx in strace) [...] > ???????????????? my_pass my_backend; Looks like you did something wrong in you module. Note that memcached module explicitly sets u->keepalive when it's ok to keep a connection alive. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Wed Apr 2 15:05:36 2014 From: lists at ruby-forum.com (Sudara Williams) Date: Wed, 02 Apr 2014 17:05:36 +0200 Subject: Proxying large downloads from s3 In-Reply-To: References: Message-ID: <07d8888497b33ee576594faff540c48c@ruby-forum.com> Thanks Maxim! That is what I suspected with regards to proxy_buffering, as it is in line with your other responses on the list. With regards to early terminated / truncated large files ??is this something you or anyone else has seen before? I'll see if I can get some better logging going on and report back. Might be tough to correlate failed requests in production with log entries, but I'll do my best :) Sudara -- Posted via http://www.ruby-forum.com/. From contact at jpluscplusm.com Wed Apr 2 15:46:07 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 2 Apr 2014 16:46:07 +0100 Subject: Transfering url arguments to different location In-Reply-To: References: Message-ID: On 2 April 2014 14:13, B.R. wrote: > If someone had details about reasons not to use post_action (either from the > referenced IIRC discussion or from other sources), I would be very > interested in them. http://forum.nginx.org/read.php?2,213627,213722#msg-213722 and the note on the wiki next to the post_action documentation, which I believe refers to that same post. From mdounin at mdounin.ru Wed Apr 2 15:56:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 2 Apr 2014 19:56:35 +0400 Subject: Proxying large downloads from s3 In-Reply-To: <07d8888497b33ee576594faff540c48c@ruby-forum.com> References: <07d8888497b33ee576594faff540c48c@ruby-forum.com> Message-ID: <20140402155635.GI34696@mdounin.ru> Hello! On Wed, Apr 02, 2014 at 05:05:36PM +0200, Sudara Williams wrote: [...] > With regards to early terminated / truncated large files ??is this > something you or anyone else has seen before? > > I'll see if I can get some better logging going on and report back. > Might be tough to correlate failed requests in production with log > entries, but I'll do my best :) Upstream timeouts certainly may result in truncated responses sent to clients - if a timeout happens in the middle of a response, there is no way how nginx can handle this. 
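If timeouts in the middle of a transfer are the cause, raising the relevant timeouts for that location is the usual first step, for example something along these lines (the values are only an illustration and should be tuned to how long S3 actually needs):

location /download/ {
    internal;
    proxy_pass https://s3.amazonaws.com;
    proxy_connect_timeout 10s;
    proxy_read_timeout 300s;
}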
(The log line you provided is "... while reading a response header from upstream ..." though, and it should result in 504 returned to a client or next upstream tried if there are multiple upstream servers.) Use of "chunked_transfer_encoding off;" is expected to make things worse, as it makes truncation undetectable if Content-Length isn't known. It should not be used unless there are good reasons too - e.g., you have to support broken clients which use HTTP/1.1 but do not understand chunked transfer encoding (see http://nginx.org/r/chunked_transfer_encoding). -- Maxim Dounin http://nginx.org/ From jayadev at ymail.com Wed Apr 2 16:43:49 2014 From: jayadev at ymail.com (Jayadev C) Date: Wed, 2 Apr 2014 09:43:49 -0700 (PDT) Subject: Keepalive not working with upstream tcp server In-Reply-To: <20140402145308.GG34696@mdounin.ru> References: <1396425667.98298.YahooMailNeo@web163501.mail.gq1.yahoo.com> <20140402145308.GG34696@mdounin.ru> Message-ID: <1396457029.12654.YahooMailNeo@web163505.mail.gq1.yahoo.com> I am setting u->keepalive = 1 in my module too, but let me double check that though. Meanwhile , is the keepalive done by http_upstream_keepalive_module ? Is it included by default or do I need to compile explicitly include that module.? I don't see the code going through that module in my request flow. Jai On Wednesday, April 2, 2014 7:53 AM, Maxim Dounin wrote: Hello! On Wed, Apr 02, 2014 at 01:01:07AM -0700, Jayadev C wrote: > I am trying to use nginx to proxy my requests to a custom tcp > backend server that I have. I am following the same model as the > default memcached module with in the nginx code base (1.5.10) , > the relevant config file attached.? Tried with most of the > keepalive options but I still see new connections getting > created to upstream server. (rather I do see the connection > getting closed by nginx in strace) [...] > ???????????????? my_pass my_backend; Looks like you did something wrong in you module.? Note that memcached module explicitly sets u->keepalive when it's ok to keep a connection alive. -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 2 17:21:53 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Wed, 02 Apr 2014 13:21:53 -0400 Subject: Transforming nginx for Windows In-Reply-To: <2b10663b4e075b264f05f5add81b6994.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> <2b10663b4e075b264f05f5add81b6994.NginxMailingListEnglish@forum.nginx.org> Message-ID: <29111e9222c5719f392a61780566f3b4.NginxMailingListEnglish@forum.nginx.org> This is definitely working better now. Thanks for letting me know about the newer version. I didn't notice the newer version because the order the versions were appearing was backwards, but that appear to be corrected now. Anyway, now that I've upgraded, I am running into a different issue. I was hoping you all could tell me if I'm missing something basic... I am trying to run 2 instances of nginx on the same server. They listen on different ips, and I've verified the configs look perfect. I can see the process is only binding to tcp 80 and 443 on the one ip. 
Then, when I go to start the second nginx server, it fails with this: Assertion failed: ngx_shared_sockets->pid==pid, file src/core/nginx.c, line 376 If I stop nginx instance 1 and start instance 2, it starts fine, but when I go and try to start instance 1 now, it then fails with the same error message. Is there something I'm missing here or is there perhaps an issue? These are being run from completely different directories and I can see they are using the pid file in their corresponding directories, so I don't expect it's an issue like that. Thanks for your help, Tony Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248934#msg-248934 From nginx-forum at nginx.us Wed Apr 2 17:47:19 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 02 Apr 2014 13:47:19 -0400 Subject: Transforming nginx for Windows In-Reply-To: <29111e9222c5719f392a61780566f3b4.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> <2b10663b4e075b264f05f5add81b6994.NginxMailingListEnglish@forum.nginx.org> <29111e9222c5719f392a61780566f3b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <77d401a384ff2b551ed9c20f95e60440.NginxMailingListEnglish@forum.nginx.org> tonyschwartz Wrote: ------------------------------------------------------- > This is definitely working better now. Thanks for letting me know > about the newer version. I didn't notice the newer version because > the order the versions were appearing was backwards, but that appear > to be corrected now. Good :) yep, I've managed to get a proper listing like explorer does. > Assertion failed: ngx_shared_sockets Every pool (shared memory and others) is named; with 2 instances you get 2 pools with the same name, which is not allowed. You could hack the name in the .exe for the second instance, but why run 2 instances? Just merge the configs and, if needed, run more workers; up to 64 workers on 32 cores has been tested. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248935#msg-248935 From reallfqq-nginx at yahoo.fr Wed Apr 2 17:50:43 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 2 Apr 2014 19:50:43 +0200 Subject: Transfering url arguments to different location In-Reply-To: References: Message-ID: Thanks Jonathan! However, there are not many details on why it is a 'hack' that should not be trusted or relied upon. I do not rely on the Wiki anymore, and since Maxim specifically says it should be removed... I hope it will be, to prevent unreliable configurations from being shared and propagated based on unofficial directives. Any decision made yet? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Wed Apr 2 18:05:55 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Wed, 02 Apr 2014 14:05:55 -0400 Subject: Transforming nginx for Windows In-Reply-To: <77d401a384ff2b551ed9c20f95e60440.NginxMailingListEnglish@forum.nginx.org> References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> <2b10663b4e075b264f05f5add81b6994.NginxMailingListEnglish@forum.nginx.org> <29111e9222c5719f392a61780566f3b4.NginxMailingListEnglish@forum.nginx.org> <77d401a384ff2b551ed9c20f95e60440.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have money for one powerful test server. I run two instances for availability reasons. For example, let's say I have an integration test environment and an acceptance test environment... On the integration test environment, I want to be able to tinker with the configs and bring the app up and down much more frequently. On the acceptance test environment, testers are hammering this, so I don't want this to come down as I'm testing the integration environment. Why not buy two servers you might ask? Well, $$. This is a smaller outfit and this saves us lots of money. This is just one simple example of how I use the same server for multiple things. There are plenty of others, but this is my standard practice when I don't have lots of resources for servers. Thanks, Tony Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248937#msg-248937 From scott_ribe at elevated-dev.com Wed Apr 2 18:12:28 2014 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Wed, 2 Apr 2014 12:12:28 -0600 Subject: Transforming nginx for Windows In-Reply-To: References: <7bb5e1c41a64ef81e91fdc361619bed3.NginxMailingListEnglish@forum.nginx.org> <85af071dad2e21906caca711cf295868.NginxMailingListEnglish@forum.nginx.org> <5113f36cab1dda06ed73fdae2bc1e038.NginxMailingListEnglish@forum.nginx.org> <2b10663b4e075b264f05f5add81b6994.NginxMailingListEnglish@forum.nginx.org> <29111e9222c5719f392a61780566f3b4.NginxMailingListEnglish@forum.nginx.org> <77d401a384ff2b551ed9c20f95e60440.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7FFD99FC-F710-43CB-A3DC-CF33DC60D62C@elevated-dev.com> Wouldn't you want to use VMs for that? On Apr 2, 2014, at 12:05 PM, tonyschwartz wrote: > I have money for one powerful test server. I run two instances for > availability reasons. > > For example, let's say I have an integration test environment and an > acceptance test environment... > > On the integration test environment, I want to be able to tinker with the > configs and bring the app up and down much more frequently. > > On the acceptance test environment, testers are hammering this, so I don't > want this to come down as I'm testing the integration environment. > > Why not buy two servers you might ask? Well, $$. This is a smaller outfit > and this saves us lots of money. This is just one simple example of how I > use the same server for multiple things. There are plenty of others, but > this is my standard practice when I don't have lots of resources for > servers. 
> > Thanks, > > Tony > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248937#msg-248937 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From hari at cpacket.com Wed Apr 2 18:14:51 2014 From: hari at cpacket.com (Hari Miriyala) Date: Wed, 2 Apr 2014 11:14:51 -0700 Subject: Radius and TACACS+ based authentication In-Reply-To: References: Message-ID: Hi, Thanks for reply, I have looked into further and found that basic PAM module support is available (below is the link). How could we extend this to support RADIUS and TACACS+, any thoughts and ideas please? http://web.iti.upv.es/~sto/nginx/ Regards, Hari On Fri, Mar 28, 2014 at 10:22 AM, itpp2012 wrote: > AFAIK there is only a ldap module for nginx. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,248820,248823#msg-248823 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 2 18:30:47 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Wed, 02 Apr 2014 14:30:47 -0400 Subject: Transforming nginx for Windows In-Reply-To: <7FFD99FC-F710-43CB-A3DC-CF33DC60D62C@elevated-dev.com> References: <7FFD99FC-F710-43CB-A3DC-CF33DC60D62C@elevated-dev.com> Message-ID: <271bbe2c6187c61a27050a875e4cdec6.NginxMailingListEnglish@forum.nginx.org> Not really, you'd need another copy of windows depending on the type of vm, extra licensing, etc. I have been doing this kind of thing very happily for many many years. I like doing it this way and have had very good experiences doing it. Most any kind of app will happily run multiple instances. I would suggest nginx for windows should allow it. Perhaps a config entry for this "shared pool name" property can be added to the configs. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248941#msg-248941 From nginx-forum at nginx.us Wed Apr 2 19:44:45 2014 From: nginx-forum at nginx.us (abstein2) Date: Wed, 02 Apr 2014 15:44:45 -0400 Subject: More Descriptive 502 Errors Message-ID: <5c4392f638544aafa92a004f7a11c170.NginxMailingListEnglish@forum.nginx.org> Every so often I see a handful of errors in my error log, such as: connect() failed (113: No route to host) upstream timed out (110: Connection timed out) upstream sent too big header while reading response header from upstream etc. in each case, when I log the $status variable in nginx, each just shows as a 502 error. Is there any way to retrieve what the actual error is (via variable?) without having to check the error log or is that the only source for this information? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248943,248943#msg-248943 From nginx-forum at nginx.us Wed Apr 2 23:41:13 2014 From: nginx-forum at nginx.us (jakubp) Date: Wed, 02 Apr 2014 19:41:13 -0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: <20140402122041.GA34696@mdounin.ru> References: <20140402122041.GA34696@mdounin.ru> Message-ID: <4665efe85c7e14012bb1f219756bbe5b.NginxMailingListEnglish@forum.nginx.org> > What currently can be used for such a use case is "inactive=" > parameter of the proxy_cache_path directive (see > http://nginx.org/r/proxy_cache_path). 
It ensures that items not > recently requested are removed from the cache, including ones > created with proxy_cache_min_uses. Have you tried tuning it? Hi Maxim Thank you for your response. Yes, that is what I do - try to keep balance between inactive time (which I obviously want to keep as high as possible) and the keys zone size. But this is constant/never-ending effort if the traffic pattern is changing (and it unfortunately is for me...). It would be great if nginx used keys size as an additional trigger to forced_expire resources - to auto-adjust the removal aggression when the traffic profile changes. Regards, Kuba Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237829,248946#msg-248946 From makailol7 at gmail.com Thu Apr 3 04:01:08 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Thu, 3 Apr 2014 09:31:08 +0530 Subject: how to use keepalive with Nginx revers proxy? In-Reply-To: <20140402135953.GE34696@mdounin.ru> References: <20140402135953.GE34696@mdounin.ru> Message-ID: Hi Maxim, Thanks for reply. Number of IPs are not fixed so it is not possible to define upstream and map block I think. I am trying to implement completely dynamic configuration using lua module. Is it possible to use variable in upstream block like this? upstream backend { server $IP; keepalive 2; } location / { proxy_pass http://backend; proxy_http_version 1.1; proxy_set_header Connection ""; } Thanks, Makailol On Wed, Apr 2, 2014 at 7:29 PM, Maxim Dounin wrote: > Hello! > > On Wed, Apr 02, 2014 at 06:55:42PM +0530, Makailol Charls wrote: > > > Hi, > > > > Can some one provide me an example to set keep alive connection between > > Nginx(reverse proxy) and backend server? > > > > I can not use upstream module as my backend IP is dynamic based on > > variable. So I can not use keepalive directive of upstream. > > > > I have used below directive in location block. > > proxy_pass http://$IP ; > > Use of keepalive connections require upstream{} block to be > defined, see here for examples: > > http://nginx.org/r/keepalive > > As long as list of backend ip addresses is limited, you may define > appropriate upstream{} blocks for each backend, and use upstream's > name in a variable, e.g.: > > upstream backend1 { > server 192.168.0.1; > keepalive 2; > } > > ... > > map $IP $backend { > 192.168.0.1 backend1; > ... > } > > location / { > proxy_pass http://$backend; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Thu Apr 3 04:14:41 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Thu, 3 Apr 2014 09:44:41 +0530 Subject: [nginx] Vary header is repeated twice in response In-Reply-To: <20140401110415.GU34696@mdounin.ru> References: <20140401110415.GU34696@mdounin.ru> Message-ID: Hi, I have been facing similar issue of header duplication. Is there any solution for this? Thanks, Makailol On Tue, Apr 1, 2014 at 4:34 PM, Maxim Dounin wrote: > Hello! 
> > On Tue, Apr 01, 2014 at 02:54:56PM +0400, Yury Kirpichev wrote: > > > Hi, > > > > I've got an issue that "Vary" header is repeated twice in response when > > "gzip_vary on" is specified in config file; > > > > My configuration is the following: > > Two instances of nginx are running on different hosts (A and B) > > There is > > location /smth/ { > > proxy_pass http://B/smth; > > } > > > > and > > gzip_vary on; > > in config for host A. > > > > B adds "Vary: Accept-Encoding" in response for http://B/smth > > > > And then if http://A/smth request is performed "Vary" header is returned > > twice in response. > > < Connection: keep-alive > > < Vary: Accept-Encoding > > < Vary: Accept-Encoding > > < date: Tue, 01 Apr 2014 10:02:27 GMT > > < expires: Tue, 01 Apr 2014 10:07:27 GMT > > < server: nginx/1.4.4 > > > > > > Could you please help me to resolve this problem? Is it known issue or it > > is normal behaviour or may be something wrong on my side? > > This is certainly not relevant to nginx-devel@, please use nginx@ > mailing list for such questions (Cc'd). > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 3 09:05:57 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 03 Apr 2014 05:05:57 -0400 Subject: Transforming nginx for Windows In-Reply-To: <271bbe2c6187c61a27050a875e4cdec6.NginxMailingListEnglish@forum.nginx.org> References: <7FFD99FC-F710-43CB-A3DC-CF33DC60D62C@elevated-dev.com> <271bbe2c6187c61a27050a875e4cdec6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5bd2458de00c5387ef525005c2e6a7fd.NginxMailingListEnglish@forum.nginx.org> tonyschwartz Wrote: ------------------------------------------------------- > run multiple instances. I would suggest nginx for windows should > allow it. Perhaps a config entry for this "shared pool name" property > can be added to the configs. I've tried this with the plain basic nginx version 1.5.12 which does not support multiple instances either, it used to a long while back in the 1.2.x ranges. You might be able to run something like xp-mode or one of the other free MS vm's which are license free. And when you have an unused xp license you can easily use this with virtualbox. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248951#msg-248951 From nginx-forum at nginx.us Thu Apr 3 11:03:53 2014 From: nginx-forum at nginx.us (dvdnginx) Date: Thu, 03 Apr 2014 07:03:53 -0400 Subject: XSLT, one XML file and differing URIs In-Reply-To: <20140319140820.GX34696@mdounin.ru> References: <20140319140820.GX34696@mdounin.ru> Message-ID: <38447052caeb801f2868c2a5f4421c79.NginxMailingListEnglish@forum.nginx.org> Hi Martin, Thanks I' look into it, sorry about delay in replying I got side tracked! Cheers, Dave. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248495,248954#msg-248954 From nginx-forum at nginx.us Thu Apr 3 11:10:30 2014 From: nginx-forum at nginx.us (brunoa) Date: Thu, 03 Apr 2014 07:10:30 -0400 Subject: multiple CAs in ssl_client_certificate does not work for me Message-ID: <4b39f6219c274349bf2d237f9f2c009f.NginxMailingListEnglish@forum.nginx.org> Hello, I've seen from the doc and from this post (http://forum.nginx.org/read.php?2,229129,229132#msg-229132) that it is possible to specify multiple CAs in ssl_client_certificate directive. I have nginx version 1.1.19. 
here is my config: server { listen 443; server_name mydomain.com; root /usr/share/nginx/www; ssl on; ssl_certificate /etc/ssl/selfsigned/myssl.crt; ssl_certificate_key /etc/ssl/selfsigned/myssl.key; ssl_client_certificate /etc/ssl/ca.pem; ssl_verify_depth 3; ssl_verify_client on; ssl_ciphers ALL:!ADH:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } } The ca.pem file contains 2 certificates: # cat ca.pem -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- # As far as I can see, the first certificate is checked, but apparently the 2nd isn't. Any idea how I can troubleshoot that ? Thanks, bruno Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248955,248955#msg-248955 From mdounin at mdounin.ru Thu Apr 3 11:34:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Apr 2014 15:34:11 +0400 Subject: ngx_slab_alloc() failed: no memory in cache keys zone "zone-xyz" In-Reply-To: <4665efe85c7e14012bb1f219756bbe5b.NginxMailingListEnglish@forum.nginx.org> References: <20140402122041.GA34696@mdounin.ru> <4665efe85c7e14012bb1f219756bbe5b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140403113411.GJ34696@mdounin.ru> Hello! On Wed, Apr 02, 2014 at 07:41:13PM -0400, jakubp wrote: > > What currently can be used for such a use case is "inactive=" > > parameter of the proxy_cache_path directive (see > > http://nginx.org/r/proxy_cache_path). It ensures that items not > > recently requested are removed from the cache, including ones > > created with proxy_cache_min_uses. Have you tried tuning it? > > Hi Maxim > > Thank you for your response. > Yes, that is what I do - try to keep balance between inactive time (which I > obviously want to keep as high as possible) and the keys zone size. But this In most cases, it doesn't make sense to keep inactive time high. If something isn't requested often enough, it may be better to remove it from cache. > is constant/never-ending effort if the traffic pattern is changing (and it > unfortunately is for me...). It would be great if nginx used keys size as an > additional trigger to forced_expire resources - to auto-adjust the removal > aggression when the traffic profile changes. It does so - if an allocation of a cache node fails, this will trigger a forced expiration of a cache node, and then tries to allocate a node again. This is more an emergency mechanism though (and not guaranteed to work, as another allocation may fail, too), hence alerts are logged in such cases. Recently, it was made possible to avoid slab allocation failures logging, and this is now used by SSL session cache code, as well as limit_req module[1]. This mechanism may be used by proxy_cache to avoid logging, too, yet I'm not yet convinced it actually should. [1] http://hg.nginx.org/nginx/rev/5024d29354f1 -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Apr 3 13:02:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Apr 2014 17:02:31 +0400 Subject: how to use keepalive with Nginx revers proxy? In-Reply-To: References: <20140402135953.GE34696@mdounin.ru> Message-ID: <20140403130231.GL34696@mdounin.ru> Hello! On Thu, Apr 03, 2014 at 09:31:08AM +0530, Makailol Charls wrote: > Hi Maxim, > > Thanks for reply. > > Number of IPs are not fixed so it is not possible to define upstream and > map block I think. I am trying to implement completely dynamic > configuration using lua module. 
> > > Is it possible to use variable in upstream block like this? > upstream backend { > server $IP; > keepalive 2; > } No, this won't work. > > location / { > proxy_pass http://backend; > proxy_http_version 1.1; > proxy_set_header Connection ""; > } > > > Thanks, > Makailol > > > On Wed, Apr 2, 2014 at 7:29 PM, Maxim Dounin wrote: > > > Hello! > > > > On Wed, Apr 02, 2014 at 06:55:42PM +0530, Makailol Charls wrote: > > > > > Hi, > > > > > > Can some one provide me an example to set keep alive connection between > > > Nginx(reverse proxy) and backend server? > > > > > > I can not use upstream module as my backend IP is dynamic based on > > > variable. So I can not use keepalive directive of upstream. > > > > > > I have used below directive in location block. > > > proxy_pass http://$IP ; > > > > Use of keepalive connections require upstream{} block to be > > defined, see here for examples: > > > > http://nginx.org/r/keepalive > > > > As long as list of backend ip addresses is limited, you may define > > appropriate upstream{} blocks for each backend, and use upstream's > > name in a variable, e.g.: > > > > upstream backend1 { > > server 192.168.0.1; > > keepalive 2; > > } > > > > ... > > > > map $IP $backend { > > 192.168.0.1 backend1; > > ... > > } > > > > location / { > > proxy_pass http://$backend; > > proxy_http_version 1.1; > > proxy_set_header Connection ""; > > } > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Apr 3 13:05:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Apr 2014 17:05:39 +0400 Subject: More Descriptive 502 Errors In-Reply-To: <5c4392f638544aafa92a004f7a11c170.NginxMailingListEnglish@forum.nginx.org> References: <5c4392f638544aafa92a004f7a11c170.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140403130538.GM34696@mdounin.ru> Hello! On Wed, Apr 02, 2014 at 03:44:45PM -0400, abstein2 wrote: > Every so often I see a handful of errors in my error log, such as: > > connect() failed (113: No route to host) > upstream timed out (110: Connection timed out) > upstream sent too big header while reading response header from upstream > > etc. > > in each case, when I log the $status variable in nginx, each just shows as a > 502 error. Is there any way to retrieve what the actual error is (via > variable?) without having to check the error log or is that the only source > for this information? The error log is the only place where error details are logged. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Apr 3 13:08:14 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 3 Apr 2014 15:08:14 +0200 Subject: PCRE named captures sill counted in numerical variables list Message-ID: I tried to configure the following location with something like: location ~* "^/([[:alpha:]]{1,8}(?-[[:alpha:]]{1,8})?)(/.*[^/])?/?$" { try_files $uri $uri/ $2/?lang=$1&$args; } ?However, the $2 variable does not catch the last part of the URI as expected (either it catches the named capture or nothing at all, that I do not know nor care).? ?Using $3 instead of $2 does the job.? ?I thought that using named captures allowed for those capture not to be counted in numerical variable? ?. ? 
Am I wrong expecting that? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Apr 3 13:21:49 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 3 Apr 2014 17:21:49 +0400 Subject: PCRE named captures sill counted in numerical variables list In-Reply-To: References: Message-ID: <20140403132149.GN34696@mdounin.ru> Hello! On Thu, Apr 03, 2014 at 03:08:14PM +0200, B.R. wrote: > I tried to configure the following location with something like: > > location ~* > "^/([[:alpha:]]{1,8}(?-[[:alpha:]]{1,8})?)(/.*[^/])?/?$" { > try_files $uri $uri/ $2/?lang=$1&$args; > } > > However, the $2 variable does not catch the last part of the URI as > expected (either it catches the named capture or nothing at all, that I do > not know nor care). > > Using $3 instead of $2 does the job. > > I thought that using named captures allowed for those captures not to be > counted in the numerical variables. > > Am I wrong expecting that? Yes, you are wrong, "man perlre" says: Named groups count in absolute and relative numbering, and so can also be referred to by those numbers. "man pcrepattern" says the same: Named capturing parentheses are still allocated numbers as well as names, exactly as if the names were not present. From the pattern's point of view, it's just a human-friendly alias for a capture. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Apr 3 13:42:55 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 3 Apr 2014 15:42:55 +0200 Subject: PCRE named captures sill counted in numerical variables list In-Reply-To: <20140403132149.GN34696@mdounin.ru> References: <20140403132149.GN34696@mdounin.ru> Message-ID: Thanks Maxim, I just understood my mistake: I was confusing (once again, I think) named captures and subpatterns... What I wanted to use was the OR logic capability of captures while actually not capturing them... The starting question mark of both syntaxes confused me... If anyone is interested in it, the intended (and working) syntax is: location ~* "^/([[:alpha:]]{1,8}(?:-[[:alpha:]]{1,8})?)(/.*[^/])?/?$" { try_files $uri $uri/ $2/?lang=$1&$args; } Working on multi-language URIs on the webserver side... ;o) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 3 14:03:16 2014 From: nginx-forum at nginx.us (tonyschwartz) Date: Thu, 03 Apr 2014 10:03:16 -0400 Subject: Transforming nginx for Windows In-Reply-To: <5bd2458de00c5387ef525005c2e6a7fd.NginxMailingListEnglish@forum.nginx.org> References: <7FFD99FC-F710-43CB-A3DC-CF33DC60D62C@elevated-dev.com> <271bbe2c6187c61a27050a875e4cdec6.NginxMailingListEnglish@forum.nginx.org> <5bd2458de00c5387ef525005c2e6a7fd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <662fd54aa5b598b812d966b417d26fd9.NginxMailingListEnglish@forum.nginx.org> I hear what you're saying and I'm not trying to take away from your great work on this project. I appreciate it very much. It has proven very useful to me. But, for the long term, I still strongly believe the application should be able to be run multiple times on the same Windows server instance.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248973#msg-248973 From carsten.germer at intolabs.net Thu Apr 3 15:14:37 2014 From: carsten.germer at intolabs.net (Carsten Germer) Date: Thu, 3 Apr 2014 17:14:37 +0200 Subject: problem with echo_before when proxying a server which sends gzipped content In-Reply-To: References: Message-ID: <799202C5-E1F5-4A1A-899B-0E9052AB7455@intolabs.net> Hey Maxim, yes, it works with suppressing gzip between nginx and source-server with "proxy_set_header Accept-Encoding "deflate";" Thanks a bunch! I was aiming for a solution that preserves the gzip-compression between source and cache, but I'm caching long time, anyway. > add_before_body / add_after_body I did look at those before but I'd still have to find a way to get the $arg_callback to the URIs and output them with echo for the whole solution. Works very fine with the neutered compression, thanks again! Cheers /Carsten --- Carsten Germer Creative Director intolabs GmbH http://www.intolabs.net/ Am 02.04.2014 um 19:21 schrieb nginx-request at nginx.org: > > Date: Wed, 2 Apr 2014 17:46:59 +0400 > From: Maxim Dounin > To: nginx at nginx.org > Subject: Re: problem with echo_before when proxying a server which > sends gzipped content > Message-ID: <20140402134659.GD34696 at mdounin.ru> > Content-Type: text/plain; charset=us-ascii > > Hello! > > On Wed, Apr 02, 2014 at 02:48:33PM +0200, Carsten Germer wrote: > >> ... >> If I append something with echo_after it works fine in browsers >> and in jQuery. >> If I prepend anything with echo_before the answer can't be read >> by browsers, "curl --compressed" throws "curl: (23) Error while >> processing content unencoding: invalid block type". >> >> If I configure Firefox with "about:config" to >> "network.http.accept-encoding:true" it fixes fixes display in >> Firefox. >> >> When I look in the network tab of chrome console I see that >> requesting ".../echo-after/" closes the request after 2Xms. >> Requesting ".../echo-before" also gets 200 ok but never arrives >> fully, is shown as "pending" indefinitely. >> >> My best bet is, that it has something to do with gzip-compressed >> ... > > Something like > > proxy_set_header Accept-Encoding ""; > > in relevant location should help. > > BTW, you may try add_before_body / add_after_body as available in > standard addition filter module instead, see here: > > http://nginx.org/en/docs/http/ngx_http_addition_module.html > > -- > Maxim Dounin > http://nginx.org/ From nginx-forum at nginx.us Thu Apr 3 17:17:11 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 03 Apr 2014 13:17:11 -0400 Subject: Transforming nginx for Windows In-Reply-To: <662fd54aa5b598b812d966b417d26fd9.NginxMailingListEnglish@forum.nginx.org> References: <7FFD99FC-F710-43CB-A3DC-CF33DC60D62C@elevated-dev.com> <271bbe2c6187c61a27050a875e4cdec6.NginxMailingListEnglish@forum.nginx.org> <5bd2458de00c5387ef525005c2e6a7fd.NginxMailingListEnglish@forum.nginx.org> <662fd54aa5b598b812d966b417d26fd9.NginxMailingListEnglish@forum.nginx.org> Message-ID: > very useful to me. But, for the long term, I still strongly believe > the application should be able to be run multiple times on the same > windows server instance. I agree with you but its not strait forward to get this to work that way, I'll put it on the todo list. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,242426,248979#msg-248979 From nginx-forum at nginx.us Fri Apr 4 00:57:24 2014 From: nginx-forum at nginx.us (sean_at_stitcher) Date: Thu, 03 Apr 2014 20:57:24 -0400 Subject: SSL renegotiation probelm using nginx as reverse proxy to apache Message-ID: <595b3ec20a9504591f83026146cb8c2a.NginxMailingListEnglish@forum.nginx.org> My goal is end-to-end encryption of multiple domains using nginx as a reverse proxy to load balance to multiple backends. Both nginx and apache use the same wildcard cert, eg *.domain.com. The first request to https://abc.domain.com/ works as expected, but a call to https://xyz.domain.com produces the following debug output in the apache logs: [Thu Apr 03 17:17:07 2014] [info] Initial (No.1) HTTPS request received for child 0 (server xyz.domain.com:443) [Thu Apr 03 17:17:07 2014] [debug] ssl_engine_kernel.c(423): [client 10.0.0.115] Reconfigured cipher suite will force renegotiation [Thu Apr 03 17:17:07 2014] [info] [client 10.0.0.115] Requesting connection re-negotiation [Thu Apr 03 17:17:07 2014] [debug] ssl_engine_kernel.c(766): [client 10.0.0.115] Performing full renegotiation: complete handshake protocol (client does support secure renegotiation) [Thu Apr 03 17:17:07 2014] [info] [client 10.0.0.115] Awaiting re-negotiation handshake [Thu Apr 03 17:18:07 2014] [error] [client 10.0.0.115] Re-negotiation handshake failed: Not accepted by client!? with the following in the nginx log: 2014/04/03 17:18:07 [error] 29052#0: *355 upstream timed out (110: Connection timed out) while reading response header from upstream, client: 10.0.0.171, server: xyz.domain.com, request: "GET /index.php HTTP/1.1", upstream: "https://10.0.15.101:443/index.php", host: "xyz.domain.com" 2014/04/03 17:18:07 [info] 29052#0: *355 client 10.0.0.171 closed keepalive connection My nginx config looks like this: http { # Header settings - Keep as much original as possible proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-HTTPS on; upstream svhostcluster { server web1.domain.com:443 max_fails=5 fail_timeout=10s; server web2.domain.com:443 max_fails=5 fail_timeout=10s; least_conn; } include /etc/nginx/conf.d/*.conf; } and /etc/nginx/conf.d/servers.conf ssl_certificate_key /etc/pki/tls/private/wildcard.priv.domain.pem; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM; ssl_prefer_server_ciphers on; server { listen *:443; server_name abc.domain.com; access_log /var/log/nginx/abc.domain.access.log; access_log /var/log/nginx/abc.domain.upstream.access.log upstreamlog; error_log /var/log/nginx/sabc.domain.errors.log debug; ssl on; location / { proxy_pass https://svhostcluster; } } server { listen *:443; server_name xyz.domain.com; access_log /var/log/nginx/xyz.domain.access.log; access_log /var/log/nginx/xyz.domain.access.log upstreamlog; error_log /var/log/nginx/xyz.domain.errors.log debug; ssl on; location / { proxy_pass https://svhostcluster; } } on the apache side, here is the ssl.conf LoadModule ssl_module modules/mod_ssl.so Listen *:443 NameVirtualHost *:443 SSLStrictSNIVHostCheck off ServerName abc.domain.com DocumentRoot "/var/www/abc/html" LogLevel debug ErrorLog logs/abc_ssl_error_log CustomLog logs/abc_ssl_access_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" SSLEngine on SSLProtocol all -SSLv2 SSLHonorCipherOrder On SSLCipherSuite 
ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM SSLCertificateFile /etc/pki/tls/certs/star_domain_com.crt SSLCertificateKeyFile /etc/pki/tls/private/wildcard.priv.domain.pem SSLCertificateChainFile /etc/pki/tls/certs/star_domain_com.crt SSLCACertificateFile /etc/pki/tls/certs/DigiCertCA.crt Options FollowSymLinks AllowOverride All RewriteEngine On Order allow,deny Allow from all ServerName xyz.domain.com DocumentRoot "/var/www/xyz/html" LogLevel debug ErrorLog logs/xyz_ssl_error_log CustomLog logs/xyz_ssl_access_log \ "%t %h %{SSL_PROTOCOL}x %{SSL_CIPHER}x \"%r\" %b" SSLEngine on SSLProtocol all -SSLv2 SSLHonorCipherOrder On SSLCipherSuite ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM SSLCertificateFile /etc/pki/tls/certs/star_domain_com.crt SSLCertificateKeyFile /etc/pki/tls/private/wildcard.priv.domain.pem SSLCertificateChainFile /etc/pki/tls/certs/star_domain_com.crt SSLCACertificateFile /etc/pki/tls/certs/DigiCertCA.crt Options FollowSymLinks AllowOverride All RewriteEngine On Order allow,deny Allow from all I'm not sure I understand why apache wants to renegotiate with nginx, nor why nginx doesn't seem to want to do it (despite apache thinking it can.) Can anyone help? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248982,248982#msg-248982 From nginx-forum at nginx.us Fri Apr 4 08:20:04 2014 From: nginx-forum at nginx.us (phil3361) Date: Fri, 04 Apr 2014 04:20:04 -0400 Subject: Need help: websocket proxy stops working after a while In-Reply-To: References: Message-ID: Hi all, The WebSocket protocol specification (RFC 6455) defines a _protocol_level_ keep-alive functionality (the protocol defines both a 'ping' and a 'pong' frame). Unfortunately, it seems that nginx doesn't take these frames into account to reset its timeout watchdog timer so far... so you have to implement some _application_level_ heartbeat mechanism if you don't want your WebSocket connection to be broken by an intermediate nginx during low-activity periods... This is annoying: such application-level work has quite a heavy resource cost, especially on the server side. This loses some of the most valuable benefits of a WebSocket connection: maintaining an active asynchronous notification channel from the server to the client, just in case... Hope this helps, Philippe. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239809,248984#msg-248984 From contact at jpluscplusm.com Fri Apr 4 08:32:37 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Fri, 4 Apr 2014 09:32:37 +0100 Subject: SSL renegotiation probelm using nginx as reverse proxy to apache In-Reply-To: <595b3ec20a9504591f83026146cb8c2a.NginxMailingListEnglish@forum.nginx.org> References: <595b3ec20a9504591f83026146cb8c2a.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 4 Apr 2014 01:57, "sean_at_stitcher" wrote: > I'm not sure I understand why apache wants to renegotiate with nginx, nor > why nginx doesn't seem to want to do it (despite apache thinking it can.) I vaguely recall seeing (on this list) the suggestion that Apache does this (at least) when a request's post-SSL-negotiation, HTTP/layer-7 details change Apache's idea of where/how the request should be handled. If that's happening here, perhaps Apache is seeing your SSL* settings in different vhosts as being different - even though they aren't really. What happens if you move the SSL* directives up a level? Maybe not the on/off flag - just the cipher/cert/key/info ones. HTH, J -------------- next part -------------- An HTML attachment was scrubbed...
URL: From mdounin at mdounin.ru Fri Apr 4 10:36:49 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Apr 2014 14:36:49 +0400 Subject: Need help: websocket proxy stops working after a while In-Reply-To: References: Message-ID: <20140404103649.GQ34696@mdounin.ru> Hello! On Fri, Apr 04, 2014 at 04:20:04AM -0400, phil3361 wrote: > Hi all, > > The WebSocket protocol specification (RFC 6455) defines a _protocol_level_ > keep-alive functionality (the protocol defines both a 'ping' and a 'pong' > frame). > > Unfortunately, it seems that nginx doesn't take these frames into account to > reset its timeout watchdog timer so far... so you have to implement > some _application_level_ heartbeat mechanism if you don't want your WebSocket > connection to be broken by an intermediate nginx during low-activity periods... > > This is annoying: such application-level work has quite a heavy resource cost, > especially on the server side. This loses some of the most valuable > benefits of a WebSocket connection: maintaining an active asynchronous > notification channel from the server to the client, just in case... You are understanding it wrong. Any ping/pong frame, as well as any other frame in a proxied websocket connection, will reset nginx timeouts. Note that my first response suggests "periodic ping frames from the backend" as one of the possible solutions. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Apr 4 12:05:04 2014 From: nginx-forum at nginx.us (rahul286) Date: Fri, 04 Apr 2014 08:05:04 -0400 Subject: map v/s rewrite performance In-Reply-To: <9ddf25f86385955f9e683b2100ed93f2.NginxMailingListEnglish@forum.nginx.org> References: <8FC386F3-2E32-4199-8389-36724456C1F5@sysoev.ru> <9ddf25f86385955f9e683b2100ed93f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <605a9f7ebb96af46d919615bccf32cd0.NginxMailingListEnglish@forum.nginx.org> @Igor Few Updates: > location = old-url-1 { return 301 new-url-1; } is really nice. We can specify 301/302 using it. But I am reading - http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_valid and now I am thinking whether to populate the config file with 1000's of lines like below (using an automated script, no human effort involved) > location = old-url-1 { return 301 new-url-1; } OR simply declare > fastcgi_cache_valid 301 302 max; That is: putting the load in the config file v/s the fastcgi cache? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248659,248994#msg-248994 From nginx-forum at nginx.us Fri Apr 4 13:28:29 2014 From: nginx-forum at nginx.us (mex) Date: Fri, 04 Apr 2014 09:28:29 -0400 Subject: testssl.sh - script to test your ssl-setup from cli Message-ID: <28b0a97c178547922ca4eaea9e164268.NginxMailingListEnglish@forum.nginx.org> web: https://testssl.sh/ repo: https://bitbucket.org/nginx-goodies/testssl.sh testssl.sh is a free Unix command line tool which checks a server's service on any port for the support of TLS/SSL ciphers and protocols, as well as some cryptographic flaws. It's designed to provide clear output for an "is this good or bad" decision. It works on every Linux distribution which has OpenSSL installed. As some distributors phase out the buggy stuff for security reasons (and this is exactly what you want to check for), it's recommended to compile OpenSSL yourself or check out the OpenSSL binaries below (Linux). You will get a warning though if your OpenSSL client cannot perform a specific check, see below.
testssl.sh is portable; it is supposed to work on any other Unix system (preferably with GNU tools) and on Cygwin, provided it can find the OpenSSL binary. disclaimer: I'm not the creator of that script; I'm just maintaining the repo. The owner & contact can be found on the webpage https://testssl.sh/ regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248997,248997#msg-248997 From nginx.org at maclemon.at Fri Apr 4 15:04:37 2014 From: nginx.org at maclemon.at (MacLemon) Date: Fri, 4 Apr 2014 17:04:37 +0200 Subject: testssl.sh - script to test your ssl-setup from cli In-Reply-To: <28b0a97c178547922ca4eaea9e164268.NginxMailingListEnglish@forum.nginx.org> References: <28b0a97c178547922ca4eaea9e164268.NginxMailingListEnglish@forum.nginx.org> Message-ID: There is also cipherscan by Julien Vehent (with a bunch of patches by mzeltner and me). https://github.com/mzeltner/cipherscan The original repo doesn't yet include our pull request: https://github.com/jvehent/cipherscan It works with any *nix or *tux with OpenSSL. (Tested with Debian, OS X, Solaris and FreeBSD.) You can specify which openssl binary you want to use to enumerate ciphers and protocols. It also gives details about DH parameters, key exchange and PFS. Feedback is welcome! Best regards Pepi From nginx-forum at nginx.us Fri Apr 4 15:28:03 2014 From: nginx-forum at nginx.us (mex) Date: Fri, 04 Apr 2014 11:28:03 -0400 Subject: testssl.sh - script to test your ssl-setup from cli In-Reply-To: References: Message-ID: thanx, nice tool! I integrated this into our ssl-guide https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#testing-ssl-setups Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248997,249000#msg-249000 From kmoe66 at gmail.com Fri Apr 4 19:33:39 2014 From: kmoe66 at gmail.com (Knut Moe) Date: Fri, 4 Apr 2014 13:33:39 -0600 Subject: NginX on Ubuntu 12.04 Message-ID: I am attempting to install NginX on Ubuntu 12.04 using the instructions found at the following link: http://wiki.nginx.org/Install but I am getting various error messages. Does anyone have updated instructions for 12.04? Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Fri Apr 4 20:00:20 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 4 Apr 2014 22:00:20 +0200 Subject: NginX on Ubuntu 12.04 In-Reply-To: References: Message-ID: > I am attempting to install NginX on Ubuntu 12.04 using the instructions found > at the following link: > > http://wiki.nginx.org/Install > > but I am getting various error messages. > > Does anyone have updated instructions for 12.04? http://nginx.org/en/linux_packages.html#stable From c0nw0nk at hotmail.co.uk Fri Apr 4 20:30:24 2014 From: c0nw0nk at hotmail.co.uk (C0nw0nk W0nky) Date: Fri, 4 Apr 2014 21:30:24 +0100 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: , , , , Message-ID: http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing So I tried sharing my hard drives on Windows and serving the content on them from nginx, but nginx returns a 404 not found error every time. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeroen.ooms at stat.ucla.edu Fri Apr 4 20:34:10 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Fri, 4 Apr 2014 13:34:10 -0700 Subject: NginX on Ubuntu 12.04 In-Reply-To: References: Message-ID: On Fri, Apr 4, 2014 at 12:33 PM, Knut Moe wrote: > Does anyone have updated instructions for 12.04?
sudo apt-get install nginx From kworthington at gmail.com Fri Apr 4 20:47:55 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 4 Apr 2014 16:47:55 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: Answered your question here: http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing/22872717#22872717 Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Fri, Apr 4, 2014 at 4:30 PM, C0nw0nk W0nky wrote: > > http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing > > > So I tried sharing my hard drives on Windows and serving the content on > them from nginx, but nginx returns a 404 not found error every time. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 4 20:58:29 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 04 Apr 2014 16:58:29 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: <6347058dc322ed305a7dfe0d4c6ea4a4.NginxMailingListEnglish@forum.nginx.org> Apart from Kevin's answer, if you are running nginx as a service, that service must map the drive letter and then start nginx. When you map a drive as a user, the service running nginx does not have access to that user-mapped drive. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249011#msg-249011 From nginx-forum at nginx.us Fri Apr 4 21:02:33 2014 From: nginx-forum at nginx.us (sean_at_stitcher) Date: Fri, 04 Apr 2014 17:02:33 -0400 Subject: SSL renegotiation probelm using nginx as reverse proxy to apache In-Reply-To: References: Message-ID: <1aae5305841d9d916bb8f517ead95a17.NginxMailingListEnglish@forum.nginx.org> Brilliant! Thanks so much, I was pulling my hair out on this one. Just goes to show you... never rely on the Apache documentation! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248982,249012#msg-249012 From c0nw0nk at hotmail.co.uk Fri Apr 4 22:01:35 2014 From: c0nw0nk at hotmail.co.uk (C0nw0nk W0nky) Date: Fri, 4 Apr 2014 23:01:35 +0100 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: , , , , , , Message-ID: Thanks for the response. I posted my config and tried your way, but still no luck: nginx delivers dynamic content fine, but for the static content that it should be delivering from the mapped hard drive, it just keeps saying 404 not found. Date: Fri, 4 Apr 2014 16:47:55 -0400 Subject: Re: Windows | Nginx Mapped Hard Drive | Network Sharing From: kworthington at gmail.com To: nginx at nginx.org Answered your question here: http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing/22872717#22872717 Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Fri, Apr 4, 2014 at 4:30 PM, C0nw0nk W0nky wrote: http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing So I tried sharing my hard drives on Windows and serving the content on them from nginx, but nginx returns a 404 not found error every time.
_______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Fri Apr 4 22:08:44 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 4 Apr 2014 18:08:44 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: <6A4EFFBF-3D53-4873-9650-5A1D1A12C16D@gmail.com> Replied on SO. Mirroring here: Try removing the root and index blocks from the server block. Leave it only in the location block. -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington > On Apr 4, 2014, at 6:01 PM, C0nw0nk W0nky wrote: > > > Thanks for the response i posted my config and tried you way but still no luck nginx delivers dynamic content fine but as for the static content that it should be delivering from the mapped hard drive it just keeps saying 404 not found. > Date: Fri, 4 Apr 2014 16:47:55 -0400 > Subject: Re: Windows | Nginx Mapped Hard Drive | Network Sharing > From: kworthington at gmail.com > To: nginx at nginx.org > > Answered your question here: > http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing/22872717#22872717 > > > Best regards, > Kevin > -- > Kevin Worthington > kworthington at gmail.com > http://kevinworthington.com/ > http://twitter.com/kworthington > > > On Fri, Apr 4, 2014 at 4:30 PM, C0nw0nk W0nky wrote: > http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing > > > So i tried sharing my hard drives on windows and serving the content on them from nginx but nginx returns a 404 not found error every time. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From c0nw0nk at hotmail.co.uk Fri Apr 4 22:14:04 2014 From: c0nw0nk at hotmail.co.uk (C0nw0nk W0nky) Date: Fri, 4 Apr 2014 23:14:04 +0100 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <6A4EFFBF-3D53-4873-9650-5A1D1A12C16D@gmail.com> References: , , , , , , , , <6A4EFFBF-3D53-4873-9650-5A1D1A12C16D@gmail.com> Message-ID: Same issue removed it from the server block i don't think nginx is compatible with windows network sharing. Subject: Re: Windows | Nginx Mapped Hard Drive | Network Sharing From: kworthington at gmail.com Date: Fri, 4 Apr 2014 18:08:44 -0400 To: nginx at nginx.org Replied on SO. Mirroring here: Try removing the root and index blocks from the server block. Leave it only in the location block. --Kevin Worthingtonkworthington at gmail.comhttp://kevinworthington.com/http://twitter.com/kworthington On Apr 4, 2014, at 6:01 PM, C0nw0nk W0nky wrote: Thanks for the response i posted my config and tried you way but still no luck nginx delivers dynamic content fine but as for the static content that it should be delivering from the mapped hard drive it just keeps saying 404 not found. 
Date: Fri, 4 Apr 2014 16:47:55 -0400 Subject: Re: Windows | Nginx Mapped Hard Drive | Network Sharing From: kworthington at gmail.com To: nginx at nginx.org Answered your question here: http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing/22872717#22872717 Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Fri, Apr 4, 2014 at 4:30 PM, C0nw0nk W0nky wrote: http://stackoverflow.com/questions/22870814/nginx-mapped-hard-drive-network-sharing So i tried sharing my hard drives on windows and serving the content on them from nginx but nginx returns a 404 not found error every time. _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 4 22:32:52 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 04 Apr 2014 18:32:52 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: c0nw0nk Wrote: ------------------------------------------------------- > Same issue removed it from the server block i don't think nginx is > compatible with windows network sharing. It is compatible and works perfectly when done properly, post conf and describe how drives are mapped and in which context nginx is running. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249016#msg-249016 From nginx-forum at nginx.us Fri Apr 4 22:38:03 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 04 Apr 2014 18:38:03 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> Here is my config. 
server { listen 80; listen [::]:80; server_name domain.com www.domain.com; root z:/server/websites/ps/public_www; index index.php index.html index.htm default.html default.htm; location / { root z:/server/websites/ps/public_www; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8000; expires 3s; max_ranges 0; } location ~ \.flv$ { flv; limit_rate 200k; root z:/server/websites/ps/public_www; expires max; } location ~ \.mp4$ { limit_rate 200k; root z:/server/websites/ps/public_www; expires max; } location ~ \.gif$ { limit_rate 50k; root z:/server/websites/ps/public_www; expires max; } location ~* \.(avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|zip|rar)$ { limit_rate 90k; root z:/server/websites/ps/public_www; expires max; } location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|swf|css|js|txt|zip|rar|xml)$ { root z:/server/websites/ps/public_www; expires max; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { return 404; } location ~ ^/(xampp|security|phpmyadmin|licenses|webalizer|server-status|server-info|cpanel|configuration.php) { return 404; } } For my machine running nginx connecting to my mapped hard drive. The hard drive name is Z:/ I Have apache running on port 8000 for php and html files only serving static content. All dynamic content is served by nginx. But when i go to access a nginx file i get given a 404 not found error. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249017#msg-249017 From nginx-forum at nginx.us Fri Apr 4 22:39:51 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 04 Apr 2014 18:39:51 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> References: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Sorry made a mistake and can't edit my previous post. Nginx handles all static content and Apache handles all dynamic content. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249018#msg-249018 From kworthington at gmail.com Fri Apr 4 22:55:52 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 4 Apr 2014 18:55:52 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Try: server { listen 80; listen [::]:80; server_name domain.com www.domain.com; # removed lines here... 
location / { root z:/server/websites/ps/public_www; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8000; expires 3s; max_ranges 0; } location ~ \.flv$ { flv; limit_rate 200k; root z:/server/websites/ps/public_www; expires max; } location ~ \.mp4$ { limit_rate 200k; root z:/server/websites/ps/public_www; expires max; } location ~ \.gif$ { limit_rate 50k; root z:/server/websites/ps/public_www; expires max; } location ~* \.(avi|m4v|mov|divx|webm|ogg|mp3|mpeg|mpg|zip|rar)$ { limit_rate 90k; root z:/server/websites/ps/public_www; expires max; } location ~* \.(ico|png|jpg|jpeg|gif|flv|mp4|avi|m4v|mov|divx|webm|ogg| mp3|mpeg|mpg|swf|css|js|txt|zip|rar|xml)$ { root z:/server/websites/ps/public_www; expires max; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { return 404; } location ~ ^/(xampp|security|phpmyadmin|licenses|webalizer|server- status|server-info|cpanel|configuration.php) { return 404; } } Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Fri, Apr 4, 2014 at 6:39 PM, c0nw0nk wrote: > Sorry made a mistake and can't edit my previous post. Nginx handles all > static content and Apache handles all dynamic content. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249008,249018#msg-249018 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 4 22:56:38 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 04 Apr 2014 18:56:38 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ce228df7fe84168396bcaf59ed601f3.NginxMailingListEnglish@forum.nginx.org> Also if it helps my current version of nginx is 1.5.12 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249019#msg-249019 From kworthington at gmail.com Fri Apr 4 22:58:05 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Fri, 4 Apr 2014 18:58:05 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <2ce228df7fe84168396bcaf59ed601f3.NginxMailingListEnglish@forum.nginx.org> References: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> <2ce228df7fe84168396bcaf59ed601f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: I tested it earlier with 1.5.12... Best regards, Kevin -- Kevin Worthington kworthington at gmail.com http://kevinworthington.com/ http://twitter.com/kworthington On Fri, Apr 4, 2014 at 6:56 PM, c0nw0nk wrote: > Also if it helps my current version of nginx is 1.5.12 > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249008,249019#msg-249019 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Apr 4 22:59:50 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 04 Apr 2014 18:59:50 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> References: <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org> Message-ID: I did a simple config: server { listen 80; server_name localhost; root Y:/www.mydomain.nl; index index.php index.html index.htm default.html default.htm; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; #proxy_pass http://127.0.0.1:8000; expires 3s; max_ranges 0; } # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # location ~ /\.ht { return 404; } location ~ ^/(xampp|security|phpmyadmin|licenses|webalizer|server-status|server-info|cpanel|configuration.php) { return 404; } } And accessed http://localhost/Disclaimer.txt which works perfectly, your config is not optimal but the share works as it suppose to. so its definitely a config issue and not a share issue. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249021#msg-249021 From nginx-forum at nginx.us Fri Apr 4 23:00:18 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 04 Apr 2014 19:00:18 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: <15f228ef66c677018448302ab3321ecb.NginxMailingListEnglish@forum.nginx.org> Yep that is my current config and i still recieve a 404 error accessing my static files jpg, mp4, flv etc. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249023#msg-249023 From kmoe66 at gmail.com Fri Apr 4 23:03:02 2014 From: kmoe66 at gmail.com (Knut Moe) Date: Fri, 4 Apr 2014 17:03:02 -0600 Subject: Web GUI Message-ID: I just got NginX installed on Ubuntu and was wondering if there is a Web GUI built-in that can be called from the local IP address with some port number? Thanks again for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 4 23:14:21 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Fri, 04 Apr 2014 19:14:21 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: I think it is a nginx issue it works fine on the localhost and i have full read write and execute access to that hard drive via my remote machine even browsing it and confirm the files exsist nginx keeps saying 404 not found. I think its a bug. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249025#msg-249025 From contact at jpluscplusm.com Fri Apr 4 23:14:49 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 5 Apr 2014 00:14:49 +0100 Subject: Web GUI In-Reply-To: References: Message-ID: On 5 Apr 2014 00:03, "Knut Moe" wrote: > > I just got NginX installed on Ubuntu and was wondering if there is a Web GUI built-in that can be called from the local IP address with some port number? Nginx does not have a GUI built in. It has a status module called stub_status which you can compile in if you wish to get some basic read only information from the running process. J -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From c0nw0nk at hotmail.co.uk Fri Apr 4 23:27:12 2014 From: c0nw0nk at hotmail.co.uk (C0nw0nk W0nky) Date: Sat, 5 Apr 2014 00:27:12 +0100 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: , , <2333c240704f6a5f6d17433bac51810d.NginxMailingListEnglish@forum.nginx.org>, Message-ID: Try it with a mp4,flv,jpg file. Not documents that have a plain text mime type. > To: nginx at nginx.org > Subject: Re: RE: Windows | Nginx Mapped Hard Drive | Network Sharing > From: nginx-forum at nginx.us > Date: Fri, 4 Apr 2014 18:59:50 -0400 > > I did a simple config: > > server { > listen 80; > server_name localhost; > > root Y:/www.mydomain.nl; > index index.php index.html index.htm default.html default.htm; > location / { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_set_header Host $host; > #proxy_pass http://127.0.0.1:8000; > expires 3s; > max_ranges 0; > } > # deny access to .htaccess files, if Apache's document root > # concurs with nginx's one > # > location ~ /\.ht { > return 404; > } > location ~ > ^/(xampp|security|phpmyadmin|licenses|webalizer|server-status|server-info|cpanel|configuration.php) > { > return 404; > } > } > > And accessed http://localhost/Disclaimer.txt > which works perfectly, your config is not optimal but the share works as it > suppose to. > so its definitely a config issue and not a share issue. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249021#msg-249021 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Apr 5 11:04:17 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 05 Apr 2014 07:04:17 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> c0nw0nk Wrote: ------------------------------------------------------- > Try it with a mp4,flv,jpg file. Not documents that have a plain text > mime type. 127.0.0.1 - - [05/Apr/2014:12:58:46 +0200] "GET /29092007003.mp4 HTTP/1.1" 200 245770434 "-" "Mozilla/5.0 (Windows NT CISNSA; Win32; x86) Gecko/20100101 Firefox/28.0" Also works fine as expected. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249031#msg-249031 From nginx-forum at nginx.us Sat Apr 5 11:54:21 2014 From: nginx-forum at nginx.us (justcyber) Date: Sat, 05 Apr 2014 07:54:21 -0400 Subject: How to limit POST request per ip ? Message-ID: <499900be49fc454f4c05473093a2b793.NginxMailingListEnglish@forum.nginx.org> How to limit POST request per ip ? Need some of: limit_except POST { limit_req zone=postlimit burst=10 nodelay; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249032,249032#msg-249032 From nginx-forum at nginx.us Sat Apr 5 16:58:39 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Sat, 05 Apr 2014 12:58:39 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> http://i633.photobucket.com/albums/uu52/C0nw0nk/Untitled9.png If you look at that picturei think that is why i have a 404 error. 
Because when i first connect to the drive all works fine i can access the media, Then i restart nginx and it says 404 not found. What is the file path to delete the caches of the windows file shares. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249034#msg-249034 From nginx-forum at nginx.us Sat Apr 5 17:15:59 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 05 Apr 2014 13:15:59 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> Message-ID: c0nw0nk Wrote: ------------------------------------------------------- > Because when i first connect to the drive all works fine i can access > the media, Then i restart nginx and it says 404 not found. > > What is the file path to delete the caches of the windows file shares. There is no cache as such, shares can get in a disconnect state (the infamous red cross), see http://support.microsoft.com/kb/297684 The best way to access shared media is via its UNC, ea. \\192.168.1.10\sharename\media and have guest access enabled. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249035#msg-249035 From nginx-forum at nginx.us Sat Apr 5 17:34:52 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Sat, 05 Apr 2014 13:34:52 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> Message-ID: <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> Thanks for the information and sorry for making so many posts and such a fuss over it. I think i may have found the true culprit behind this silly error. In my Http server section of my nginx config i had this. #open_file_cache max=900000 inactive=10m; #open_file_cache_valid 20m; #open_file_cache_min_uses 1; #open_file_cache_errors on; I nulled out my open file cache and what do you know i can restart nginx as much as i like and no issues. :) So it is open_file_cache that is incompatible with the network sharing feature on windows. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249036#msg-249036 From nginx-forum at nginx.us Sat Apr 5 19:23:20 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 05 Apr 2014 15:23:20 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <892f244a578c9e0c977cb9b9906e444f.NginxMailingListEnglish@forum.nginx.org> c0nw0nk Wrote: ------------------------------------------------------- > I nulled out my open file cache and what do you know i can restart > nginx as much as i like and no issues. :) > > So it is open_file_cache that is incompatible with the network sharing > feature on windows. 
It all depends what kind of host the sharing is done from, for instance here we use Debian(vm) as storage concentrator, nginx connects to Debian and Debian connects and handles unlimited storage units creating a big pool, and we use open_file_cache without issues. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249039#msg-249039 From nginx-forum at nginx.us Sat Apr 5 19:24:36 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 05 Apr 2014 15:24:36 -0400 Subject: [ANN] Windows nginx 1.5.13.1 Snowman Message-ID: 0:18 5-4-2014 nginx 1.5.13.1 Snowman .-= This Is Snowman =-. Here's a little snowman fast and fat, here's it's power as fast as a cat When you run Windows you can hear it shout, take me in try me out! The nginx Snowman release is here! Based on nginx 1.5.13 (3-4-2014) with; + A fix for ssl_session_cache via trac ticket #528, thanks to Maxim! + Stability fixes, more performance tuning + multiple workers now use an api (efficiency and control) + Streaming with nginx-rtmp-module, v1.1.4 (upgraded 3-4-2014) + Naxsi WAF v0.53-1 (upgraded 3-4-2014, conf\naxsi_core.rules id 15+16) + LuaJIT-2.0.3 (upgraded 31-3-2014) Tnx to Mike Pall for his hard work! + lua51.dll (upgraded 31-3-2014) DO NOT FORGET TO REPLACE THIS FILE ! + lua-nginx-module v0.9.7 (upgraded 3-4-2014) + FAQ included in archive + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Additional specifications are like 20:29 18-3-2014 nginx 1.5.12.2 Cheshire Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249040,249040#msg-249040 From mdounin at mdounin.ru Sat Apr 5 22:07:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 6 Apr 2014 02:07:55 +0400 Subject: How to limit POST request per ip ? In-Reply-To: <499900be49fc454f4c05473093a2b793.NginxMailingListEnglish@forum.nginx.org> References: <499900be49fc454f4c05473093a2b793.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140405220755.GV34696@mdounin.ru> Hello! On Sat, Apr 05, 2014 at 07:54:21AM -0400, justcyber wrote: > How to limit POST request per ip ? > > Need some of: > > limit_except POST { Just a side note: "limit_except POST" means the opposite to what you ask above. > limit_req zone=postlimit burst=10 nodelay; > } It is possible to limit only subset of requests by using the fact that limit_req doesn't limit anything if it's variable evaluates to an empty string, see http://nginx.org/r/limit_req_zone. That is, instead of limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s; we need something like limit_req_zone $limit zone=one:10m rate=1r/s; where the $limit variables is empty for non-POST requests (as we don't want to limit them), and evaluates to $binary_remote_addr for POST requests. 
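For completeness, a rough sketch of the whole thing in one place, combining the zone above with the map given just below and the burst=10 nodelay setting from the original question (the listen port and location are illustrative):

    http {
        map $request_method $limit {
            default "";
            POST    $binary_remote_addr;
        }

        # requests whose key is empty are not accounted,
        # so only POST requests hit the limit
        limit_req_zone $limit zone=one:10m rate=1r/s;

        server {
            listen 80;

            location / {
                limit_req zone=one burst=10 nodelay;
                # ... usual proxy_pass / fastcgi_pass goes here
            }
        }
    }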
Such a variable can be easily constructed using the map module (see http://nginx.org/r/map): map $request_method $limit { default ""; POST $binary_remote_addr; } -- Maxim Dounin http://nginx.org/ From igor at sysoev.ru Sun Apr 6 08:01:54 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Sun, 6 Apr 2014 12:01:54 +0400 Subject: map v/s rewrite performance In-Reply-To: <605a9f7ebb96af46d919615bccf32cd0.NginxMailingListEnglish@forum.nginx.org> References: <8FC386F3-2E32-4199-8389-36724456C1F5@sysoev.ru> <9ddf25f86385955f9e683b2100ed93f2.NginxMailingListEnglish@forum.nginx.org> <605a9f7ebb96af46d919615bccf32cd0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9AD369F4-5AB8-4FDB-90FD-7595D7A2AEA1@sysoev.ru> On Apr 4, 2014, at 16:05 , rahul286 wrote: > @Igor > > Few Updates: > >> location = old-url-1 { return 301 new-url-1; } > > is really nice. We can specify 301/302 using it. > > But I am reading - > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#fastcgi_cache_valid > and now I am thining weather to populate config file with 1000's of lines > like below (using automated script, no human efforts involved) > >> location = old-url-1 { return 301 new-url-1; } > > OR simply declare > >> fastcgi_cache_valid 301 302 max; > > That is: Putting load in config file v/s fastcgi-cache? Exact locations are faster. -- Igor Sysoev From nginx-forum at nginx.us Sun Apr 6 13:49:29 2014 From: nginx-forum at nginx.us (rahul286) Date: Sun, 06 Apr 2014 09:49:29 -0400 Subject: map v/s rewrite performance In-Reply-To: <9AD369F4-5AB8-4FDB-90FD-7595D7A2AEA1@sysoev.ru> References: <9AD369F4-5AB8-4FDB-90FD-7595D7A2AEA1@sysoev.ru> Message-ID: <6da143480bc3b89ef50ba8c22bb80512.NginxMailingListEnglish@forum.nginx.org> > Exact locations are faster. Thanks again. We will go with exact locations. :-) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248659,249046#msg-249046 From nginx-forum at nginx.us Sun Apr 6 16:30:27 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Sun, 06 Apr 2014 12:30:27 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: <892f244a578c9e0c977cb9b9906e444f.NginxMailingListEnglish@forum.nginx.org> References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> <892f244a578c9e0c977cb9b9906e444f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I also did use the Microsoft fix it that you posted itpp2012 so as of what actually fixed it i think it was probably the Microsoft option but i don't need the file cache anyway. Thanks for all the help much appreciated. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249047#msg-249047 From nginx-forum at nginx.us Sun Apr 6 21:53:05 2014 From: nginx-forum at nginx.us (josh11) Date: Sun, 06 Apr 2014 17:53:05 -0400 Subject: Do I need nginx in the web application host? Message-ID: <8acbd6c4546f8d79afbebb16138a547f.NginxMailingListEnglish@forum.nginx.org> If I have a host witn nginx (load balancing and SSL termination), do i need nginx inside the host of each of my webapps? (Assuming I will use cdn for static assets). Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249048,249048#msg-249048 From nginx-forum at nginx.us Sun Apr 6 22:08:51 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Sun, 06 Apr 2014 18:08:51 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> <892f244a578c9e0c977cb9b9906e444f.NginxMailingListEnglish@forum.nginx.org> Message-ID: After doing some more testing there is one way it does not work still. When Nginx runs on windows it runs under your user account. For example my account name is root. So it says nginx is running under the root user. But when i restart the server(local machine) it says nginx is running under the SYSTEM user and that is when all files other than plain text give of a 404 not found error. Its a interesting issue maybe the SYSTEM user group in windows does not have access to the mapped hard drives ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249049#msg-249049 From reallfqq-nginx at yahoo.fr Mon Apr 7 00:32:28 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 7 Apr 2014 02:32:28 +0200 Subject: Do I need nginx in the web application host? In-Reply-To: <8acbd6c4546f8d79afbebb16138a547f.NginxMailingListEnglish@forum.nginx.org> References: <8acbd6c4546f8d79afbebb16138a547f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Nginx can proxy HTTP requests to backend through the proxy module directives . If your backend is able to handle those requests on its own (ie if it is able to handle connections without any webserver), then make nginx as a load-balancer directly talk with it. If your Web applications need support from a webserver, you could use any (nginx being one of the best) to talk with the frontend. Hope I helped, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Apr 7 00:42:14 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 7 Apr 2014 02:42:14 +0200 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> <892f244a578c9e0c977cb9b9906e444f.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have little knowledge about Windows, but I know that the SYSTEM account is usually related to processes running as services. SYSTEM has restricted rights on number of things, despite appearing as a 'super-account', an attempt from Windows to mitigate services 'super-powers'. It would not surprise me if SYSTEM was not authorized to access mapped drives. Try changing the user executing the service or try running nginx as a normal user process. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
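To make B.R.'s reply above (in the "Do I need nginx in the web application host?" thread) a bit more concrete, a minimal sketch of nginx as the load balancer talking directly to application backends; all addresses, ports and names here are made up for illustration:

    upstream app_backends {
        server 10.0.0.11:3000;   # app servers that speak HTTP themselves,
        server 10.0.0.12:3000;   # so no extra nginx is needed on those hosts
    }

    server {
        listen      443 ssl;
        server_name example.com;
        # ssl_certificate / ssl_certificate_key as usual

        location / {
            proxy_set_header Host            $host;
            proxy_set_header X-Real-IP       $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass       http://app_backends;
        }
    }

If the applications cannot serve HTTP on their own (plain PHP scripts, for example), they still need some webserver behind the balancer, which is the distinction drawn in that reply.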
URL: From nginx-forum at nginx.us Mon Apr 7 07:24:37 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 07 Apr 2014 03:24:37 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: <11e79cd453688648774d8d11331a68b2.NginxMailingListEnglish@forum.nginx.org> <533e3cc96dfea0e4b1a83de376661a22.NginxMailingListEnglish@forum.nginx.org> <909e00e2fe1ed8787d72d7c2f16217b4.NginxMailingListEnglish@forum.nginx.org> <892f244a578c9e0c977cb9b9906e444f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6cb4db5422467b3ba21b5d6d079ac7bf.NginxMailingListEnglish@forum.nginx.org> c0nw0nk Wrote: > Its a interesting issue maybe the SYSTEM user group in windows does > not have access to the mapped hard drives ? This is default behavior, ea: http://stackoverflow.com/questions/13178892/access-file-from-shared-folder-from-windows-service http://stackoverflow.com/questions/659013/accessing-a-shared-file-unc-from-a-remote-non-trusted-domain-with-credentials OTOH, never run any service as SYSTEM if there is no need for it, always create a user for a service and limits its rights (jail it). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249053#msg-249053 From nginx-forum at nginx.us Mon Apr 7 11:38:00 2014 From: nginx-forum at nginx.us (zajca) Date: Mon, 07 Apr 2014 07:38:00 -0400 Subject: Nodejs websocket 502 bad gateway Message-ID: I'm trying to make work nginx 1.4.7 with nodejs websockets but I'm getting 502 bad gateway NGINX Error: [error] 2394#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 127.0.0.1, server: xxx.cz, request: "GET / HTTP/1.1", upstream: "http://127.0.0.1:8090/", host: "xxx.cz" my conf: upstream xxx { server 127.0.0.1:8090; } # the nginx server instance server { listen 8085; server_name xxx.cz xxx; ssl on; #ssl_certificate /etc/ssl/xxx/xxx.cz.pem; ssl_certificate /etc/ssl/xxx/xxx.cz.crt; ssl_certificate_key /etc/ssl/xxx/xxx.cz.key; access_log /var/log/nginx/xxx.log; # pass the request two the node.js server with the correct headers and much more can be added, see nginx config options location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; #WEBSOCKET proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_pass http://xxx; proxy_redirect off; } } THIS IS?CURL?cmd what I'm using If I use curl directly without nginx it's working fine. curl -i -N -vv -H "Connection: Upgrade" -H "Upgrade: websocket" -H "Host: xxx.cz" -H "Origin: https://xxx.cz" -k https://127.0.0.1:8085 RESULT: * About to connect() to 127.0.0.1 port 8085 (#0) * Trying 127.0.0.1... 
connected * successfully set certificate verify locations: * CAfile: none CApath: /etc/ssl/certs * SSLv3, TLS handshake, Client hello (1): * SSLv3, TLS handshake, Server hello (2): * SSLv3, TLS handshake, CERT (11): * SSLv3, TLS handshake, Server key exchange (12): * SSLv3, TLS handshake, Server finished (14): * SSLv3, TLS handshake, Client key exchange (16): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSLv3, TLS change cipher, Client hello (1): * SSLv3, TLS handshake, Finished (20): * SSL connection using ECDHE-RSA-AES256-SHA * Server certificate: * subject: serialNumber=aCFUcgALEf6y9h5BHsbSHjMYomt-k6ZQ; OU=GT30082937; OU=See www.rapidssl.com/resources/cps (c)13; OU=Domain Control Validated - RapidSSL(R); CN=xxx.cz * start date: 2013-09-30 06:29:47 GMT * expire date: 2014-10-03 05:32:07 GMT * issuer: C=US; O=GeoTrust, Inc.; CN=RapidSSL CA * SSL certificate verify result: unable to get local issuer certificate (20), continuing anyway. > GET / HTTP/1.1 > User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3 > Accept: */* > Connection: Upgrade > Upgrade: websocket > Host: xxx.cz > Origin: https://xxx.cz > < HTTP/1.1 502 Bad Gateway HTTP/1.1 502 Bad Gateway < Server: nginx/1.4.7 Server: nginx/1.4.7 < Date: Mon, 07 Apr 2014 11:19:01 GMT Date: Mon, 07 Apr 2014 11:19:01 GMT < Content-Type: text/html Content-Type: text/html < Content-Length: 172 Content-Length: 172 < Connection: keep-alive Connection: keep-alive < 502 Bad Gateway

502 Bad Gateway
nginx/1.4.7
* Connection #0 to host 127.0.0.1 left intact * Closing connection #0 * SSLv3, TLS alert, Client hello (1): Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249059,249059#msg-249059 From mdounin at mdounin.ru Mon Apr 7 11:53:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Apr 2014 15:53:14 +0400 Subject: Nodejs websocket 502 bad gateway In-Reply-To: References: Message-ID: <20140407115313.GB34696@mdounin.ru> Hello! On Mon, Apr 07, 2014 at 07:38:00AM -0400, zajca wrote: > I'm trying to make work nginx 1.4.7 with nodejs websockets > but I'm getting 502 bad gateway > > NGINX Error: > [error] 2394#0: *1 upstream prematurely closed connection while reading > response header from upstream, client: 127.0.0.1, server: xxx.cz, request: > "GET / HTTP/1.1", upstream: "http://127.0.0.1:8090/", host: "xxx.cz" As per the error message, you backend closes connection for some reason, instead of returning proper 101 Switching Protocols response. You may want to look into your backend to find out why it does so. -- Maxim Dounin http://nginx.org/ From pcgeopc at gmail.com Mon Apr 7 14:34:39 2014 From: pcgeopc at gmail.com (Geo P.C.) Date: Mon, 7 Apr 2014 20:04:39 +0530 Subject: Strange nginx issue Message-ID: We are facing a strange issue on our servers. We have servers with 1GB RAM and some drupal sites are running on it. Generally all sites are loading fine but sometimes we are unable to access any sites. After waiting for 10mts we are getting a 502 gateway timeout error. In middle when we restart either nginx or php5-fpm it will load. Our configurations are as follows: /etc/nginx/nginx.conf: user www-data; worker_processes 1; pid /run/nginx.pid; worker_rlimit_nofile 400000; events { worker_connections 10000; multi_accept on; use epoll; } http { access_log off; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 2; types_hash_max_size 2048; server_tokens off; keepalive_requests 100000; reset_timedout_connection on; port_in_redirect off; client_max_body_size 10m; proxy_connect_timeout 600s; proxy_send_timeout 600s; proxy_read_timeout 600s; fastcgi_send_timeout 600s; fastcgi_read_timeout 600s; open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; /etc/php5/fpm/pool.d/www.conf pm.max_children = 5 pm.start_servers = 2 pm.min_spare_servers = 1 pm.max_spare_servers = 3 ;pm.process_idle_timeout = 10s; ;pm.max_requests = 200 request_terminate_timeout = 300s And please see the added contents in /etc/sysctl.conf ########################## fs.file-max = 150000 net.core.netdev_max_backlog=32768 net.core.optmem_max=20480 #net.core.rmem_default=65536 #net.core.rmem_max=16777216 net.core.somaxconn=50000 #net.core.wmem_default=65536 #net.core.wmem_max=16777216 net.ipv4.tcp_fin_timeout=120 #net.ipv4.tcp_keepalive_intvl=30 #net.ipv4.tcp_keepalive_probes=3 #net.ipv4.tcp_keepalive_time=120 net.ipv4.tcp_max_orphans=262144 net.ipv4.tcp_max_syn_backlog=524288 net.ipv4.tcp_max_tw_buckets=524288 #net.ipv4.tcp_mem=1048576 1048576 2097152 #net.ipv4.tcp_no_metrics_save=1 net.ipv4.tcp_orphan_retries=0 #net.ipv4.tcp_rmem=4096 16384 16777216 #net.ipv4.tcp_synack_retries=2 net.ipv4.tcp_syncookies=1 #net.ipv4.tcp_syn_retries=2 #net.ipv4.tcp_wmem=4096 32768 16777216 ########################## Can anyone please help us on it. Thanks Geo -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jayadev at ymail.com Mon Apr 7 14:45:17 2014 From: jayadev at ymail.com (Jayadev C) Date: Mon, 7 Apr 2014 07:45:17 -0700 (PDT) Subject: Adding custom protocol data while creating new keepalive connections Message-ID: <1396881917.22340.YahooMailNeo@web163502.mail.gq1.yahoo.com> Nginx is proxying requests to my custom tcp server. I have my proxy handler to create the right request format and process headers etc. The trouble started when I started using keepalive handler.? I have to add a custom protocol header bytes for every new keepalive connection and skip the header bytes if nginx is using an existing cached one. Currently create_request is called before getting an upstream connection (from default round robin handler) and there seems to be no callback to my module once the connection is selected/created. Any bright ideas ? Should I provide my one connection handlers to do this or my own keepalive handler itself ? Jai -------------- next part -------------- An HTML attachment was scrubbed... URL: From sherlockhugo at gmail.com Mon Apr 7 14:51:31 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Mon, 7 Apr 2014 09:51:31 -0500 Subject: limit_conn_zone Nginx Unknow error Message-ID: What am I doing wrong here? http { limit_conn_zone $binary_remote_addr zone=one:63m; server { location /downloads/ { limit_conn one 10;} [root at batman1 ~]# service nginx configtest nginx: [emerg] the size 66060288 of shared memory zone "one" conflicts with already declared size 0 in /etc/nginx/nginx.conf:60 nginx: configuration file /etc/nginx/nginx.conf test failed I read the nginx manual online, and it look well. I hope that someone have a tip. -- Un abrazo! *Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Mon Apr 7 15:03:33 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Mon, 07 Apr 2014 16:03:33 +0100 Subject: Strange nginx issue In-Reply-To: References: Message-ID: <26df31bf3e6f9195a2213aec16cd1cc0@swsystem.co.uk> I've just done a drupal7 site under nginx+php-fpm on debian. One thing I noticed was that the php process wasn't closing fast enough, this was tracked down to an issue with mysql. Connections were sitting idle for a long time which basically exhausted the fpm workers on both the web servers. We have 2 mysql nodes doing replication between them and I think the binlog commit was holding this up, adding "innodb_flush_log_at_trx_commit = 0" to the my.cnf stopped this problem from occuring. Page caching is stored in mysql too, moving this to memcached helped massively and reduced the daily binlogs from 5G/day down to a few hundred meg. I'm not sure if this is a strange setup but we have nginx terminating ssl which proxies to varnish which then has 2 additional nginx nodes serving drupal7, these use a 2 node mysql cluster and 2 memcached nodes for caching pages etc. Steve. On 07/04/2014 15:34, Geo P.C. wrote: > We are facing a strange issue on our servers. We have servers with 1GB RAM and some drupal sites are running on it. > > Generally all sites are loading fine but sometimes we are unable to access any sites. After waiting for 10mts we are getting a 502 gateway timeout error. In middle when we restart either nginx or php5-fpm it will load. 
> > Our configurations are as follows: > > /etc/nginx/nginx.conf: > > user www-data; > > worker_processes 1; > > pid /run/nginx.pid; > > worker_rlimit_nofile 400000; > > events { > > worker_connections 10000; > > multi_accept on; > > use epoll; > > } > > http { > > access_log off; > > sendfile on; > > tcp_nopush on; > > tcp_nodelay on; > > keepalive_timeout 2; > > types_hash_max_size 2048; > > server_tokens off; > > keepalive_requests 100000; > > reset_timedout_connection on; > > port_in_redirect off; > > client_max_body_size 10m; > > proxy_connect_timeout 600s; > > proxy_send_timeout 600s; > > proxy_read_timeout 600s; > > fastcgi_send_timeout 600s; > > fastcgi_read_timeout 600s; > > open_file_cache max=200000 inactive=20s; > > open_file_cache_valid 30s; > > open_file_cache_min_uses 2; > > open_file_cache_errors on; > > /etc/php5/fpm/pool.d/www.conf > > pm.max_children = 5 > > pm.start_servers = 2 > > pm.min_spare_servers = 1 > > pm.max_spare_servers = 3 > > ;pm.process_idle_timeout = 10s; > > ;pm.max_requests = 200 > > request_terminate_timeout = 300s > > And please see the added contents in /etc/sysctl.conf > > ########################## > > fs.file-max = 150000 > > net.core.netdev_max_backlog=32768 > > net.core.optmem_max=20480 > > #net.core.rmem_default=65536 > > #net.core.rmem_max=16777216 > > net.core.somaxconn=50000 > > #net.core.wmem_default=65536 > > #net.core.wmem_max=16777216 > > net.ipv4.tcp_fin_timeout=120 > > #net.ipv4.tcp_keepalive_intvl=30 > > #net.ipv4.tcp_keepalive_probes=3 > > #net.ipv4.tcp_keepalive_time=120 > > net.ipv4.tcp_max_orphans=262144 > > net.ipv4.tcp_max_syn_backlog=524288 > > net.ipv4.tcp_max_tw_buckets=524288 > > #net.ipv4.tcp_mem=1048576 1048576 2097152 > > #net.ipv4.tcp_no_metrics_save=1 > > net.ipv4.tcp_orphan_retries=0 > > #net.ipv4.tcp_rmem=4096 16384 16777216 > > #net.ipv4.tcp_synack_retries=2 > > net.ipv4.tcp_syncookies=1 > > #net.ipv4.tcp_syn_retries=2 > > #net.ipv4.tcp_wmem=4096 32768 16777216 > > ########################## > > Can anyone please help us on it. > > Thanks > > Geo > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx [1] Links: ------ [1] http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From pcgeopc at gmail.com Mon Apr 7 15:18:22 2014 From: pcgeopc at gmail.com (Geo P.C.) Date: Mon, 7 Apr 2014 20:48:22 +0530 Subject: Strange nginx issue In-Reply-To: <26df31bf3e6f9195a2213aec16cd1cc0@swsystem.co.uk> References: <26df31bf3e6f9195a2213aec16cd1cc0@swsystem.co.uk> Message-ID: Thanks Steve for your update. We are using separate mysql server and in this innodb_flush_log_at_trx_commit = 1. This site is running money transactions applications so is it safe to change this option. Also to this mysql server other servers with default nginx and PHP5-fpm configuration is connecting it and they don't have such issues. For optimization we done the above changes and causing this issue. So can you please help me. On Mon, Apr 7, 2014 at 8:33 PM, Steve Wilson wrote: > I've just done a drupal7 site under nginx+php-fpm on debian. > > One thing I noticed was that the php process wasn't closing fast enough, > this was tracked down to an issue with mysql. Connections were sitting idle > for a long time which basically exhausted the fpm workers on both the web > servers. 
> > We have 2 mysql nodes doing replication between them and I think the > binlog commit was holding this up, adding "innodb_flush_log_at_trx_commit > = 0" to the my.cnf stopped this problem from occuring. > > Page caching is stored in mysql too, moving this to memcached helped > massively and reduced the daily binlogs from 5G/day down to a few hundred > meg. > > I'm not sure if this is a strange setup but we have nginx terminating ssl > which proxies to varnish which then has 2 additional nginx nodes serving > drupal7, these use a 2 node mysql cluster and 2 memcached nodes for caching > pages etc. > > Steve. > > On 07/04/2014 15:34, Geo P.C. wrote: > > We are facing a strange issue on our servers. We have servers with 1GB > RAM and some drupal sites are running on it. > > > > Generally all sites are loading fine but sometimes we are unable to access > any sites. After waiting for 10mts we are getting a 502 gateway timeout > error. In middle when we restart either nginx or php5-fpm it will load. > > > > Our configurations are as follows: > > > > /etc/nginx/nginx.conf: > > > > user www-data; > > worker_processes 1; > > pid /run/nginx.pid; > > worker_rlimit_nofile 400000; > > > > events { > > worker_connections 10000; > > multi_accept on; > > use epoll; > > } > > > > http { > > > > access_log off; > > sendfile on; > > tcp_nopush on; > > tcp_nodelay on; > > keepalive_timeout 2; > > types_hash_max_size 2048; > > server_tokens off; > > keepalive_requests 100000; > > reset_timedout_connection on; > > port_in_redirect off; > > client_max_body_size 10m; > > proxy_connect_timeout 600s; > > proxy_send_timeout 600s; > > proxy_read_timeout 600s; > > fastcgi_send_timeout 600s; > > fastcgi_read_timeout 600s; > > open_file_cache max=200000 inactive=20s; > > open_file_cache_valid 30s; > > open_file_cache_min_uses 2; > > open_file_cache_errors on; > > > > /etc/php5/fpm/pool.d/www.conf > > > > pm.max_children = 5 > > pm.start_servers = 2 > > pm.min_spare_servers = 1 > > pm.max_spare_servers = 3 > > ;pm.process_idle_timeout = 10s; > > ;pm.max_requests = 200 > > request_terminate_timeout = 300s > > > > And please see the added contents in /etc/sysctl.conf > > > > ########################## > > fs.file-max = 150000 > > net.core.netdev_max_backlog=32768 > > net.core.optmem_max=20480 > > #net.core.rmem_default=65536 > > #net.core.rmem_max=16777216 > > net.core.somaxconn=50000 > > #net.core.wmem_default=65536 > > #net.core.wmem_max=16777216 > > net.ipv4.tcp_fin_timeout=120 > > #net.ipv4.tcp_keepalive_intvl=30 > > #net.ipv4.tcp_keepalive_probes=3 > > #net.ipv4.tcp_keepalive_time=120 > > net.ipv4.tcp_max_orphans=262144 > > net.ipv4.tcp_max_syn_backlog=524288 > > net.ipv4.tcp_max_tw_buckets=524288 > > #net.ipv4.tcp_mem=1048576 1048576 2097152 > > #net.ipv4.tcp_no_metrics_save=1 > > net.ipv4.tcp_orphan_retries=0 > > #net.ipv4.tcp_rmem=4096 16384 16777216 > > #net.ipv4.tcp_synack_retries=2 > > net.ipv4.tcp_syncookies=1 > > #net.ipv4.tcp_syn_retries=2 > > #net.ipv4.tcp_wmem=4096 32768 16777216 > > ########################## > > > > Can anyone please help us on it. > > > > Thanks > > > Geo > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lists-nginx at swsystem.co.uk Mon Apr 7 15:45:36 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Mon, 07 Apr 2014 16:45:36 +0100 Subject: Strange nginx issue In-Reply-To: References: <26df31bf3e6f9195a2213aec16cd1cc0@swsystem.co.uk> Message-ID: <05bc72f186bd5acb383140d3760c5c35@swsystem.co.uk> A quick read at http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit [2] suggests there's a possibility of losing 1s worth of data. I'm not sure if we'd still have a problem with this now we've moved page caching to memcache as that was causing a lot of updates. Unfortunately I'm at work so can't investigate other variables easily at the moment, I'll hopefully have time this evening though. Steve. On 07/04/2014 16:18, Geo P.C. wrote: > Thanks Steve for your update. We are using separate mysql server and in this innodb_flush_log_at_trx_commit = 1. This site is running money transactions applications so is it safe to change this option. > > Also to this mysql server other servers with default nginx and PHP5-fpm configuration is connecting it and they don't have such issues. For optimization we done the above changes and causing this issue. > > So can you please help me. > > On Mon, Apr 7, 2014 at 8:33 PM, Steve Wilson wrote: > > I've just done a drupal7 site under nginx+php-fpm on debian. > > One thing I noticed was that the php process wasn't closing fast enough, this was tracked down to an issue with mysql. Connections were sitting idle for a long time which basically exhausted the fpm workers on both the web servers. > > We have 2 mysql nodes doing replication between them and I think the binlog commit was holding this up, adding "innodb_flush_log_at_trx_commit = 0" to the my.cnf stopped this problem from occuring. > > Page caching is stored in mysql too, moving this to memcached helped massively and reduced the daily binlogs from 5G/day down to a few hundred meg. > > I'm not sure if this is a strange setup but we have nginx terminating ssl which proxies to varnish which then has 2 additional nginx nodes serving drupal7, these use a 2 node mysql cluster and 2 memcached nodes for caching pages etc. > > Steve. > > On 07/04/2014 15:34, Geo P.C. wrote: > > We are facing a strange issue on our servers. We have servers with 1GB RAM and some drupal sites are running on it. > > Generally all sites are loading fine but sometimes we are unable to access any sites. After waiting for 10mts we are getting a 502 gateway timeout error. In middle when we restart either nginx or php5-fpm it will load. 
> > Our configurations are as follows: > > /etc/nginx/nginx.conf: > > user www-data; > > worker_processes 1; > > pid /run/nginx.pid; > > worker_rlimit_nofile 400000; > > events { > > worker_connections 10000; > > multi_accept on; > > use epoll; > > } > > http { > > access_log off; > > sendfile on; > > tcp_nopush on; > > tcp_nodelay on; > > keepalive_timeout 2; > > types_hash_max_size 2048; > > server_tokens off; > > keepalive_requests 100000; > > reset_timedout_connection on; > > port_in_redirect off; > > client_max_body_size 10m; > > proxy_connect_timeout 600s; > > proxy_send_timeout 600s; > > proxy_read_timeout 600s; > > fastcgi_send_timeout 600s; > > fastcgi_read_timeout 600s; > > open_file_cache max=200000 inactive=20s; > > open_file_cache_valid 30s; > > open_file_cache_min_uses 2; > > open_file_cache_errors on; > > /etc/php5/fpm/pool.d/www.conf > > pm.max_children = 5 > > pm.start_servers = 2 > > pm.min_spare_servers = 1 > > pm.max_spare_servers = 3 > > ;pm.process_idle_timeout = 10s; > > ;pm.max_requests = 200 > > request_terminate_timeout = 300s > > And please see the added contents in /etc/sysctl.conf > > ########################## > > fs.file-max = 150000 > > net.core.netdev_max_backlog=32768 > > net.core.optmem_max=20480 > > #net.core.rmem_default=65536 > > #net.core.rmem_max=16777216 > > net.core.somaxconn=50000 > > #net.core.wmem_default=65536 > > #net.core.wmem_max=16777216 > > net.ipv4.tcp_fin_timeout=120 > > #net.ipv4.tcp_keepalive_intvl=30 > > #net.ipv4.tcp_keepalive_probes=3 > > #net.ipv4.tcp_keepalive_time=120 > > net.ipv4.tcp_max_orphans=262144 > > net.ipv4.tcp_max_syn_backlog=524288 > > net.ipv4.tcp_max_tw_buckets=524288 > > #net.ipv4.tcp_mem=1048576 1048576 2097152 > > #net.ipv4.tcp_no_metrics_save=1 > > net.ipv4.tcp_orphan_retries=0 > > #net.ipv4.tcp_rmem=4096 16384 16777216 > > #net.ipv4.tcp_synack_retries=2 > > net.ipv4.tcp_syncookies=1 > > #net.ipv4.tcp_syn_retries=2 > > #net.ipv4.tcp_wmem=4096 32768 16777216 > > ########################## > > Can anyone please help us on it. > > Thanks > > Geo > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx [1] > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx [1] _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx [1] -- Steve Wilson IT Team Leader - Pirate Party UK OpenDNS 2012 Sysadmin Awards: Flying Solo - Winner +44.7751874508 Pirate Party UK is a political party registered with the Electoral Commission. Links: ------ [1] http://mailman.nginx.org/mailman/listinfo/nginx [2] http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 7 16:02:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Apr 2014 20:02:35 +0400 Subject: limit_conn_zone Nginx Unknow error In-Reply-To: References: Message-ID: <20140407160235.GN34696@mdounin.ru> Hello! On Mon, Apr 07, 2014 at 09:51:31AM -0500, Raul Hugo wrote: > What am I doing wrong here? 
> > http { > limit_conn_zone $binary_remote_addr zone=one:63m; > > server { > location /downloads/ { > limit_conn one 10;} > > [root at batman1 ~]# service nginx configtest > nginx: [emerg] the size 66060288 of shared memory zone "one" conflicts > with already declared size 0 in /etc/nginx/nginx.conf:60 > nginx: configuration file /etc/nginx/nginx.conf test failed > > > I read the nginx manual online, and it look well. I hope that someone have > a tip. >From the message it looks like you've tried to use limit_conn before limit_conn_zone is defined (probably indirectly by using the "include" directive), i.e. wrote something like limit_conn one 10; limit_conn_zone $binary_remote_addr zone=one:63m; -- Maxim Dounin http://nginx.org/ From sherlockhugo at gmail.com Mon Apr 7 16:17:40 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Mon, 7 Apr 2014 11:17:40 -0500 Subject: limit_conn_zone Nginx Unknow error In-Reply-To: <20140407160235.GN34696@mdounin.ru> References: <20140407160235.GN34696@mdounin.ru> Message-ID: Hey Maxim, thx for your answer. On my /etc/nginx/nginx.conf I put this: limit_conn_zone $binary_remote_addr zone=one:63m; And on my .conf of my project located on /etc/nginx/vhost.d/myproject.conf I put this : on the server configuration: location / { limit_conn one 10; } Nginx read the include first, if this line it before the limit_conn_zone directive? 2014-04-07 11:02 GMT-05:00 Maxim Dounin : > Hello! > > On Mon, Apr 07, 2014 at 09:51:31AM -0500, Raul Hugo wrote: > > > What am I doing wrong here? > > > > http { > > limit_conn_zone $binary_remote_addr zone=one:63m; > > > > server { > > location /downloads/ { > > limit_conn one 10;} > > > > [root at batman1 ~]# service nginx configtest > > nginx: [emerg] the size 66060288 of shared memory zone "one" conflicts > > with already declared size 0 in /etc/nginx/nginx.conf:60 > > nginx: configuration file /etc/nginx/nginx.conf test failed > > > > > > I read the nginx manual online, and it look well. I hope that someone > have > > a tip. > > From the message it looks like you've tried to use limit_conn > before limit_conn_zone is defined (probably indirectly by using > the "include" directive), i.e. wrote something like > > limit_conn one 10; > limit_conn_zone $binary_remote_addr zone=one:63m; > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Un abrazo! *Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 7 17:01:34 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Mon, 07 Apr 2014 13:01:34 -0400 Subject: Windows | Nginx Mapped Hard Drive | Network Sharing In-Reply-To: References: Message-ID: <5a2a0e769fa3799d748c16fcf405c2ca.NginxMailingListEnglish@forum.nginx.org> So does anyone know how to edit the SYSTEM account privileges if not i have a way around it anyway. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249008,249082#msg-249082 From mdounin at mdounin.ru Mon Apr 7 17:13:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Apr 2014 21:13:53 +0400 Subject: limit_conn_zone Nginx Unknow error In-Reply-To: References: <20140407160235.GN34696@mdounin.ru> Message-ID: <20140407171353.GR34696@mdounin.ru> Hello! 
On Mon, Apr 07, 2014 at 11:17:40AM -0500, Raul Hugo wrote: > Hey Maxim, thx for your answer. > > On my /etc/nginx/nginx.conf I put this: > > limit_conn_zone $binary_remote_addr zone=one:63m; > > And on my .conf of my project located on /etc/nginx/vhost.d/myproject.conf > > I put this : > > on the server configuration: > > location / { > > limit_conn one 10; > > } > > Nginx read the include first, if this line it before the limit_conn_zone > directive? Include will, literally, include contents of its argument. That is, something like include /path/to/file/with/limit_conn; limit_conn_zone ... is essentially equivalent to limit_conn_zone ... You have to define limit_conn_zone before it's used, and hence before the include of the server configuration. > 2014-04-07 11:02 GMT-05:00 Maxim Dounin : > > > Hello! > > > > On Mon, Apr 07, 2014 at 09:51:31AM -0500, Raul Hugo wrote: > > > > > What am I doing wrong here? > > > > > > http { > > > limit_conn_zone $binary_remote_addr zone=one:63m; > > > > > > server { > > > location /downloads/ { > > > limit_conn one 10;} > > > > > > [root at batman1 ~]# service nginx configtest > > > nginx: [emerg] the size 66060288 of shared memory zone "one" conflicts > > > with already declared size 0 in /etc/nginx/nginx.conf:60 > > > nginx: configuration file /etc/nginx/nginx.conf test failed > > > > > > > > > I read the nginx manual online, and it look well. I hope that someone > > have > > > a tip. > > > > From the message it looks like you've tried to use limit_conn > > before limit_conn_zone is defined (probably indirectly by using > > the "include" directive), i.e. wrote something like > > > > limit_conn one 10; > > limit_conn_zone $binary_remote_addr zone=one:63m; > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > Un abrazo! > > > *Ra?l Hugo * > > > *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. > #961-710-096 Linux Registered User #482081 - http://counter.li.org/ > P Antes de imprimir este e-mail piense bien si es > necesario hacerlo* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From sherlockhugo at gmail.com Mon Apr 7 17:16:19 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Mon, 7 Apr 2014 12:16:19 -0500 Subject: limit_conn_zone Nginx Unknow error In-Reply-To: <20140407171353.GR34696@mdounin.ru> References: <20140407160235.GN34696@mdounin.ru> <20140407171353.GR34696@mdounin.ru> Message-ID: Thx! I resolve my miss configuration. Only changing of position my include to the final of file. 2014-04-07 12:13 GMT-05:00 Maxim Dounin : > Hello! > > On Mon, Apr 07, 2014 at 11:17:40AM -0500, Raul Hugo wrote: > > > Hey Maxim, thx for your answer. > > > > On my /etc/nginx/nginx.conf I put this: > > > > limit_conn_zone $binary_remote_addr zone=one:63m; > > > > And on my .conf of my project located on > /etc/nginx/vhost.d/myproject.conf > > > > I put this : > > > > on the server configuration: > > > > location / { > > > > limit_conn one 10; > > > > } > > > > Nginx read the include first, if this line it before the limit_conn_zone > > directive? > > Include will, literally, include contents of its argument. That > is, something like > > include /path/to/file/with/limit_conn; > limit_conn_zone ... > > is essentially equivalent to > > > limit_conn_zone ... 
> > You have to define limit_conn_zone before it's used, and hence > before the include of the server configuration. > > > 2014-04-07 11:02 GMT-05:00 Maxim Dounin : > > > > > Hello! > > > > > > On Mon, Apr 07, 2014 at 09:51:31AM -0500, Raul Hugo wrote: > > > > > > > What am I doing wrong here? > > > > > > > > http { > > > > limit_conn_zone $binary_remote_addr zone=one:63m; > > > > > > > > server { > > > > location /downloads/ { > > > > limit_conn one 10;} > > > > > > > > [root at batman1 ~]# service nginx configtest > > > > nginx: [emerg] the size 66060288 of shared memory zone "one" > conflicts > > > > with already declared size 0 in /etc/nginx/nginx.conf:60 > > > > nginx: configuration file /etc/nginx/nginx.conf test failed > > > > > > > > > > > > I read the nginx manual online, and it look well. I hope that someone > > > have > > > > a tip. > > > > > > From the message it looks like you've tried to use limit_conn > > > before limit_conn_zone is defined (probably indirectly by using > > > the "include" directive), i.e. wrote something like > > > > > > limit_conn one 10; > > > limit_conn_zone $binary_remote_addr zone=one:63m; > > > > > > -- > > > Maxim Dounin > > > http://nginx.org/ > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > -- > > Un abrazo! > > > > > > *Ra?l Hugo * > > > > > > *Miembro Asociadohttp://apesol.org.pe SysAdmin > Cel. > > #961-710-096 Linux Registered User #482081 - http://counter.li.org/ > > P Antes de imprimir este e-mail piense bien si > es > > necesario hacerlo* > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Un abrazo! *Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Mon Apr 7 18:35:05 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Mon, 07 Apr 2014 19:35:05 +0100 Subject: Strange nginx issue In-Reply-To: <05bc72f186bd5acb383140d3760c5c35@swsystem.co.uk> References: <26df31bf3e6f9195a2213aec16cd1cc0@swsystem.co.uk> <05bc72f186bd5acb383140d3760c5c35@swsystem.co.uk> Message-ID: <5342EFD9.2020807@swsystem.co.uk> On 07/04/2014 16:45, Steve Wilson wrote: > A quick read at > http://dev.mysql.com/doc/refman/4.1/en/innodb-parameters.html#sysvar_innodb_flush_log_at_trx_commit > suggests there's a possibility of losing 1s worth of data. I'm not sure > if we'd still have a problem with this now we've moved page caching to > memcache as that was causing a lot of updates. > > Unfortunately I'm at work so can't investigate other variables easily at > the moment, I'll hopefully have time this evening though. > > Steve. > .... I guess that link was 4.1 specific, I re-read and ended up changing the option back to 1 but also added in the sync_binlog=1 option. So far I've not seen any sql connections sitting doing nothing. It might be worth doing a "show processlist" in sql when the problem occurs to confirm if this is actually where the problem lies. 
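Spelling out the fix from this thread: the zone has to be declared before the vhost include that references it. A rough sketch of the working layout, using the names and sizes from the thread (the exact include line is a guess, since the full nginx.conf was not posted):

    # /etc/nginx/nginx.conf
    http {
        # declare the shared memory zone first ...
        limit_conn_zone $binary_remote_addr zone=one:63m;

        # ... then include the vhosts that reference it
        include /etc/nginx/vhost.d/*.conf;
    }

    # /etc/nginx/vhost.d/myproject.conf
    server {
        location / {
            limit_conn one 10;
        }
    }

With the include placed before limit_conn_zone, the limit_conn directive is parsed first and registers the zone with size 0, which is exactly the "conflicts with already declared size 0" error shown earlier in the thread.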
Steve From nginx-forum at nginx.us Mon Apr 7 21:29:15 2014 From: nginx-forum at nginx.us (abstein2) Date: Mon, 07 Apr 2014 17:29:15 -0400 Subject: Upstream Keepalive Questions Message-ID: <01f7abc09dc54a466879fdffba397bbe.NginxMailingListEnglish@forum.nginx.org> I'm somewhat unclear about how the keepalive functionality works within the upstream module. My nginx install currently handles several hundred domains all of which point to different origin servers. I would imagine I can improve performance by enabling keepalive, however the documentation says "The connections parameter sets the maximum number of idle keepalive connections to upstream servers that are preserved in the cache of each worker process. When this number is exceeded, the least recently used connections are closed. " Does that mean that if I have 10 domains and then set keepalive to 32, that there will potentially be up to 320 open connections from my server to the backend servers per worker at any given point or would the worker share all upstreams and only open a total of 32 regardless of how many upstream blocks were on the website? Also, does the number of keepalive connections have anything to do with the number of cores on a box? Also, is there any downside to having a large number of upstreams in the http block? I know for "map" block there's no performance degradation since they're only evaluated on demand, but I don't see any kind of documentation regarding how upstreams are handled. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249089,249089#msg-249089 From nginx-forum at nginx.us Tue Apr 8 07:19:12 2014 From: nginx-forum at nginx.us (zajca) Date: Tue, 08 Apr 2014 03:19:12 -0400 Subject: Nodejs websocket 502 bad gateway In-Reply-To: <20140407115313.GB34696@mdounin.ru> References: <20140407115313.GB34696@mdounin.ru> Message-ID: Solved: It was my bad I had domain without ssl binded to same port as this one with ssl. So it wasn't working. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249059,249095#msg-249095 From mdounin at mdounin.ru Tue Apr 8 09:26:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Apr 2014 13:26:11 +0400 Subject: Upstream Keepalive Questions In-Reply-To: <01f7abc09dc54a466879fdffba397bbe.NginxMailingListEnglish@forum.nginx.org> References: <01f7abc09dc54a466879fdffba397bbe.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140408092611.GV34696@mdounin.ru> Hello! On Mon, Apr 07, 2014 at 05:29:15PM -0400, abstein2 wrote: > I'm somewhat unclear about how the keepalive functionality works within the > upstream module. My nginx install currently handles several hundred domains > all of which point to different origin servers. I would imagine I can > improve performance by enabling keepalive, however the documentation says > "The connections parameter sets the maximum number of idle keepalive > connections to upstream servers that are preserved in the cache of each > worker process. When this number is exceeded, the least recently used > connections are closed. " > > Does that mean that if I have 10 domains and then set keepalive to 32, that > there will potentially be up to 320 open connections from my server to the > backend servers per worker at any given point or would the worker share all > upstreams and only open a total of 32 regardless of how many upstream blocks > were on the website? Also, does the number of keepalive connections have > anything to do with the number of cores on a box? Each upstream block has it's own connection cache. 
There is no "generic" connection cache to be used with all upstream blocks. > Also, is there any downside to having a large number of upstreams in the > http block? I know for "map" block there's no performance degradation since > they're only evaluated on demand, but I don't see any kind of documentation > regarding how upstreams are handled. Unless you use variables to dynamically select an upstream server, it is resolved while parsing configuration and there is no performance degradation at runtime regardless of the number of upstream blocks you have. -- Maxim Dounin http://nginx.org/ From thijskoerselman at gmail.com Tue Apr 8 09:43:58 2014 From: thijskoerselman at gmail.com (Thijs Koerselman) Date: Tue, 8 Apr 2014 11:43:58 +0200 Subject: Deprecated warnings in 1.5.8 are now errors in 1.5.12 Message-ID: Hi, I'm trying to compile 1.5.12 on OSX. For some reason 1.5.12 generates errors in make where in 1.5.8 these same messages appeared as warnings and were ignored. I'm trying to build the nginx core without extra modules. Below is my configure output and the first errors that appear. Any idea how I can get around this? Thijs checking for OS + Darwin 13.1.0 x86_64 checking for C compiler ... found + using Clang C compiler + clang version: 5.1 (clang-503.0.38) (based on LLVM 3.4svn) checking for gcc builtin atomic operations ... found checking for C99 variadic macros ... found checking for gcc variadic macros ... found checking for unistd.h ... found checking for inttypes.h ... found checking for limits.h ... found checking for sys/filio.h ... found checking for sys/param.h ... found checking for sys/mount.h ... found checking for sys/statvfs.h ... found checking for crypt.h ... not found checking for Darwin specific features + kqueue found checking for kqueue's EVFILT_TIMER ... found checking for Darwin 64-bit kqueue millisecond timeout bug ... not found checking for sendfile() ... found checking for atomic(3) ... found checking for nobody group ... found checking for poll() ... found checking for /dev/poll ... not found checking for crypt() ... found checking for F_READAHEAD ... not found checking for posix_fadvise() ... not found checking for O_DIRECT ... not found checking for F_NOCACHE ... found checking for directio() ... not found checking for statfs() ... found checking for statvfs() ... found checking for dlopen() ... found checking for sched_yield() ... found checking for SO_SETFIB ... not found checking for SO_ACCEPTFILTER ... not found checking for TCP_DEFER_ACCEPT ... not found checking for TCP_KEEPIDLE ... not found checking for TCP_FASTOPEN ... not found checking for TCP_INFO ... not found checking for accept4() ... not found checking for int size ... 4 bytes checking for long size ... 8 bytes checking for long long size ... 8 bytes checking for void * size ... 8 bytes checking for uint64_t ... found checking for sig_atomic_t ... found checking for sig_atomic_t size ... 4 bytes checking for socklen_t ... found checking for in_addr_t ... found checking for in_port_t ... found checking for rlim_t ... found checking for uintptr_t ... uintptr_t found checking for system byte ordering ... little endian checking for size_t size ... 8 bytes checking for off_t size ... 8 bytes checking for time_t size ... 8 bytes checking for setproctitle() ... not found checking for pread() ... found checking for pwrite() ... found checking for sys_nerr ... found checking for localtime_r() ... found checking for posix_memalign() ... found checking for memalign() ... 
not found checking for mmap(MAP_ANON|MAP_SHARED) ... found checking for mmap("/dev/zero", MAP_SHARED) ... found but is not working checking for System V shared memory ... found checking for POSIX semaphores ... found but is not working checking for POSIX semaphores in libpthread ... found but is not working checking for POSIX semaphores in librt ... not found checking for struct msghdr.msg_control ... found checking for ioctl(FIONBIO) ... found checking for struct tm.tm_gmtoff ... found checking for struct dirent.d_namlen ... found checking for struct dirent.d_type ... found checking for sysconf(_SC_NPROCESSORS_ONLN) ... found checking for openat(), fstatat() ... not found checking for getaddrinfo() ... found checking for PCRE library ... found checking for PCRE JIT support ... not found checking for md5 in system md library ... not found checking for md5 in system md5 library ... not found checking for md5 in system OpenSSL crypto library ... found checking for sha1 in system md library ... not found checking for sha1 in system OpenSSL crypto library ... found checking for zlib library ... found creating objs/Makefile Configuration summary + using system PCRE library + OpenSSL library is not used + md5: using system crypto library + sha1: using system crypto library + using system zlib library *src/core/ngx_crypt.c:82:5: **error: **'MD5_Init' is deprecated: first deprecated in OS X 10.7 [-Werror,-Wdeprecated-declarations]* ngx_md5_init(&md5); * ^* *src/core/ngx_md5.h:30:25: note: *expanded from macro 'ngx_md5_init' #define ngx_md5_init MD5_Init * ^* */usr/include/openssl/md5.h:113:5: note: *'MD5_Init' declared here int MD5_Init(MD5_CTX *c) DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER; * ^* *src/core/ngx_crypt.c:83:5: **error: **'MD5_Update' is deprecated: first deprecated in OS X 10.7 [-Werror,-Wdeprecated-declarations]* ngx_md5_update(&md5, key, keylen); * ^* *src/core/ngx_md5.h:31:25: note: *expanded from macro 'ngx_md5_update' #define ngx_md5_update MD5_Update * ^* */usr/include/openssl/md5.h:114:5: note: *'MD5_Update' declared here int MD5_Update(MD5_CTX *c, const void *data, size_t len) DEPRECATED_IN_MAC_OS_X_VERSION_10_7_AND_LATER; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 8 10:22:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Apr 2014 14:22:39 +0400 Subject: Deprecated warnings in 1.5.8 are now errors in 1.5.12 In-Reply-To: References: Message-ID: <20140408102238.GW34696@mdounin.ru> Hello! On Tue, Apr 08, 2014 at 11:43:58AM +0200, Thijs Koerselman wrote: > Hi, > > I'm trying to compile 1.5.12 on OSX. For some reason 1.5.12 generates > errors in make where in 1.5.8 these same messages appeared as warnings and > were ignored. > > I'm trying to build the nginx core without extra modules. Below is my > configure output and the first errors that appear. Any idea how I can get > around this? > > Thijs > > checking for OS > > + Darwin 13.1.0 x86_64 [...] > + OpenSSL library is not used > > + md5: using system crypto library [...] > *src/core/ngx_crypt.c:82:5: **error: **'MD5_Init' is deprecated: first > deprecated in OS X 10.7 [-Werror,-Wdeprecated-declarations]* > > ngx_md5_init(&md5); Apple deprecated OpenSSL library they have in base system a while ago (including MD5 interface), and in 10.9 they additionally broke their own version handling framework we've used to silence these deprecation warnings. The warning should go away once you'll compile with non-system OpenSSL. 
Alternatively, trivial solution is to use: ./configure --with-cc-opt="-Wno-deprecated-declarations" Previously these warnings was ignored as -Werror wasn't enabled by default with clang (http://hg.nginx.org/nginx/rev/c86dd32573c0). -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Apr 8 10:50:33 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 08 Apr 2014 06:50:33 -0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug Message-ID: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> A missing bounds check in the handling of the TLS heartbeat extension can be used to reveal up to 64k of memory to a connected client or server. https://www.openssl.org/news/secadv_20140407.txt http://heartbleed.com/ http://www.reddit.com/r/netsec/comments/22gym6/diagnosis_of_the_openssl_heartbleed_bug/ http://security.stackexchange.com/search?q=heartbleed regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249102,249102#msg-249102 From thijskoerselman at gmail.com Tue Apr 8 11:13:44 2014 From: thijskoerselman at gmail.com (Thijs Koerselman) Date: Tue, 8 Apr 2014 13:13:44 +0200 Subject: Deprecated warnings in 1.5.8 are now errors in 1.5.12 In-Reply-To: <20140408102238.GW34696@mdounin.ru> References: <20140408102238.GW34696@mdounin.ru> Message-ID: Thanks a lot Maxim, that explains everything. I used the flag and all went well. Cheers, Thijs On Tue, Apr 8, 2014 at 12:22 PM, Maxim Dounin wrote: > Hello! > > On Tue, Apr 08, 2014 at 11:43:58AM +0200, Thijs Koerselman wrote: > > > Hi, > > > > I'm trying to compile 1.5.12 on OSX. For some reason 1.5.12 generates > > errors in make where in 1.5.8 these same messages appeared as warnings > and > > were ignored. > > > > I'm trying to build the nginx core without extra modules. Below is my > > configure output and the first errors that appear. Any idea how I can get > > around this? > > > > Thijs > > > > checking for OS > > > > + Darwin 13.1.0 x86_64 > > [...] > > > + OpenSSL library is not used > > > > + md5: using system crypto library > > [...] > > > *src/core/ngx_crypt.c:82:5: **error: **'MD5_Init' is deprecated: first > > deprecated in OS X 10.7 [-Werror,-Wdeprecated-declarations]* > > > > ngx_md5_init(&md5); > > Apple deprecated OpenSSL library they have in base system a while > ago (including MD5 interface), and in 10.9 they additionally > broke their own version handling framework we've used to silence > these deprecation warnings. > > The warning should go away once you'll compile with non-system > OpenSSL. Alternatively, trivial solution is to use: > > ./configure --with-cc-opt="-Wno-deprecated-declarations" > > Previously these warnings was ignored as -Werror wasn't enabled by > default with clang (http://hg.nginx.org/nginx/rev/c86dd32573c0). > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 8 12:55:02 2014 From: nginx-forum at nginx.us (Callumpy) Date: Tue, 08 Apr 2014 08:55:02 -0400 Subject: Case insensitive location Message-ID: <52e56327b95b474e6fa81f409acfd862.NginxMailingListEnglish@forum.nginx.org> Hello there, i'm hoping someone can help me out with this. I've tried many different things but none have worked for me so far. 
Here is my location block: location ^~ /card/ { root /home/site/public; rewrite ^/card/([a-zA-Z0-9_+]+)/(.*).png$ /card.php?name=$2&type=$1 last; expires epoch; } Some requests for images are being made with a capital 'C' in 'Card' and are causing 404s. I've tried changing the location to "~* ^/card/" but that didn't work, is there anything else I could try? Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249105,249105#msg-249105 From reallfqq-nginx at yahoo.fr Tue Apr 8 13:31:20 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 8 Apr 2014 15:31:20 +0200 Subject: Case insensitive location In-Reply-To: <52e56327b95b474e6fa81f409acfd862.NginxMailingListEnglish@forum.nginx.org> References: <52e56327b95b474e6fa81f409acfd862.NginxMailingListEnglish@forum.nginx.org> Message-ID: As the location docs specify, the solution you provided should work. Ensure that: 1) You are working with the right binary 2) Configuration syntax is correct (nginx -t) and reload was successful (no message in that direction in the error logs), thus effectively reloading the configuration 3) Requests starting with C and returning errors are matching the regex being used 4) The precedence of location syntax does not make another location block capture the request (since requests are matched by one and only one location block every time), thus having the request served incorrectly (and returning 404 since content could not be found in the serving location block) --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 8 14:34:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 8 Apr 2014 18:34:08 +0400 Subject: nginx-1.5.13 Message-ID: <20140408143408.GC34696@mdounin.ru> Changes with nginx 1.5.13 08 Apr 2014 *) Change: improved hash table handling; the default values of the "variables_hash_max_size" and "types_hash_bucket_size" were changed to 1024 and 64 respectively. *) Feature: the ngx_http_mp4_module now supports the "end" argument. *) Feature: byte ranges support in the ngx_http_mp4_module and while saving responses to cache. *) Bugfix: alerts "ngx_slab_alloc() failed: no memory" no longer logged when using shared memory in the "ssl_session_cache" directive and in the ngx_http_limit_req_module. *) Bugfix: the "underscores_in_headers" directive did not allow underscore as a first character of a header. Thanks to Piotr Sikora. *) Bugfix: cache manager might hog CPU on exit in nginx/Windows. *) Bugfix: nginx/Windows terminated abnormally if the "ssl_session_cache" directive was used with the "shared" parameter. *) Bugfix: in the ngx_http_spdy_module. -- Maxim Dounin http://nginx.org/en/donation.html From spameden at gmail.com Tue Apr 8 16:11:20 2014 From: spameden at gmail.com (spameden) Date: Tue, 8 Apr 2014 20:11:20 +0400 Subject: failed to verify repository GPG key Message-ID: today I noticed something changed and there are no longer packages being verified from nginx.org repository: W: GPG error: http://nginx.org squeeze Release: The following signatures were invalid: BADSIG ABF5BD827BD9BF62 nginx signing key < signing-key at nginx.com> did you guys change the key today? i tried snatching key from http://nginx.org/keys/nginx_signing.key and from PGP keyserver but still no luck same error appears -------------- next part -------------- An HTML attachment was scrubbed...
URL: From sb at nginx.com Tue Apr 8 16:46:40 2014 From: sb at nginx.com (Sergey Budnevitch) Date: Tue, 8 Apr 2014 20:46:40 +0400 Subject: failed to verify repository GPG key In-Reply-To: References: Message-ID: <3D021613-5346-4A1F-8EC5-6377BAC7FD7D@nginx.com> On 08 Apr 2014, at 20:11, spameden wrote: > today I noticed something changed and there is no longer packages being verified from nginx.org repository: > > W: GPG error: http://nginx.org squeeze Release: The following signatures were invalid: BADSIG ABF5BD827BD9BF62 nginx signing key > > > did you guys change the key today? No. > > i tried snatching key from http://nginx.org/keys/nginx_signing.key and from PGP keyserver but still no luck same error appears Fixed. I removed a few obsolete files from repo before release, and forget to update signature on Release file in debian repository for stable branch, sorry. From nginx-forum at nginx.us Tue Apr 8 16:51:27 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 08 Apr 2014 12:51:27 -0400 Subject: [ANN] Windows nginx 1.5.13.1 Snowman In-Reply-To: References: Message-ID: <98da65786ca8c42f275121daaab307f8.NginxMailingListEnglish@forum.nginx.org> 10:30 8-4-2014 nginx 1.5.13.2 Snowman Based on nginx 1.5.13 (8-4-2014) with; + CVE fix CVE-2014-0160 + openssl-1.0.1g (upgraded 8-4-2014) + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Additional specifications are like 0:18 5-4-2014 nginx 1.5.13.1 Snowman Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249040,249119#msg-249119 From nginx-forum at nginx.us Tue Apr 8 16:52:24 2014 From: nginx-forum at nginx.us (Callumpy) Date: Tue, 08 Apr 2014 12:52:24 -0400 Subject: Case insensitive location In-Reply-To: References: Message-ID: Thank you for the reply. As far as I know, everything is fine, but it's still not working for me. When I disable caching, which includes pngs, and then use 'location ~* ^/card', I am able to access the images normally without capital letters, but still not with capitals. With caching enabled, I cannot access images at all with the 'location ~* ^/card' setup. Caching is the main reason for me needing '^~' on my location block, so that I can disable caching for these images alone. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249105,249120#msg-249120 From spameden at gmail.com Tue Apr 8 17:05:41 2014 From: spameden at gmail.com (spameden) Date: Tue, 8 Apr 2014 21:05:41 +0400 Subject: failed to verify repository GPG key In-Reply-To: <3D021613-5346-4A1F-8EC5-6377BAC7FD7D@nginx.com> References: <3D021613-5346-4A1F-8EC5-6377BAC7FD7D@nginx.com> Message-ID: 2014-04-08 20:46 GMT+04:00 Sergey Budnevitch : > > On 08 Apr 2014, at 20:11, spameden wrote: > > > today I noticed something changed and there is no longer packages being > verified from nginx.org repository: > > > > W: GPG error: http://nginx.org squeeze Release: The following > signatures were invalid: BADSIG ABF5BD827BD9BF62 nginx signing key < > signing-key at nginx.com> > > > > > > did you guys change the key today? > > No. > > > > > i tried snatching key from http://nginx.org/keys/nginx_signing.key and > from PGP keyserver but still no luck same error appears > > Fixed. I removed a few obsolete files from repo before release, and forget > to update signature on Release file in debian > repository for stable branch, sorry. > Thanks, it's fine now. 
> _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Apr 8 17:17:04 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 8 Apr 2014 19:17:04 +0200 Subject: Case insensitive location In-Reply-To: References: Message-ID: I do not know about caching nor do I take it into consideration as it is not part of the problem as stated. You stated that the location regex was the problem and you seem to have taken the proper checks to verify the proper location is being used, hence I assume the location regex is not the problem. I see another regex in the location block, in the rewrite directive, which does not seem to be prepared to face different sensitivities of the case. I guess you should follow that lead... Please check your rewrite regex against case-sensitivity or, since the regex match is already done at location level and you do not need another one, maybe consider using the returndirective instead. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 8 21:41:17 2014 From: nginx-forum at nginx.us (abstein2) Date: Tue, 08 Apr 2014 17:41:17 -0400 Subject: Upstream Keepalive Questions In-Reply-To: <20140408092611.GV34696@mdounin.ru> References: <20140408092611.GV34696@mdounin.ru> Message-ID: <7c78ffe8dca55c0cfb877ef643f9eab3.NginxMailingListEnglish@forum.nginx.org> Maxim, Thanks so much for clarifying. Just to make sure I'm understanding correctly, if I had something like this pseudo-code upstream upstream1 { } upstream upstream2 { } upstream upstream3 { } upstream upstream4 { } upstream upstream5 { } server { server_name server1.com; proxy_pass http://upstream1; } server { server_name server2.com; proxy_pass http://upstream2; } server { server_name server3.com; proxy_pass http://upstream3; } server { server_name server4.com; proxy_pass http://upstream4; } server { server_name server5.com; proxy_pass http://upstream5; } There would only be performance degradation if the setup was: server { server_name server6.com; set $PROXY_TO 'upstream5'; proxy_pass http://$PROXY_TO; } Is that correct? And, if there was degradation, would it be limited to hosts that server block was trying to serve or would it impact overall performance? Thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249089,249130#msg-249130 From nginx-forum at nginx.us Tue Apr 8 22:18:52 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 08 Apr 2014 18:18:52 -0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> Guide to Nginx + SSL + SPDY has been updated with some infos, links and tests regarding heartbleed https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#heartbleed regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249102,249131#msg-249131 From sherlockhugo at gmail.com Tue Apr 8 22:22:40 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Tue, 8 Apr 2014 17:22:40 -0500 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: I see this on Linux-Plug list: "If your operating system uses OpenSSL 1.0.1 its servers are vulnerable. CentOS and pulled his patch here: http://lists.centos.org/pipermail/centos-announce/2014-April/020249.html If you have servers with Ubuntu 12.04 LTS you should take a look at this discussion: http://serverfault.com/questions/587574/apache-2-is-still-vulnerable-to-heartbleed-after-update-reboot" 2014-04-08 17:18 GMT-05:00 mex : > Guide to Nginx + SSL + SPDY has been updated with some infos, links and > tests > regarding heartbleed > > https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#heartbleed > > > > > regards, > > mex > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249102,249131#msg-249131 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Un abrazo! *Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Apr 8 22:34:37 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 9 Apr 2014 00:34:37 +0200 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: There is not much to learn from a gujide that what has alrfeady been said on http://heartbleed.com. More specifically, there is nothing related to nginx nor SPDY... Heartbleed is related to OpenSSL and the solution is either to update your OpenSSL version/package with the official/your distro repository one and restart OpenSSL-related processes to load the fixed library or recompile your current library removing the support of the heartbeat capability. 
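Roughly, the check and the fix look like this (package names and paths are illustrative, adapt them to your system):

openssl version                    # 1.0.1 through 1.0.1f are affected, 1.0.1g is fixed
nginx -V 2>&1 | grep -i openssl    # shows the OpenSSL version nginx was built against
apt-get update && apt-get install openssl libssl1.0.0    # or your distro's equivalent packages
service nginx restart              # restart everything still linked against the old library

or, if you rebuild the library yourself, compile the heartbeat support out entirely:

./config -DOPENSSL_NO_HEARTBEATS && make && make install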
On a side note, people having no use of heartbeat and having a fixed version orf OpenSSL are advised to disable this capability to make the trackdown of vulnerable versions easier in the near future in the eventuality of a coordinated action to detect them and alert administrators responsible for the affected systems. *I suspect your are attempting to bring traffic to your website at all costs with a pointless and misleading article... I despise that.* --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx+phil at spodhuis.org Wed Apr 9 02:11:31 2014 From: nginx+phil at spodhuis.org (Phil Pennock) Date: Tue, 8 Apr 2014 22:11:31 -0400 Subject: New module: Nginx OpenSSL version check Message-ID: <20140409021131.GA73831@redoubt.spodhuis.org> On behalf of my employer, Apcera Inc, we are delighted to make available a new Nginx module, providing for a start-up OpenSSL version check for those who wish for a little more belt&braces protection. https://github.com/apcera/nginx-openssl-version The README.md file explains the rationale. The simplest configuration is to make no configuration change, so that you just get a log message to the error log at notice level, at start-up, stating which version of OpenSSL the code was built against and which was found at runtime. The most complicated configuration is to add one line to your configuration in the global section: openssl_version_minimum 1.0.1g; With this, if the runtime library loaded in is not at least of this level, then there is a fatal configuration error and nginx will refuse to start. Dedicated to all those who have ever had to debug interactions between setcap for net-bind privilege marked on a binary, the runtime linker, concepts of what is or is not setuid and what is or is not safe in such a situation and finding that not even the runtime linker will tell you honestly which version of the library will _actually_ be used, only lsof(8) will, by showing which file was _actually_ mmap'd into your address space. Like many others, my Monday night was _fun_. Regards, and may you sleep more soundly, -Phil Pennock, Apcera Inc. From nginx-forum at nginx.us Wed Apr 9 02:25:14 2014 From: nginx-forum at nginx.us (cybermass) Date: Tue, 08 Apr 2014 22:25:14 -0400 Subject: reverse proxy dns configuration Message-ID: <5a3df3c6212ab6edf2fb0851d9849f63.NginxMailingListEnglish@forum.nginx.org> Hi. I am looking to deploy nginx on a proxy server. The real backend is located in a remoate geographic location and has apache, mysql, dovecot and postfix services supporting an IMAP mail-server. I am a bit confused on how DNS is to be setup on both the proxy and the backend server. I am assuming that I define the nameservers in the registrar for the proxies (I will have two; one as a failover). I am also assuming that on the proxy itself, bind is installed of course and the MX is defined as mail.example.com for example. I am aware that in the nginx.conf, I define the mail directive as follows: `mail { server_name mail.example.com; ... }` Basically, I read that on the physical proxy box, the hostname can be ns1.example.com (as long as it matches what is defined in the registrar), but having listed mail.example.com in nginx.com means that it will be our single point for reference for all our users looking to authenticate and use the IMAP service. If I am correct so far, I would then like to know how bind is to be configured on the real backend. Should the hostname also be mail.example.com? 
I would assume that it couldn't hurt since the world is not aware of this backend machine so pinging by hostname would not resolve THIS box but rather the proxy box. So I am assuming the authoritative DNS config (including the reverse DNS) is to be setup entirely on the proxy box, leaving the real backend as just a hostname and using proxy_pass http://physical.ip.address. Any help would be appreciated. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249135,249135#msg-249135 From mgg at giagnocavo.net Wed Apr 9 06:20:42 2014 From: mgg at giagnocavo.net (Michael Giagnocavo) Date: Wed, 9 Apr 2014 06:20:42 +0000 Subject: SSL_write failed (32: Broken pipe) while processing SPDY if cache on Message-ID: <686db0e3a7da473dbd9c6f9a0385ffeb@CO1PR07MB331.namprd07.prod.outlook.com> We're using nginx 1.5.12 and 1.5.13 on Ubuntu 12.04 LTS via Azure VM. Since last night, on both FireFox and Chrome, Windows and OSX, we're having difficulty with SPDY. If the browser cache is warm and proxy_cache is enabled, we see errors like this: 2014/04/09 05:26:29 [info] 17859#0: *6 SSL_write() failed (SSL:) (32: Broken pipe) while processing SPDY, client: my.ip, server: 0.0.0.0:443 On the client side, for instance in Chrome, we see: t=1397021193179 [st= 386] SPDY_STREAM_ERROR --> description = "ABANDONED (stream_id=13): https://bringyourbaygame.com/scripts/vendor/require.js" --> status = -100 --> stream_id = 13 When this happens the request will be listed as Aborted (Firebug) or ERR_EMPTY_RESPONSE (Chrome) and the SSL_write info line is logged on nginx. No errors are logged. The site never finishes fully loading. Here's more of the Chrome net-internals: http://pastebin.com/gaGxZGBW If the browser cache is cleared manually or disabled, the problem goes away. With proxy_cache off, the problem goes away. The cache config is: proxy_cache_key $scheme$proxy_host$host$uri$is_args$args; proxy_temp_path /mnt/proxy_temp 1 2; proxy_cache_path /mnt/proxy_cache levels=1:2 keys_zone=czone:256m; proxy_cache_valid any 20s; proxy_cache_valid 200 5m; proxy_cache czone; Here is debug output from nginx: http://pastebin.com/tnm6PaL6 I'm thinking perhaps there is a race condition and the lack of caching fixes it by adding some latency? If SPDY or caching is disabled, everything works fine. Things that don't help: disabling SSLv3, disabling gzip, ssl cache on/off, spdy_headers_comp 0/5, removing SNI (delete all but one server block). Yes, we updated OpenSSL and our certificates, but we tried with old certificates and the problem persists. Strangely, proxy_buffering off doesn't help (I thought since it disabled the cache, it'd have the same end effect). Site is just reverse proxying; no local resources (there is a Lua script that is not hit, but we removed that and it made no difference). How can I further debug this? Hopefully, -Michael From nginx-forum at nginx.us Wed Apr 9 06:45:35 2014 From: nginx-forum at nginx.us (honwel) Date: Wed, 09 Apr 2014 02:45:35 -0400 Subject: Help, make a post subrequest with response body from parent Message-ID: <731f9e352eee78fe638014bbbc627b18.NginxMailingListEnglish@forum.nginx.org> hi, angentzh I use echo module(angentzh) to issue a subrequest(POST method) that it's body from parent's request body, and i add some code in ngx_http_echo_subrequest->ngx_http_echo_parse_subrequest_spec : so, it's not work, but change to '-b' or '-f', it's OK. help ? ........................ 
if (ngx_strncmp("-h", arg->data, arg->len) == 0) { // add '-h' param 275 to_write = &h_str; 276 expecting_opt = 0; 277 continue; 278 } 279 } 280 281 ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, 282 "unknown option for echo_subrequest_async: %V", arg); 283 284 return NGX_ERROR; 285 } 286 287 if (h_str != NULL && h_str->len) { 288 parsed_sr->query_string = &r->args; 289 if (r->request_body == NULL) { 290 return NGX_ERROR; 291 } 292 293 rb = r->request_body; // make post body from parent's body, is right? // i track this line by gdb, the r->request_body->buf is null, and r->request_body->bufs aslo is null. 294 295 } else if (body_str != NULL && body_str->len) { 296 rb = ngx_pcalloc(r->pool, sizeof(ngx_http_request_body_t)); 297 298 if (rb == NULL) { Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249140,249140#msg-249140 From vbart at nginx.com Wed Apr 9 07:29:31 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 09 Apr 2014 11:29:31 +0400 Subject: SSL_write failed (32: Broken pipe) while processing SPDY if cache on In-Reply-To: <686db0e3a7da473dbd9c6f9a0385ffeb@CO1PR07MB331.namprd07.prod.outlook.com> References: <686db0e3a7da473dbd9c6f9a0385ffeb@CO1PR07MB331.namprd07.prod.outlook.com> Message-ID: <1925078.nxLANXLgLj@vbart-laptop> On Wednesday 09 April 2014 06:20:42 Michael Giagnocavo wrote: > We're using nginx 1.5.12 and 1.5.13 on Ubuntu 12.04 LTS via Azure VM. > > Since last night, on both FireFox and Chrome, Windows and OSX, we're having difficulty with SPDY. If the browser cache is warm and proxy_cache is enabled, we see errors like this: > > 2014/04/09 05:26:29 [info] 17859#0: *6 SSL_write() failed (SSL:) (32: Broken pipe) while processing SPDY, client: my.ip, server: 0.0.0.0:443 > > On the client side, for instance in Chrome, we see: > t=1397021193179 [st= 386] SPDY_STREAM_ERROR > --> description = "ABANDONED (stream_id=13): https://bringyourbaygame.com/scripts/vendor/require.js" > --> status = -100 > --> stream_id = 13 > > When this happens the request will be listed as Aborted (Firebug) or ERR_EMPTY_RESPONSE (Chrome) and the SSL_write info line is logged on nginx. No errors are logged. The site never finishes fully loading. Here's more of the Chrome net-internals: http://pastebin.com/gaGxZGBW > > If the browser cache is cleared manually or disabled, the problem goes away. With proxy_cache off, the problem goes away. The cache config is: > > proxy_cache_key $scheme$proxy_host$host$uri$is_args$args; > proxy_temp_path /mnt/proxy_temp 1 2; > proxy_cache_path /mnt/proxy_cache levels=1:2 keys_zone=czone:256m; > proxy_cache_valid any 20s; > proxy_cache_valid 200 5m; > proxy_cache czone; > > Here is debug output from nginx: http://pastebin.com/tnm6PaL6 > > I'm thinking perhaps there is a race condition and the lack of caching fixes it by adding some latency? > > If SPDY or caching is disabled, everything works fine. Things that don't help: disabling SSLv3, disabling gzip, ssl cache on/off, spdy_headers_comp 0/5, removing SNI (delete all but one server block). Yes, we updated OpenSSL and our certificates, but we tried with old certificates and the problem persists. Strangely, proxy_buffering off doesn't help (I thought since it disabled the cache, it'd have the same end effect). > > Site is just reverse proxying; no local resources (there is a Lua script that is not hit, but we removed that and it made no difference). > > How can I further debug this? 
> You've likely encountered this bug: http://trac.nginx.org/nginx/ticket/428 To confirm this, please try the patch from the ticket. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Apr 9 08:19:34 2014 From: nginx-forum at nginx.us (kay) Date: Wed, 09 Apr 2014 04:19:34 -0400 Subject: Dynamic upstream proxy_pass Message-ID: I'm trying to set upstream names by variables, but nginx recognizes variables as hostnames, not upstream names. For example: map $cookie_backend $proxy_host { default 'backend1'; '1' 'backend2'; } ... ... ... upstream backend1 { server backend123:8080; server backend124:8080; } ... ... ... upstream backend2 { server backend223:8080; server backend224:8080; } ... ... ... location / { proxy_pass http://$proxy_host; } nginx returns error message: 2014/04/09 14:19:51 [error] 1085#0: *1128620 backend1 could not be resolved (3: Host not found) while sending to client, client: 192.168.1.145, server: localhost, request: "GET / HTTP/1.1", host: "localhost" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249142,249142#msg-249142 From makailol7 at gmail.com Wed Apr 9 08:29:35 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Wed, 9 Apr 2014 13:59:35 +0530 Subject: Dynamic upstream proxy_pass In-Reply-To: References: Message-ID: Hello! You need to use resolver directive in Nginx. Also you need to set DNS entries for your backend hostname. Best regards, Makailol On Wed, Apr 9, 2014 at 1:49 PM, kay wrote: > I'm trying to set upstream names by variables, but nginx recognizes > variables as hostnames, not upstream names. > > For example: > map $cookie_backend $proxy_host { > default 'backend1'; > '1' 'backend2'; > } > ... ... ... > upstream backend1 { > server backend123:8080; > server backend124:8080; > } > ... ... ... > upstream backend2 { > server backend223:8080; > server backend224:8080; > } > ... ... ... > location / { > proxy_pass http://$proxy_host; > } > > nginx returns error message: > 2014/04/09 14:19:51 [error] 1085#0: *1128620 backend1 could not be resolved > (3: Host not found) while sending to client, client: 192.168.1.145, server: > localhost, request: "GET / HTTP/1.1", host: "localhost" > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249142,249142#msg-249142 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 9 08:55:42 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Wed, 09 Apr 2014 04:55:42 -0400 Subject: Memory Pool Message-ID: <7e5103d728917dd25b72b5c762f22d21.NginxMailingListEnglish@forum.nginx.org> Nginx when it accepts a connection, it creates a memory pool for that connection (allocating from heap). After which further memory requirement for that connection will be allocated from that pool. This is good. But, why don't we pre create the memory pools depending upon the number of connections and use that pool. In the current approach if some connections are coming up going down., we will be allocating and freeing to heap frequently. Can someone please clarify why this has been done like this? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249144,249144#msg-249144 From nginx-forum at nginx.us Wed Apr 9 09:03:49 2014 From: nginx-forum at nginx.us (kay) Date: Wed, 09 Apr 2014 05:03:49 -0400 Subject: Dynamic upstream proxy_pass In-Reply-To: References: Message-ID: <39966767b2f1815e0a090ffc3f512f65.NginxMailingListEnglish@forum.nginx.org> My bad, in one location I forgot to remove port, that is why nginx tried to resolve upstream as hostname. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249142,249145#msg-249145 From nginx-forum at nginx.us Wed Apr 9 09:12:06 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Wed, 09 Apr 2014 05:12:06 -0400 Subject: SMNP monitoring support In-Reply-To: References: Message-ID: <5615536ef89866647eb222a897dc73f9.NginxMailingListEnglish@forum.nginx.org> This module is just to monitor the status page. Is there any SNMP module which can generate some snmp alarms when certain threshold exceeds or when there is a crash? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248344,249146#msg-249146 From maxim at nginx.com Wed Apr 9 09:22:38 2014 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 09 Apr 2014 13:22:38 +0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> <4301f3bb20815f58448afd03e30aebc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5345115E.2020408@nginx.com> On 4/9/14 2:18 AM, mex wrote: > Guide to Nginx + SSL + SPDY has been updated with some infos, links and > tests > regarding heartbleed > > https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/#heartbleed > Also it's worth to look at the recent nginx blog post regarding heartbleed: http://nginx.com/blog/nginx-and-the-heartbleed-vulnerability/ -- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Wed Apr 9 09:45:52 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Wed, 09 Apr 2014 05:45:52 -0400 Subject: Memory Pool In-Reply-To: <7e5103d728917dd25b72b5c762f22d21.NginxMailingListEnglish@forum.nginx.org> References: <7e5103d728917dd25b72b5c762f22d21.NginxMailingListEnglish@forum.nginx.org> Message-ID: Also I noticed that initially for a connection, it allocates a pool of size 256 and if that exceeds, it goes and calls ngx_palloc_large which in turn calls malloc. So, can we not allocate more in the first attempt. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249144,249149#msg-249149 From vbart at nginx.com Wed Apr 9 10:24:35 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 09 Apr 2014 14:24:35 +0400 Subject: Memory Pool In-Reply-To: <7e5103d728917dd25b72b5c762f22d21.NginxMailingListEnglish@forum.nginx.org> References: <7e5103d728917dd25b72b5c762f22d21.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5216276.2lzYmrcII5@vbart-laptop> On Wednesday 09 April 2014 04:55:42 nginxsantos wrote: > Nginx when it accepts a connection, it creates a memory pool for that > connection (allocating from heap). After which further memory requirement > for that connection will be allocated from that pool. This is good. > But, why don't we pre create the memory pools depending upon the number of > connections and use that pool. In the current approach if some connections > are coming up going down., we will be allocating and freeing to heap > frequently. > > Can someone please clarify why this has been done like this? 
> System allocators are usually smart enough to not transform every malloc() into syscall. One of the main benefits provided by these pools is convenient memory management for C program that allows to not care much about corresponding free() calls and memory leaks. So usually every pool is attached to some object with a clear life cycle, like a request or a connection. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Apr 9 11:18:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Apr 2014 15:18:04 +0400 Subject: Upstream Keepalive Questions In-Reply-To: <7c78ffe8dca55c0cfb877ef643f9eab3.NginxMailingListEnglish@forum.nginx.org> References: <20140408092611.GV34696@mdounin.ru> <7c78ffe8dca55c0cfb877ef643f9eab3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140409111804.GJ34696@mdounin.ru> Hello! On Tue, Apr 08, 2014 at 05:41:17PM -0400, abstein2 wrote: > Maxim, > > Thanks so much for clarifying. Just to make sure I'm understanding > correctly, if I had something like this pseudo-code > > upstream upstream1 { } > upstream upstream2 { } > upstream upstream3 { } > upstream upstream4 { } > upstream upstream5 { } > > server { server_name server1.com; proxy_pass http://upstream1; } > server { server_name server2.com; proxy_pass http://upstream2; } > server { server_name server3.com; proxy_pass http://upstream3; } > server { server_name server4.com; proxy_pass http://upstream4; } > server { server_name server5.com; proxy_pass http://upstream5; } > > There would only be performance degradation if the setup was: > > server { > server_name server6.com; > set $PROXY_TO 'upstream5'; > proxy_pass http://$PROXY_TO; > } > > Is that correct? Yes. The "degradation" is due to upstream{} blocks are in stored an array, and looking up an upstream uses linear array traversal. That is, cost of looking an upstream sever in the "proxy_pass http://$proxy_to" is O(n), where n - number of upstream{} blocks in the configuration. I don't think the difference will be measurable even with thousands of upstream{} blocks though. > And, if there was degradation, would it be limited to hosts > that server block was trying to serve or would it impact overall > performance? See above, it is only related to upstream{} block lookup in "proxy_pass http://$proxy_to" and doesn't affect requests where this proxy_pass isn't used. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Apr 9 12:22:10 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Wed, 09 Apr 2014 08:22:10 -0400 Subject: Memory Pool In-Reply-To: <5216276.2lzYmrcII5@vbart-laptop> References: <5216276.2lzYmrcII5@vbart-laptop> Message-ID: <0e446e1eaf7d326e993c5cbfdc4a4a77.NginxMailingListEnglish@forum.nginx.org> Thank you for the reply. I know it is simple. But, will we not get more performance benefit if we create the pools before hand. Say I will create a memory pool for the connections (for example say with 4000 entries). Everytime I need one, I will go and get it from that pool and when I free it, I will free that to the pool. Will not that be more efficient rather than for every connection and request going and allocating a pool. I always feel the run time malloc calls are bad and for every connection and request are expensive when we handle thousands of connections per seconds. Please share your thoughts.... Thank you. 
Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249144,249159#msg-249159 From kworthington at gmail.com Wed Apr 9 12:29:44 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 9 Apr 2014 08:29:44 -0400 Subject: nginx-1.5.13 In-Reply-To: <20140408143408.GC34696@mdounin.ru> References: <20140408143408.GC34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.3 for Windows http://goo.gl/D75u6G (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Apr 8, 2014 at 10:34 AM, Maxim Dounin wrote: > Changes with nginx 1.5.13 08 Apr > 2014 > > *) Change: improved hash table handling; the default values of the > "variables_hash_max_size" and "types_hash_bucket_size" were changed > to 1024 and 64 respectively. > > *) Feature: the ngx_http_mp4_module now supports the "end" argument. > > *) Feature: byte ranges support in the ngx_http_mp4_module and while > saving responses to cache. > > *) Bugfix: alerts "ngx_slab_alloc() failed: no memory" no longer logged > when using shared memory in the "ssl_session_cache" directive and in > the ngx_http_limit_req_module. > > *) Bugfix: the "underscores_in_headers" directive did not allow > underscore as a first character of a header. > Thanks to Piotr Sikora. > > *) Bugfix: cache manager might hog CPU on exit in nginx/Windows. > > *) Bugfix: nginx/Windows terminated abnormally if the > "ssl_session_cache" directive was used with the "shared" parameter. > > *) Bugfix: in the ngx_http_spdy_module. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 9 12:31:29 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Wed, 09 Apr 2014 08:31:29 -0400 Subject: memory pool allocation Message-ID: Suppose, I am allocating a pool of greater than 4k(page size). Say for example I am calling the function ngx_create_pool with 8096. But, this function will set the max as 4095 even if it has allocated 8K. Not sure, why is it being done like this. p->max = (size < NGX_MAX_ALLOC_FROM_POOL) ? size : NGX_MAX_ALLOC_FROM_POOL; I know, I have created a pool with size 8K, now I am allocating say 4K (4096) from this pool. I will call ngx_palloc with 4096. There we check if (size <= pool->max) which in this case will not satisfy and it will go and call ngx_palloc_large which inturn will allocate 4K. This somehow is not sounding good. Why is ngx_create_pool putting a max value of page size even when it is allocating more. It is not doing chaining also. Any expert opinions??? Thanks, Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249161,249161#msg-249161 From vbart at nginx.com Wed Apr 9 14:10:06 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Wed, 09 Apr 2014 18:10:06 +0400 Subject: Memory Pool In-Reply-To: <0e446e1eaf7d326e993c5cbfdc4a4a77.NginxMailingListEnglish@forum.nginx.org> References: <5216276.2lzYmrcII5@vbart-laptop> <0e446e1eaf7d326e993c5cbfdc4a4a77.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8369028.rKTJ55MWNJ@vbart-laptop> On Wednesday 09 April 2014 08:22:10 nginxsantos wrote: > Thank you for the reply. > > I know it is simple. But, will we not get more performance benefit if we > create the pools before hand. Say I will create a memory pool for the > connections (for example say with 4000 entries). Everytime I need one, I > will go and get it from that pool and when I free it, I will free that to > the pool. Will not that be more efficient rather than for every connection > and request going and allocating a pool. > > I always feel the run time malloc calls are bad and for every connection and > request are expensive when we handle thousands of connections per seconds. > > Please share your thoughts.... > I think performance benefits will be negligible. There are lightweight web servers like Lighttpd that doesn't use memory pools and known to be pretty fast. Also note, that allocation of connection pool is just a one small allocation, that nginx does during request processing among number of allocations of various buffers. wbr, Valentin V. Bartenev From jefftk at google.com Wed Apr 9 14:51:24 2014 From: jefftk at google.com (Jeff Kaufman) Date: Wed, 9 Apr 2014 10:51:24 -0400 Subject: expected behavior of --with-debug Message-ID: In ngx_pagespeed we interpret --with-debug to mean "include debugging symbols and debug-only assertions". Is this what most people using nginx would expect? (We distribute two precompiled versions of PSOL, "debug" and "release", and we've been using --with-debug to switch between them. Now we're wondering if this is an abuse of that flag, and if we should instead add our own flag to specify which precompiled version to use.) From mdounin at mdounin.ru Wed Apr 9 15:33:18 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Apr 2014 19:33:18 +0400 Subject: expected behavior of --with-debug In-Reply-To: References: Message-ID: <20140409153318.GO34696@mdounin.ru> Hello! On Wed, Apr 09, 2014 at 10:51:24AM -0400, Jeff Kaufman wrote: > In ngx_pagespeed we interpret --with-debug to mean "include debugging > symbols and debug-only assertions". Is this what most people using > nginx would expect? > > (We distribute two precompiled versions of PSOL, "debug" and > "release", and we've been using --with-debug to switch between them. > Now we're wondering if this is an abuse of that flag, and if we should > instead add our own flag to specify which precompiled version to use.) The --with-debug in nginx means "enable debug logging", which includes logging itself and in some cases some additional cleanup/tests to simplify debugging (e.g., ngx_queue_remove() will set prev/next pointers to NULL on the element it removes from queue). Debug symbols are included by default for all supported compilers regardless of configure options. If desired, they can be stripped later using strip(1). Assertions as in assert(3) are not used in nginx. In the event-based server it's not a good idea to call abort(), as it will affect multiple connections handled in the same process. If needed for debugging, there is ngx_debug_point() which can be controlled using the debug_points directive, see http://nginx.org/r/debug_points. 
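Something like this, roughly (the debug level in error_log only produces output with a binary built using ./configure --with-debug):

error_log /var/log/nginx/error.log debug;    # debug-level logging, the main effect of --with-debug
debug_points stop;                           # stop a worker at a debug point so a debugger can be attached

debug_points also accepts "abort" if a core file is preferred instead.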
-- Maxim Dounin http://nginx.org/ From mgg at giagnocavo.net Wed Apr 9 16:32:15 2014 From: mgg at giagnocavo.net (Michael Giagnocavo) Date: Wed, 9 Apr 2014 16:32:15 +0000 Subject: SSL_write failed (32: Broken pipe) while processing SPDY if cache on In-Reply-To: <1925078.nxLANXLgLj@vbart-laptop> References: <686db0e3a7da473dbd9c6f9a0385ffeb@CO1PR07MB331.namprd07.prod.outlook.com> <1925078.nxLANXLgLj@vbart-laptop> Message-ID: <81db9ac066354c1fb006fd3dade0e850@CO1PR07MB331.namprd07.prod.outlook.com> Thanks a ton, that worked. I read a few bug reports but didn't see that one. I'm not sure how our config worked previously since we've been using SPDY and proxy_cache for a month :\. Sincerely, -Michael -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Valentin V. Bartenev Sent: Wednesday, April 9, 2014 1:30 AM To: nginx at nginx.org Subject: Re: SSL_write failed (32: Broken pipe) while processing SPDY if cache on On Wednesday 09 April 2014 06:20:42 Michael Giagnocavo wrote: > We're using nginx 1.5.12 and 1.5.13 on Ubuntu 12.04 LTS via Azure VM. > > Since last night, on both FireFox and Chrome, Windows and OSX, we're having difficulty with SPDY. If the browser cache is warm and proxy_cache is enabled, we see errors like this: > > 2014/04/09 05:26:29 [info] 17859#0: *6 SSL_write() failed (SSL:) (32: Broken pipe) while processing SPDY, client: my.ip, server: 0.0.0.0:443 > > On the client side, for instance in Chrome, we see: > t=1397021193179 [st= 386] SPDY_STREAM_ERROR > --> description = "ABANDONED (stream_id=13): https://bringyourbaygame.com/scripts/vendor/require.js" > --> status = -100 > --> stream_id = 13 > > When this happens the request will be listed as Aborted (Firebug) or ERR_EMPTY_RESPONSE (Chrome) and the SSL_write info line is logged on nginx. No errors are logged. The site never finishes fully loading. Here's more of the Chrome net-internals: http://pastebin.com/gaGxZGBW > > If the browser cache is cleared manually or disabled, the problem goes away. With proxy_cache off, the problem goes away. The cache config is: > > proxy_cache_key $scheme$proxy_host$host$uri$is_args$args; > proxy_temp_path /mnt/proxy_temp 1 2; > proxy_cache_path /mnt/proxy_cache levels=1:2 keys_zone=czone:256m; > proxy_cache_valid any 20s; > proxy_cache_valid 200 5m; > proxy_cache czone; > > Here is debug output from nginx: http://pastebin.com/tnm6PaL6 > > I'm thinking perhaps there is a race condition and the lack of caching fixes it by adding some latency? > > If SPDY or caching is disabled, everything works fine. Things that don't help: disabling SSLv3, disabling gzip, ssl cache on/off, spdy_headers_comp 0/5, removing SNI (delete all but one server block). Yes, we updated OpenSSL and our certificates, but we tried with old certificates and the problem persists. Strangely, proxy_buffering off doesn't help (I thought since it disabled the cache, it'd have the same end effect). > > Site is just reverse proxying; no local resources (there is a Lua script that is not hit, but we removed that and it made no difference). > > How can I further debug this? > You've likely encountered this bug: http://trac.nginx.org/nginx/ticket/428 To confirm this, please try the patch from the ticket. wbr, Valentin V. 
Bartenev _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Apr 9 18:47:17 2014 From: nginx-forum at nginx.us (mex) Date: Wed, 09 Apr 2014 14:47:17 -0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <5345115E.2020408@nginx.com> References: <5345115E.2020408@nginx.com> Message-ID: <1aff61e13bfebacbcf944fcccd67f59a.NginxMailingListEnglish@forum.nginx.org> > Also it's worth to look at the recent nginx blog post regarding > heartbleed: > > http://nginx.com/blog/nginx-and-the-heartbleed-vulnerability/ > thanx for the link maxim, has been incorporated regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249102,249178#msg-249178 From lists at ruby-forum.com Wed Apr 9 22:06:34 2014 From: lists at ruby-forum.com (Shawn Za) Date: Thu, 10 Apr 2014 00:06:34 +0200 Subject: mysql backend Message-ID: Hi. If nginx ties back to a mysql backend on a remote server for IMAP/SMTP, don't I need to add $protocol_ports->{'smtp'}=25; below the $protocol_ports->{'imap'}=143; ? Also I should not be listening to 993 on my actual backend but rather 143 and just enforce TLS and make sure the ssl protocols are used. Im combining both of these: http://wiki.nginx.org/ImapProxyExample http://wiki.nginx.org/ImapAuthenticateWithEmbeddedPerlScript -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Apr 10 06:28:18 2014 From: nginx-forum at nginx.us (honwel) Date: Thu, 10 Apr 2014 02:28:18 -0400 Subject: Parallel subrequests In-Reply-To: <20131226154704.GG95113@mdounin.ru> References: <20131226154704.GG95113@mdounin.ru> Message-ID: how to write a filter module after the postpone filter . if i change module's config file or complie file (auto/) ? any example? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245707,249183#msg-249183 From nginx-forum at nginx.us Thu Apr 10 06:45:56 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Thu, 10 Apr 2014 02:45:56 -0400 Subject: memory pool allocation In-Reply-To: References: Message-ID: Any expert opinions??? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249161,249185#msg-249185 From ru at nginx.com Thu Apr 10 07:25:48 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 10 Apr 2014 11:25:48 +0400 Subject: memory pool allocation In-Reply-To: References: Message-ID: <20140410072548.GM12354@lo0.su> On Wed, Apr 09, 2014 at 08:31:29AM -0400, nginxsantos wrote: > Suppose, I am allocating a pool of greater than 4k(page size). Say for > example I am calling the function ngx_create_pool with 8096. > But, this function will set the max as 4095 even if it has allocated 8K. Not > sure, why is it being done like this. > > > p->max = (size < NGX_MAX_ALLOC_FROM_POOL) ? size : > NGX_MAX_ALLOC_FROM_POOL; > > > I know, I have created a pool with size 8K, now I am allocating say 4K > (4096) from this pool. I will call ngx_palloc with 4096. There we check if > (size <= pool->max) which in this case will not satisfy and it will go and > call ngx_palloc_large which inturn will allocate 4K. > > This somehow is not sounding good. Why is ngx_create_pool putting a max > value of page size even when it is allocating more. It is not doing chaining > also. > > Any expert opinions??? > > Thanks, Santos Hint: allocations not exceeding pool->max are not freed by ngx_pfree() until the pool is destroyed. 
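A rough sketch of how this plays out with the pool API (not code taken from nginx itself; assume log is an ngx_log_t * you already have):

ngx_pool_t *pool = ngx_create_pool(8192, log);  /* pool->max is still capped at one page */

void *small = ngx_palloc(pool, 128);    /* <= pool->max: carved out of the pool's data block */
void *large = ngx_palloc(pool, 8192);   /* > pool->max: allocated separately via ngx_palloc_large() */

ngx_pfree(pool, small);   /* no effect, small blocks stay until the pool is destroyed */
ngx_pfree(pool, large);   /* large blocks sit on pool->large and can be freed early */

ngx_destroy_pool(pool);   /* everything remaining is released here in one go */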
From nginx-forum at nginx.us Thu Apr 10 08:32:42 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Thu, 10 Apr 2014 04:32:42 -0400 Subject: memory pool allocation In-Reply-To: <20140410072548.GM12354@lo0.su> References: <20140410072548.GM12354@lo0.su> Message-ID: <3edfc3cde42ade086beac22dd7e6bdd6.NginxMailingListEnglish@forum.nginx.org> Thank you. But, my question is when we are allocating a pool of more than one page size why are we putting the max value as one page size and then further leading to memory allocation. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249161,249189#msg-249189 From vbart at nginx.com Thu Apr 10 08:35:48 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 10 Apr 2014 12:35:48 +0400 Subject: memory pool allocation In-Reply-To: <3edfc3cde42ade086beac22dd7e6bdd6.NginxMailingListEnglish@forum.nginx.org> References: <20140410072548.GM12354@lo0.su> <3edfc3cde42ade086beac22dd7e6bdd6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3080710.nFjqeEJNnD@vbart-laptop> On Thursday 10 April 2014 04:32:42 nginxsantos wrote: > Thank you. > But, my question is when we are allocating a pool of more than one page size > why are we putting the max value as one page size and then further leading > to memory allocation. > Because there are no advantages in allocating big objects from pool's memory. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Apr 11 11:59:42 2014 From: nginx-forum at nginx.us (Callumpy) Date: Fri, 11 Apr 2014 07:59:42 -0400 Subject: Case insensitive location In-Reply-To: References: Message-ID: <125b7192a8d86f12814f068f295c7285.NginxMailingListEnglish@forum.nginx.org> I'm still having no luck with it. As I said before, when I use location ~* ^/card/, it just 404s all the time unless I disable my cache. Here is my cache code, I have no idea why it does this. # Cache setup. location ~* \.(jpg|jpeg|png|gif|ico|css|xml|js|woff)$ { expires 30d; root /home/site/public; add_header Pragma public; add_header Cache-Control "public, must-revalidate, proxy-revalidate"; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249105,249224#msg-249224 From nginx-forum at nginx.us Fri Apr 11 13:15:15 2014 From: nginx-forum at nginx.us (mex) Date: Fri, 11 Apr 2014 09:15:15 -0400 Subject: Case insensitive location In-Reply-To: <52e56327b95b474e6fa81f409acfd862.NginxMailingListEnglish@forum.nginx.org> References: <52e56327b95b474e6fa81f409acfd862.NginxMailingListEnglish@forum.nginx.org> Message-ID: <30a9f5d493147a804e36f93dfe6284bc.NginxMailingListEnglish@forum.nginx.org> do you have tryfiles enabled? i'd try this to check, if the request reaches the nright location-block location ^~ /card/ { ... access_log /var/log/nginx/cards.log combined; ... } if so, your must look inside your locatiuon, if not, somwhere else regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249105,249226#msg-249226 From vbart at nginx.com Fri Apr 11 13:22:47 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 11 Apr 2014 17:22:47 +0400 Subject: Case insensitive location In-Reply-To: <125b7192a8d86f12814f068f295c7285.NginxMailingListEnglish@forum.nginx.org> References: <125b7192a8d86f12814f068f295c7285.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1688953.il5Gcarvxv@vbart-laptop> On Friday 11 April 2014 07:59:42 Callumpy wrote: > I'm still having no luck with it. > > As I said before, when I use location ~* ^/card/, it just 404s all the time > unless I disable my cache. 
> > Here is my cache code, I have no idea why it does this. > > # Cache setup. > location ~* \.(jpg|jpeg|png|gif|ico|css|xml|js|woff)$ { > expires 30d; > root /home/site/public; > add_header Pragma public; > add_header Cache-Control "public, must-revalidate, > proxy-revalidate"; > } > A quote from the documentation (http://nginx.org/r/location): "Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used." Let me guess, your "Cache setup" comes first? wbr, Valentin V. Bartenev From vbart at nginx.com Fri Apr 11 16:11:14 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 11 Apr 2014 20:11:14 +0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1670864.tNYhlBjsa4@vbart-laptop> "Answering the Critical Question: Can You Get Private SSL Keys Using Heartbleed?" @ http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed wbr, Valentin V. Bartenev From jim at ohlste.in Fri Apr 11 16:34:51 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 11 Apr 2014 12:34:51 -0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <1670864.tNYhlBjsa4@vbart-laptop> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> <1670864.tNYhlBjsa4@vbart-laptop> Message-ID: <534819AB.5040102@ohlste.in> Hello, On 4/11/14, 12:11 PM, Valentin V. Bartenev wrote: > "Answering the Critical Question: Can You Get Private SSL Keys Using Heartbleed?" > @ http://blog.cloudflare.com/answering-the-critical-question-can-you-get-private-ssl-keys-using-heartbleed > Thanks for the link. On a quick read it seems their conclusion is that while it is *extremely* unlikely that your private key(s) was/were stolen using nginx, you should still re-key and revoke. While comforting, not really of any great practical help. Nice that CloudFlare (and no doubt others) received significant advance warning while the rest of us were left vulnerable. Just sayin... -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From e1c1bac6253dc54a1e89ddc046585792 at posteo.net Fri Apr 11 16:40:07 2014 From: e1c1bac6253dc54a1e89ddc046585792 at posteo.net (Philipp) Date: Fri, 11 Apr 2014 18:40:07 +0200 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <534819AB.5040102@ohlste.in> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org> <1670864.tNYhlBjsa4@vbart-laptop> <534819AB.5040102@ohlste.in> Message-ID: <9ca11555a75c0dc46d08a05465b22247@posteo.de> Am 11.04.2014 18:34 schrieb Jim Ohlstein: > Thanks for the link. On a quick read it seems their conclusion is > that while it is *extremely* unlikely that your private key(s) > was/were stolen using nginx, you should still re-key and revoke. While > comforting, not really of any great practical help. Adding info from http://arstechnica.com/security/2014/04/heartbleed-vulnerability-may-have-been-exploited-months-before-patch/ it looks like for tests so far only freebsd/apache2 is a combo where private key data could leak. > Nice that CloudFlare (and no doubt others) received significant > advance warning while the rest of us were left vulnerable. Just > sayin... Really.. 
those with deep pockets get warning "in advance". Blah. From nginx-forum at nginx.us Fri Apr 11 16:45:17 2014 From: nginx-forum at nginx.us (justink101) Date: Fri, 11 Apr 2014 12:45:17 -0400 Subject: Requests being blocked client-side Message-ID: <2836cfe5738681164ed4d1ff7a42533f.NginxMailingListEnglish@forum.nginx.org> I am seeing super strange behavior and I am absolutely stumped. If I open up two tabs in Google Chrome (34), and in the first refresh our application (foo.ourapp.com), which makes an ajax requests (via jQuery) that takes 20 or so seconds to complete. Then in the other new tab hit refresh on (foo.ourapp.com), the second tab blocks waiting until the ajax request on the first tab finishes. Inspecting Chrome developer tools shows: http://cl.ly/image/1s3V353o1v2c However, if I open up Firefox and load up (foo.ourapp.com) while the long running ajax request is firing in Chrome, it loads fine. Thus, I have determined that the client (Chrome) is blocking requests client-side. However, according to http://www.browserscope.org/?category=network it says Chrome supports 6 connections per hostname, which should be fine, as I am only making two requests. Any ideas on this? Pretty standard nginx config, with the following notable exceptions: listen 443 deferred ssl spdy; add_header Strict-Transport-Security max-age=31556926; add_header X-XSS-Protection "1; mode=block"; add_header Access-Control-Allow-Origin $scheme://$account.myapp.com; add_header X-Frame-Options DENY; add_header X-Content-Type-Options nosniff; Running nginx/1.5.12 with SPDY 3.1. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249233,249233#msg-249233 From mdounin at mdounin.ru Fri Apr 11 18:40:41 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 11 Apr 2014 22:40:41 +0400 Subject: Requests being blocked client-side In-Reply-To: <2836cfe5738681164ed4d1ff7a42533f.NginxMailingListEnglish@forum.nginx.org> References: <2836cfe5738681164ed4d1ff7a42533f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140411184041.GS34696@mdounin.ru> Hello! On Fri, Apr 11, 2014 at 12:45:17PM -0400, justink101 wrote: > I am seeing super strange behavior and I am absolutely stumped. If I open up > two tabs in Google Chrome (34), and in the first refresh our application > (foo.ourapp.com), which makes an ajax requests (via jQuery) that takes 20 or > so seconds to complete. Then in the other new tab hit refresh on > (foo.ourapp.com), the second tab blocks waiting until the ajax request on > the first tab finishes. Inspecting Chrome developer tools shows: > > http://cl.ly/image/1s3V353o1v2c > > However, if I open up Firefox and load up (foo.ourapp.com) while the long > running ajax request is firing in Chrome, it loads fine. Thus, I have > determined that the client (Chrome) is blocking requests client-side. > However, according to http://www.browserscope.org/?category=network it says > Chrome supports 6 connections per hostname, which should be fine, as I am > only making two requests. > > Any ideas on this? Pretty standard nginx config, with the following notable > exceptions: > > listen 443 deferred ssl spdy; > > add_header Strict-Transport-Security max-age=31556926; > add_header X-XSS-Protection "1; mode=block"; > add_header Access-Control-Allow-Origin $scheme://$account.myapp.com; > add_header X-Frame-Options DENY; > add_header X-Content-Type-Options nosniff; > > Running nginx/1.5.12 with SPDY 3.1. 
The "6 connections per hostname" is for normal http, not spdy - as spdy specification requires no more than 1 connection per server (as spdy allows to multiplex multiple requests within a single connection). An obvious thing to check is if the problem goes away if you switch off spdy. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Apr 11 18:42:29 2014 From: nginx-forum at nginx.us (Callumpy) Date: Fri, 11 Apr 2014 14:42:29 -0400 Subject: Case insensitive location In-Reply-To: <1688953.il5Gcarvxv@vbart-laptop> References: <1688953.il5Gcarvxv@vbart-laptop> Message-ID: <0657b204143803aab4b8ab1bbd2b37cc.NginxMailingListEnglish@forum.nginx.org> Oh my, what an idiot i've been. Thank you very much for your post! I moved it up above my cache and it works just fine, here is what i've been able to shorten it down to now: location ~* /card/ { rewrite (?i)^/card/([a-zA-Z0-9_+]+)/(.*).png$ /card.php?name=$2&type=$1; expires epoch; } Thank you to everyone who posted to help me, it's greatly appreciated! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249105,249235#msg-249235 From sherlockhugo at gmail.com Fri Apr 11 18:58:44 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Fri, 11 Apr 2014 13:58:44 -0500 Subject: Using variables on nginx. Message-ID: I can Use variables to define IP:Port on upstreams directive? For example: upstream cluster-servers { $server01; $server02; } -- Un abrazo! *Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Apr 11 19:16:44 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 11 Apr 2014 23:16:44 +0400 Subject: Using variables on nginx. In-Reply-To: References: Message-ID: <4568960.x3TvTWbSdt@vbart-laptop> On Friday 11 April 2014 13:58:44 Raul Hugo wrote: > I can Use variables to define IP:Port on upstreams directive? > > For example: > > upstream cluster-servers { > $server01; > $server02; > } > No, you can't. Please also note: http://nginx.org/en/docs/faq/variables_in_config.html wbr, Valentin V. Bartenev From sherlockhugo at gmail.com Fri Apr 11 19:19:39 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Fri, 11 Apr 2014 14:19:39 -0500 Subject: Using variables on nginx. In-Reply-To: <4568960.x3TvTWbSdt@vbart-laptop> References: <4568960.x3TvTWbSdt@vbart-laptop> Message-ID: Ok thx, I test it and really doesnt work. :) 2014-04-11 14:16 GMT-05:00 Valentin V. Bartenev : > On Friday 11 April 2014 13:58:44 Raul Hugo wrote: > > I can Use variables to define IP:Port on upstreams directive? > > > > For example: > > > > upstream cluster-servers { > > $server01; > > $server02; > > } > > > > No, you can't. > > Please also note: > http://nginx.org/en/docs/faq/variables_in_config.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Un abrazo! *Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri Apr 11 20:48:57 2014 From: nginx-forum at nginx.us (allamm78) Date: Fri, 11 Apr 2014 16:48:57 -0400 Subject: undefined symbol: ldap_init_fd Message-ID: I successfully compile Nginx with Nginx-auth-ldap and when I start Nginx , the worker process turn defunct and I see - nginx: worker process: symbol lookup error: nginx: worker process: undefined symbol: ldap_init_fd in the error logs without able to utilize ldap, what could be wrong here? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249240,249240#msg-249240 From patrick at laimbock.com Sat Apr 12 08:08:04 2014 From: patrick at laimbock.com (Patrick Laimbock) Date: Sat, 12 Apr 2014 10:08:04 +0200 Subject: undefined symbol: ldap_init_fd In-Reply-To: References: Message-ID: <5348F464.80307@laimbock.com> On 04/11/2014 10:48 PM, allamm78 wrote: > I successfully compile Nginx with Nginx-auth-ldap and when I start Nginx , > the worker process turn defunct and I see - > > nginx: worker process: symbol lookup error: nginx: worker process: undefined > symbol: ldap_init_fd > > in the error logs without able to utilize ldap, what could be wrong here? I have never seen that error before. The revision that compiles and works fine for me with nginx 1.4.7 and openldap-2.4.39 is this one: https://github.com/kvspb/nginx-auth-ldap/tree/ee45bc4898d70770e06af9fe0a8c0088b4cb9f26 HTH, Patrick From luky-37 at hotmail.com Sat Apr 12 11:14:41 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sat, 12 Apr 2014 13:14:41 +0200 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: <534819AB.5040102@ohlste.in> References: <06f60feec7bf6241a8b09bcc46a036ad.NginxMailingListEnglish@forum.nginx.org>, <1670864.tNYhlBjsa4@vbart-laptop>, <534819AB.5040102@ohlste.in> Message-ID: Hi, > Thanks for the link. On a quick read it seems their conclusion is that > while it is *extremely* unlikely that your private key(s) was/were > stolen using nginx, you should still re-key and revoke. While > comforting, not really of any great practical help. They updated the post, their initial analysis was wrong. Also see: http://blog.cloudflare.com/the-results-of-the-cloudflare-challenge > Nice that CloudFlare (and no doubt others) received significant advance > warning while the rest of us were left vulnerable. Just sayin... They had no choice. They couldn't notify a lot of people about this, it would have been leaked to exploit kits and black hats before OpenSSL provided the bugfix. That would have been a lot worse. Regards, Lukas From nginx-forum at nginx.us Sat Apr 12 16:17:14 2014 From: nginx-forum at nginx.us (reviyou) Date: Sat, 12 Apr 2014 12:17:14 -0400 Subject: SlowFS Cache or Proxy_Cache for GridFS In-Reply-To: <20110223155644.81900@gmx.net> References: <20110223155644.81900@gmx.net> Message-ID: <9af4a39a858ac08e3e8457e9d3015db8.NginxMailingListEnglish@forum.nginx.org> Elena, would you please share your experiences with us? it's been 3 ears and now we (a small startup) want to create a production solution that can scale. Mongo still didn't release asynchronous driver and we want to go down nginx+nodejs script for gridfs (nginx-gridfs proejct also no longer supported). 
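Just to make the idea concrete, the kind of setup we have in mind is a plain
proxy_cache in front of the application route that streams files out of
GridFS; a rough sketch only (the zone name, cache path, URI prefix and
backend address below are made up for illustration, not part of our stack):

    http {
        proxy_cache_path /var/cache/nginx/gridfs levels=1:2
                         keys_zone=gridfs_cache:10m max_size=1g inactive=7d;

        server {
            listen 80;

            # only the route that serves GridFS files goes through the cache
            location /files/ {
                proxy_pass http://127.0.0.1:3000;
                proxy_cache gridfs_cache;
                proxy_cache_key $scheme$host$request_uri;
                proxy_cache_valid 200 7d;
                add_header X-Cache-Status $upstream_cache_status;
            }
        }
    }

That way the app server would only see a request on a cache miss, and
repeated downloads of the same file would be served from disk by nginx.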
I'd love to hear from you if possible, Best, Alex on behalf of Reviyou team Posted at Nginx Forum: http://forum.nginx.org/read.php?2,177259,249246#msg-249246 From roberto at unbit.it Sat Apr 12 16:19:01 2014 From: roberto at unbit.it (Roberto De Ioris) Date: Sat, 12 Apr 2014 18:19:01 +0200 Subject: SlowFS Cache or Proxy_Cache for GridFS In-Reply-To: <9af4a39a858ac08e3e8457e9d3015db8.NginxMailingListEnglish@forum.nginx.org> References: <20110223155644.81900@gmx.net> <9af4a39a858ac08e3e8457e9d3015db8.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Elena, > would you please share your experiences with us? it's been 3 ears and now > we > (a small startup) want to create a production solution that can scale. > > Mongo still didn't release asynchronous driver and we want to go down > nginx+nodejs script for gridfs (nginx-gridfs proejct also no longer > supported). > > I'd love to hear from you if possible, You may be interested in this: http://uwsgi-docs.readthedocs.org/en/latest/GridFS.html -- Roberto De Ioris http://unbit.it From rpaprocki at fearnothingproductions.net Sat Apr 12 23:44:28 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 12 Apr 2014 16:44:28 -0700 Subject: nginx segfaulting with mod_security Message-ID: <5349CFDC.8010909@fearnothingproductions.net> Hello, I have compiled nginx-1.5.13 with modsecurity-2.7.7 and am seeing occasional segfaults when sending requests to the server. mod_security was compiled as a standalone module per the instructions made available at https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_NGINX. The segfaults appear sporadic and do not seem to match up with any given request. Below is my nginx configuration: [root at poseidon src]# nginx -V nginx version: nginx/1.5.13 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-debug --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-g -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m32 -march=i386 -mtune=generic -fasynchronous-unwind-tables -g -O0' --add-module=../modsecurity-apache_2.7.7/nginx/modsecurity/ Also, a backtrace of the core dump: (gdb) bt #0 0x080a1827 in ngx_http_write_filter (r=0x83bb078, in=0x8baaa6c) at src/http/ngx_http_write_filter_module.c:121 #1 0x080bc0d4 in ngx_http_chunked_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_chunked_filter_module.c:111 #2 0x080c462b in ngx_http_gzip_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_gzip_filter_module.c:325 #3 0x080c5fb3 in ngx_http_postpone_filter (r=0x83bb078, in=0x8baaa6c) at 
src/http/ngx_http_postpone_filter_module.c:82 #4 0x080c6581 in ngx_http_ssi_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_ssi_filter_module.c:408 #5 0x080cc021 in ngx_http_charset_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_charset_filter_module.c:553 #6 0x080ce31f in ngx_http_sub_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_sub_filter_module.c:201 #7 0x080cf730 in ngx_http_addition_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_addition_filter_module.c:147 #8 0x080cfc78 in ngx_http_gunzip_body_filter (r=0x83bb078, in=0x8baaa6c) at src/http/modules/ngx_http_gunzip_filter_module.c:184 #9 0x081146bd in ngx_http_modsecurity_body_filter (r=0x83bb078, in=0xbf7ff8b4) at ../modsecurity-apache_2.7.7/nginx/modsecurity//ngx_http_modsecurity.c:1252 #10 0x08055381 in ngx_output_chain (ctx=0x8baa9b8, in=0xbf7ff8b4) at src/core/ngx_output_chain.c:66 #11 0x080a253c in ngx_http_copy_filter (r=0x83bb078, in=0xbf7ff8b4) at src/http/ngx_http_copy_filter_module.c:143 #12 0x080bd477 in ngx_http_range_body_filter (r=0x83bb078, in=0xbf7ff8b4) at src/http/modules/ngx_http_range_filter_module.c:594 #13 0x0808e81e in ngx_http_output_filter (r=0x83bb078, in=0xbf7ff8b4) at src/http/ngx_http_core_module.c:1964 #14 0x0809c72f in ngx_http_send_special (r=0x83bb078, flags=1) at src/http/ngx_http_request.c:3332 #15 0x080b5737 in ngx_http_upstream_finalize_request (r=0x83bb078, u=0x83bbab0, rc=0) at src/http/ngx_http_upstream.c:3551 #16 0x080b4a77 in ngx_http_upstream_process_request (r=0x83bb078) at src/http/ngx_http_upstream.c:3159 #17 0x080b477e in ngx_http_upstream_process_upstream (r=0x83bb078, u=0x83bbab0) at src/http/ngx_http_upstream.c:3090 #18 0x080b329a in ngx_http_upstream_send_response (r=0x83bb078, u=0x83bbab0) at src/http/ngx_http_upstream.c:2493 #19 0x080b1937 in ngx_http_upstream_process_header (r=0x83bb078, u=0x83bbab0) at src/http/ngx_http_upstream.c:1735 #20 0x080b02ef in ngx_http_upstream_handler (ev=0x8b31f5c) at src/http/ngx_http_upstream.c:977 #21 0x080726fd in ngx_event_process_posted (cycle=0x83b45a8, posted=0x81c495c) at src/event/ngx_event_posted.c:40 #22 0x080708c2 in ngx_process_events_and_timers (cycle=0x83b45a8) at src/event/ngx_event.c:275 #23 0x0807c629 in ngx_worker_process_cycle (cycle=0x83b45a8, data=0x0) at src/os/unix/ngx_process_cycle.c:816 #24 0x080795a4 in ngx_spawn_process (cycle=0x83b45a8, proc=0x807c48e , data=0x0, name=0x815e33b "worker process", respawn=-3) at src/os/unix/ngx_process.c:198 #25 0x0807b720 in ngx_start_worker_processes (cycle=0x83b45a8, n=2, type=-3) at src/os/unix/ngx_process_cycle.c:364 #26 0x0807aecf in ngx_master_process_cycle (cycle=0x83b45a8) at src/os/unix/ngx_process_cycle.c:136 #27 0x080500c5 in main (argc=3, argv=0xbf7ffe54) at src/core/nginx.c:407 Unfortunately I am not skilled at reading c backtraces. 
I was going to attach the debug log but it's very large and I don't want to make thi message much larger :p Below is my nginx coniguration: user nginx; worker_processes 2; error_log /var/log/nginx/error.log debug; pid /var/run/nginx.pid; worker_rlimit_core 500M; working_directory /tmp; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; include /etc/nginx/conf.d/*.conf; fastcgi_buffers 256 4k; client_max_body_size 64m; #client_body_buffer_size 16m; server_tokens off; } server { listen 23.226.226.175:80; server_name cryptobells.com www.cryptobells.com; root /var/www/cryptobells; rewrite ^ https://$server_name$request_uri? permanent; location / { index index.php index.html index.htm; try_files $uri $uri/ /index.php?$args; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~* \.php$ { fastcgi_index index.php; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } } server { listen 23.226.226.175:443 ssl; server_name cryptobells.com www.cryptobells.com; ssl_certificate /etc/ssl/certs/cryptobells.com.crt; ssl_certificate_key /etc/ssl/certs/cryptobells.com.key; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_ciphers ECDHE-RSA-AES128-SHA256:AES128-GCM-SHA256:RC4:HIGH:!MD5:!aNULL:!EDH; ssl_prefer_server_ciphers on; root /var/www/cryptobells; ModSecurityEnabled on; ModSecurityConfig /etc/modsecurity/modsecurity.conf; location / { index index.php index.html index.htm; try_files $uri $uri/ /index.php?$args; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~* \.php$ { fastcgi_index index.php; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; } Please let me know if anyone is able to help identify what could be causing segfaults, ro if there is any more information I can provide. Thank you! From nginx at shimi.net Sun Apr 13 08:27:17 2014 From: nginx at shimi.net (shimi) Date: Sun, 13 Apr 2014 11:27:17 +0300 Subject: Issue with OCSP stapling when server certificate has been revoked by CA Message-ID: Hi, I'm contacting the list after doing some Google-foo and not finding anything - not sure if this is due to my searching skills, or because nobody ever asked about this... pardon me if it's a known issue, and a link to a relevant resource would be appreciated in such a case. I'm using Nginx as a reverse HTTP proxy to Tomcat, primarily for the purpose of doing OCSP stapling. When Nginx starts for the first time, and there's no cached OCSP response, the first client to try an OCSP will fail; I understand that this is by design, and I've overcome it by simply 'warming' the cached manually by using OpenSSL's s_client... 
of course I'll be happy to learn there's a way to make Nginx block and get OCSP response if there's a cache miss (I understand that blocking every time in case of OCSP server being down won't help performance much, but I guess cache can be negative in such a case, instead of a miss, and maybe this is already the case...) Anyways, that's not the main issue I have. The main issue I have is that when a revoked certificate is being used by Nginx, and an OCSP is being conducted against the server port where this certificate is served. Watching the packets arriving from ocsp.digicert.com via Wireshark, I see the OCSP response saying that the certificate is revoked (so, Nginx seems to be querying the OCSP server fine?), and I also see this in Nginx's error log: 2014/04/07 17:44:41 [error] 27005#0: certificate status "revoked" in the OCSP response while requesting certificate status, responder: ocsp.digicert.com Yet, the OpenSSL s_client, even after multiple attempts (so the cache should be "warm"), returns that no OCSP response was returned from the server... Naturally, I would expect the response to be proxied by Nginx back to the client. What am I missing / doing wrong? :) Thanks a lot! -- Shimi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Apr 13 10:17:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Apr 2014 14:17:47 +0400 Subject: nginx segfaulting with mod_security In-Reply-To: <5349CFDC.8010909@fearnothingproductions.net> References: <5349CFDC.8010909@fearnothingproductions.net> Message-ID: <20140413101747.GT34696@mdounin.ru> Hello! On Sat, Apr 12, 2014 at 04:44:28PM -0700, Robert Paprocki wrote: > Hello, > > I have compiled nginx-1.5.13 with modsecurity-2.7.7 and am seeing > occasional segfaults when sending requests to the server. mod_security > was compiled as a standalone module per the instructions made available > at > https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_NGINX. > The segfaults appear sporadic and do not seem to match up with any given > request. Below is my nginx configuration: [...] > Also, a backtrace of the core dump: > (gdb) bt > #0 0x080a1827 in ngx_http_write_filter (r=0x83bb078, in=0x8baaa6c) at > src/http/ngx_http_write_filter_module.c:121 This points to the following code line: cl->buf = ln->buf; That is, dereferencing ln->buf fails, which may only happen if the buffer chain ("in" argument) is broken. [...] > #8 0x080cfc78 in ngx_http_gunzip_body_filter (r=0x83bb078, in=0x8baaa6c) > at src/http/modules/ngx_http_gunzip_filter_module.c:184 > #9 0x081146bd in ngx_http_modsecurity_body_filter (r=0x83bb078, > in=0xbf7ff8b4) > at > ../modsecurity-apache_2.7.7/nginx/modsecurity//ngx_http_modsecurity.c:1252 > #10 0x08055381 in ngx_output_chain (ctx=0x8baa9b8, in=0xbf7ff8b4) at > src/core/ngx_output_chain.c:66 And this clearly shows that the buffer chain was chaned by mod_security output body filter. Note "in" argument of mod_security ("in=0xbf7ff8b4") and gunzip filter which follows it ("in=0x8baaa6c"). That is, from the backtrace it looks like mod_security changed the buffer chain and did it wrong, with a segfault as a result. 
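If you want to confirm that on your side, the simplest test is to take the
ModSecurity filter out of the request path for the affected vhost (or
rebuild without --add-module=.../modsecurity) and see whether the segfaults
stop. A sketch against the config quoted above, with the directives
commented out rather than relying on SecRuleEngine off:

    server {
        listen 23.226.226.175:443 ssl;
        server_name cryptobells.com www.cryptobells.com;

        # commenting these out removes the ModSecurity body filter entirely,
        # which is a stronger test than setting SecRuleEngine off in
        # modsecurity.conf
        #ModSecurityEnabled on;
        #ModSecurityConfig /etc/modsecurity/modsecurity.conf;

        # ssl_*, root and the php-fpm location stay as in the original config
    }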
-- Maxim Dounin http://nginx.org/ From punyahenry at gmail.com Sun Apr 13 10:28:29 2014 From: punyahenry at gmail.com (Henry Suhatman) Date: Sun, 13 Apr 2014 17:28:29 +0700 Subject: nginx segfaulting with mod_security In-Reply-To: <20140413101747.GT34696@mdounin.ru> References: <5349CFDC.8010909@fearnothingproductions.net> <20140413101747.GT34696@mdounin.ru> Message-ID: B. On Apr 13, 2014 5:18 PM, "Maxim Dounin" wrote: > Hello! > > On Sat, Apr 12, 2014 at 04:44:28PM -0700, Robert Paprocki wrote: > > > Hello, > > > > I have compiled nginx-1.5.13 with modsecurity-2.7.7 and am seeing > > occasional segfaults when sending requests to the server. mod_security > > was compiled as a standalone module per the instructions made available > > at > > > https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_NGINX > . > > The segfaults appear sporadic and do not seem to match up with any given > > request. Below is my nginx configuration: > > [...] > > > Also, a backtrace of the core dump: > > (gdb) bt > > #0 0x080a1827 in ngx_http_write_filter (r=0x83bb078, in=0x8baaa6c) at > > src/http/ngx_http_write_filter_module.c:121 > > This points to the following code line: > > cl->buf = ln->buf; > > That is, dereferencing ln->buf fails, which may only happen if the > buffer chain ("in" argument) is broken. > > [...] > > > #8 0x080cfc78 in ngx_http_gunzip_body_filter (r=0x83bb078, in=0x8baaa6c) > > at src/http/modules/ngx_http_gunzip_filter_module.c:184 > > #9 0x081146bd in ngx_http_modsecurity_body_filter (r=0x83bb078, > > in=0xbf7ff8b4) > > at > > > ../modsecurity-apache_2.7.7/nginx/modsecurity//ngx_http_modsecurity.c:1252 > > #10 0x08055381 in ngx_output_chain (ctx=0x8baa9b8, in=0xbf7ff8b4) at > > src/core/ngx_output_chain.c:66 > > And this clearly shows that the buffer chain was chaned by > mod_security output body filter. Note "in" argument of > mod_security ("in=0xbf7ff8b4") and gunzip filter which follows it > ("in=0x8baaa6c"). > > That is, from the backtrace it looks like mod_security changed the > buffer chain and did it wrong, with a segfault as a result. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Apr 13 10:39:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Apr 2014 14:39:25 +0400 Subject: Issue with OCSP stapling when server certificate has been revoked by CA In-Reply-To: References: Message-ID: <20140413103925.GU34696@mdounin.ru> Hello! On Sun, Apr 13, 2014 at 11:27:17AM +0300, shimi wrote: > Hi, > > I'm contacting the list after doing some Google-foo and not finding > anything - not sure if this is due to my searching skills, or because > nobody ever asked about this... pardon me if it's a known issue, and a link > to a relevant resource would be appreciated in such a case. > > I'm using Nginx as a reverse HTTP proxy to Tomcat, primarily for the > purpose of doing OCSP stapling. > > When Nginx starts for the first time, and there's no cached OCSP response, > the first client to try an OCSP will fail; I understand that this is by > design, and I've overcome it by simply 'warming' the cached manually by > using OpenSSL's s_client... 
of course I'll be happy to learn there's a way > to make Nginx block and get OCSP response if there's a cache miss (I > understand that blocking every time in case of OCSP server being down won't > help performance much, but I guess cache can be negative in such a case, > instead of a miss, and maybe this is already the case...) > > Anyways, that's not the main issue I have. > > The main issue I have is that when a revoked certificate is being used by > Nginx, and an OCSP is being conducted against the server port where this > certificate is served. > > Watching the packets arriving from ocsp.digicert.com via Wireshark, I see > the OCSP response saying that the certificate is revoked (so, Nginx seems > to be querying the OCSP server fine?), and I also see this in Nginx's error > log: > > 2014/04/07 17:44:41 [error] 27005#0: certificate status "revoked" in the > OCSP response while requesting certificate status, responder: > ocsp.digicert.com > > Yet, the OpenSSL s_client, even after multiple attempts (so the cache > should be "warm"), returns that no OCSP response was returned from the > server... > > Naturally, I would expect the response to be proxied by Nginx back to the > client. > > What am I missing / doing wrong? :) As long as no good OCSP response is received, nginx will not staple anything as it doesn't make sense (moreover, it may be harmful, e.g. if the response isn't verified). -- Maxim Dounin http://nginx.org/ From nginx at shimi.net Sun Apr 13 10:55:24 2014 From: nginx at shimi.net (shimi) Date: Sun, 13 Apr 2014 13:55:24 +0300 Subject: Issue with OCSP stapling when server certificate has been revoked by CA In-Reply-To: <20140413103925.GU34696@mdounin.ru> References: <20140413103925.GU34696@mdounin.ru> Message-ID: On Sun, Apr 13, 2014 at 1:39 PM, Maxim Dounin wrote: Hello! > > As long as no good OCSP response is received, nginx will not > staple anything as it doesn't make sense (moreover, it may be > harmful, e.g. if the response isn't verified). > > > Hello! Thank you for your answer. So I understand this is a deliberate behavior by nginx and not a bug. Followup question, then, if I may: By "good", do you mean "positive"? i.e. "we have verified that the certificate is OK and valid"? I'm not sure I understand why is it good idea not to tell the client that the certificate is known and has been revoked... the purpose (as I understand OCSP stapling) is to verify the cert is OK. Wouldn't returning no-response to a client might cause it to think it may be an intermittent issue with accessing OCSP, and thus "soft-fail" and trust the (revoked) cert "for now" until a proper response can be obtained? And if that is the case, wouldn't passing the negative response from the OCSP server immediately tell the client that something is fishy? (i.e. someone is MITM'ing the innocent user with a cert using a stolen key that was revoked by the real owner? The recent heartbleed bug is an excellent example...). Sounds like a security issue to me, but again, I may be missing something? Let's say I want to proxy the response despite it being possibly harmful (in a way I do not yet understand :) ) - is that something straightforward as removing an 'if (revoked)' from somewhere in the source code, or would I need to hire some Nginx code expert to change this behavior? By the way, if it's actually the spec (RFC) that says that you're not supposed to staple such responses, I'm also very fine with that. But if not, it would sound weird to me that Nginx decides to handle them in a special way.... 
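For reference, the stapling-related part of a typical configuration looks
roughly like this (certificate paths and the resolver address are
placeholders, not taken from the setup discussed here):

    server {
        listen 443 ssl;
        server_name example.com;

        ssl_certificate         /etc/ssl/certs/example.com.crt;
        ssl_certificate_key     /etc/ssl/private/example.com.key;

        # staple OCSP responses and verify them against the CA chain
        ssl_stapling            on;
        ssl_stapling_verify     on;
        ssl_trusted_certificate /etc/ssl/certs/ca-chain.pem;

        # nginx needs a resolver to reach the OCSP responder
        resolver 8.8.8.8 valid=300s;
        resolver_timeout 5s;
    }

Even with all of this in place, nginx only staples a response whose status
is "good"; a "revoked" response is logged (as in the error above) but
nothing is stapled to the handshake.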
Thanks again, -- Shimi -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun Apr 13 15:11:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 13 Apr 2014 19:11:31 +0400 Subject: Issue with OCSP stapling when server certificate has been revoked by CA In-Reply-To: References: <20140413103925.GU34696@mdounin.ru> Message-ID: <20140413151131.GV34696@mdounin.ru> Hello! On Sun, Apr 13, 2014 at 01:55:24PM +0300, shimi wrote: > On Sun, Apr 13, 2014 at 1:39 PM, Maxim Dounin wrote: > > Hello! > > > > As long as no good OCSP response is received, nginx will not > > staple anything as it doesn't make sense (moreover, it may be > > harmful, e.g. if the response isn't verified). > > > > > > > Hello! > > Thank you for your answer. So I understand this is a deliberate behavior by > nginx and not a bug. > > Followup question, then, if I may: > > By "good", do you mean "positive"? i.e. "we have verified that the > certificate is OK and valid"? I mean "good" as specified here: http://tools.ietf.org/html/rfc6960#section-2.2 > I'm not sure I understand why is it good idea not to tell the client that > the certificate is known and has been revoked... the purpose (as I > understand OCSP stapling) is to verify the cert is OK. Wouldn't returning > no-response to a client might cause it to think it may be an intermittent > issue with accessing OCSP, and thus "soft-fail" and trust the (revoked) > cert "for now" until a proper response can be obtained? And if that is the > case, wouldn't passing the negative response from the OCSP server > immediately tell the client that something is fishy? (i.e. someone is > MITM'ing the innocent user with a cert using a stolen key that was revoked > by the real owner? The recent heartbleed bug is an excellent example...). > Sounds like a security issue to me, but again, I may be missing something? An attacker can and will do the same. And nginx behaviour does not limit an attacker in any way. -- Maxim Dounin http://nginx.org/ From nginx at shimi.net Sun Apr 13 16:00:24 2014 From: nginx at shimi.net (shimi) Date: Sun, 13 Apr 2014 19:00:24 +0300 Subject: Issue with OCSP stapling when server certificate has been revoked by CA In-Reply-To: <20140413151131.GV34696@mdounin.ru> References: <20140413103925.GU34696@mdounin.ru> <20140413151131.GV34696@mdounin.ru> Message-ID: On Sun, Apr 13, 2014 at 6:11 PM, Maxim Dounin wrote: > Hello! > > On Sun, Apr 13, 2014 at 01:55:24PM +0300, shimi wrote: > > > On Sun, Apr 13, 2014 at 1:39 PM, Maxim Dounin > wrote: > > > > Hello! > > > > > > As long as no good OCSP response is received, nginx will not > > > staple anything as it doesn't make sense (moreover, it may be > > > harmful, e.g. if the response isn't verified). > > > > > > > > > > > Hello! > > > > Thank you for your answer. So I understand this is a deliberate behavior > by > > nginx and not a bug. > > > > Followup question, then, if I may: > > > > By "good", do you mean "positive"? i.e. "we have verified that the > > certificate is OK and valid"? > > I mean "good" as specified here: > > http://tools.ietf.org/html/rfc6960#section-2.2 > > > I'm not sure I understand why is it good idea not to tell the client that > > the certificate is known and has been revoked... the purpose (as I > > understand OCSP stapling) is to verify the cert is OK. 
Wouldn't returning > > no-response to a client might cause it to think it may be an intermittent > > issue with accessing OCSP, and thus "soft-fail" and trust the (revoked) > > cert "for now" until a proper response can be obtained? And if that is > the > > case, wouldn't passing the negative response from the OCSP server > > immediately tell the client that something is fishy? (i.e. someone is > > MITM'ing the innocent user with a cert using a stolen key that was > revoked > > by the real owner? The recent heartbleed bug is an excellent example...). > > Sounds like a security issue to me, but again, I may be missing > something? > > An attacker can and will do the same. And nginx behaviour does > not limit an attacker in any way. > Good point! I must be tired for having raised that scenario to begin with :-) Thanks again for all your answers! -- Shimi -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Apr 13 19:25:35 2014 From: nginx-forum at nginx.us (cybermass) Date: Sun, 13 Apr 2014 15:25:35 -0400 Subject: Problems with PHP authentication imap/smtp proxy Message-ID: <8f5432ee9f96027db19879b93bd7b9cd.NginxMailingListEnglish@forum.nginx.org> I managed to write my php auth script but still having problems authenticating. Also this is what I see in the logs: [error] 22014#0: *3234 recv() failed (111: Connection refused) while in http auth state, client: back.end.ip server: 0.0.0.0:993, login: "user at domain.com" Also do I call this script with the following auth_http line? I never see anything listening on 9000. Where is this 9000 coming from? I just see everyone using it: auth_http 127.0.0.1:9000/mail/auth.php; ------------ auth.php setAttribute(PDO::ATTR_ERRMODE, PDO::ERRMODE_EXCEPTION); $db->setAttribute(PDO::ATTR_EMULATE_PREPARES, false); if (!isset($_SERVER["HTTP_AUTH_USER"] ) || !isset($_SERVER["HTTP_AUTH_PASS"] )) { fail(); } $username = $_SERVER["HTTP_AUTH_USER"] ; $userpass = $_SERVER["HTTP_AUTH_PASS"] ; $protocol = $_SERVER["HTTP_AUTH_PROTOCOL"] ; $backend_port = ""; if ($protocol == "imap") { $backend_port = 993; } if ($protocol == "smtp") { $backend_port = 25; } // nginx likes ip address so if your // application gives back hostname, convert it to ip address here $backend_ip = "back.end.ip"; // Authenticate the user or fail if (!authuser($username,$userpass)) { fail(); exit; } // Get the server for this user if we have reached so far $userserver = getmailserver($username); // Get the ip address of the server // We are assuming that your backend returns hostname // We try to get the ip else return what we got back $server_ip = (isset($backend_ip[$userserver]))?$backend_ip[$userserver] :$userserver; // Pass! 
pass($server_ip, $backend_port); //END function authuser($user,$pass) { global $db; $stmt = $db->prepare("SELECT password FROM users WHERE email=:email LIMIT 1"); $stmt->bindValue(':email',$username,PDO::PARAM_STR); $stmt->execute(); $dbpass = $stmt->fetchColumn(); return ($dbpass === $pass); } function getmailserver($user) { return $backend_ip; } } function fail(){ header("Auth-Status: Invalid login or password"); exit; } function pass($server,$port) { header("Auth-Status: OK"); header("Auth-Server: $server"); header("Auth-Port: $port"); exit; } ?> ======================================================== nginx.conf (my http section is fine as I use it for my backend apache) mail { server_name mx1.domain.com; #auth_http unix:/path/socket:/cgi-bin/auth; auth_http 127.0.0.1:9000/mail/auth.php; proxy on; ssl_prefer_server_ciphers on; ssl_protocols TLSv1 SSLv3; ssl_ciphers HIGH:!ADH:!MD5:@STRENGTH; ssl_session_cache shared:TLSSL:16m; ssl_session_timeout 10m; ssl_certificate ssl/ug-mail.crt; ssl_certificate_key ssl/private/ug-mail.key; imap_capabilities "IMAP4rev1 UIDPLUS"; smtp_capabilities "PIPELINING 8BITMIME DSN"; # smtp_auth plain login; # imap_auth plain login; server { listen 25; protocol smtp; timeout 120000; } server { listen 8825; protocol smtp; starttls on; } server { listen 993; protocol imap; ssl on; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249256,249256#msg-249256 From lists at ruby-forum.com Sun Apr 13 23:02:12 2014 From: lists at ruby-forum.com (Shawn Za) Date: Mon, 14 Apr 2014 01:02:12 +0200 Subject: IMAP: auth_http In-Reply-To: <20130311112948.GU15378@mdounin.ru> References: <20130310164031.GP15378@mdounin.ru> <20130311112948.GU15378@mdounin.ru> Message-ID: <2b03863940188cb987cd36576c52eb10@ruby-forum.com> Does this location snippet go inside the mail directive or outside of it in nginx.conf? I only have 1 backend, I am using it to proxy for http but also need imap/smtp proxy for it. When i use location inside the mail directive and reload nginx, it throws an error Maxim Dounin wrote in post #1101072: > Hello! > > On Sun, Mar 10, 2013 at 02:43:11PM -0700, Grant wrote: > >> > >> > http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript >> >> In that case I request for nginx's imap proxy to function more like >> imapproxy which is easier to set up. > > The goal of nginx imap proxy is to route client's connections to > different backends, which is very different from what imapproxy > does. It's more like a perdition. > > If you want nginx to just proxy all connections to a predefined > backend server, you may use something like > > location = /mailauth { > add_header Auth-Status OK; > add_header Auth-Server 127.0.0.1; > add_header Auth-Port 8143; > add_header Auth-Wait 1; > return 204; > } > > as a dummy auth script. > > -- > Maxim Dounin > http://nginx.org/en/donation.html -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Mon Apr 14 01:37:05 2014 From: lists at ruby-forum.com (Shawn Za) Date: Mon, 14 Apr 2014 03:37:05 +0200 Subject: nginx imaps auth_http dovecot In-Reply-To: <20110430105835.GC42265@mdounin.ru> References: <4D73FB1E.3090104@alokat.org> <3f29c76ca3fec62ea3f4f3423b70de4e.NginxMailingListEnglish@forum.nginx.org> <20110430105835.GC42265@mdounin.ru> Message-ID: Does this mean that from the nginx proxy to the backend, the passwords will fly through the internet wide open if the backend is a remote machine? Maxim Dounin wrote in post #995934: > 2. 
SSL backends isn't supported by nginx mail proxy, you need > non-ssl backend and direct nginx to it. > > BTW, looking into error_log usually helps a lot. > > Maxim Dounin -- Posted via http://www.ruby-forum.com/. From rpaprocki at fearnothingproductions.net Mon Apr 14 03:42:04 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sun, 13 Apr 2014 20:42:04 -0700 Subject: nginx segfaulting with mod_security In-Reply-To: <20140413101747.GT34696@mdounin.ru> References: <5349CFDC.8010909@fearnothingproductions.net> <20140413101747.GT34696@mdounin.ru> Message-ID: <534B590C.3090406@fearnothingproductions.net> Hi Maxim! Thank you for your response, always nice to actually hear back from someone knowledgeable. Once thing I had noticed while looking at backtraces (coredumps seem to indicate segfaults occurring in a number of places, not just filter_module.c:121) was that every bt seemed to include gzip modules as well. i disabled gzip in both my server and http sections but this did not stop the segfaults (which is interesting, but not entirely surprising, given that even when I had set SecRuleEngine to off in my modsecurity.conf file, segfaults still occured... so even the mere presence of ModSecurityEnabled in the nginx configuration was leading to a break). I have since recompiled nginx without both gunzip module and gzip static module, and have not yet gotten any segfaults. i will continue to research this and update if I have any more information On 04/13/2014 03:17 AM, Maxim Dounin wrote: > Hello! > > On Sat, Apr 12, 2014 at 04:44:28PM -0700, Robert Paprocki wrote: > >> Hello, >> >> I have compiled nginx-1.5.13 with modsecurity-2.7.7 and am seeing >> occasional segfaults when sending requests to the server. mod_security >> was compiled as a standalone module per the instructions made available >> at >> https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_NGINX. >> The segfaults appear sporadic and do not seem to match up with any given >> request. Below is my nginx configuration: > > [...] > >> Also, a backtrace of the core dump: >> (gdb) bt >> #0 0x080a1827 in ngx_http_write_filter (r=0x83bb078, in=0x8baaa6c) at >> src/http/ngx_http_write_filter_module.c:121 > > This points to the following code line: > > cl->buf = ln->buf; > > That is, dereferencing ln->buf fails, which may only happen if the > buffer chain ("in" argument) is broken. > > [...] > >> #8 0x080cfc78 in ngx_http_gunzip_body_filter (r=0x83bb078, in=0x8baaa6c) >> at src/http/modules/ngx_http_gunzip_filter_module.c:184 >> #9 0x081146bd in ngx_http_modsecurity_body_filter (r=0x83bb078, >> in=0xbf7ff8b4) >> at >> ../modsecurity-apache_2.7.7/nginx/modsecurity//ngx_http_modsecurity.c:1252 >> #10 0x08055381 in ngx_output_chain (ctx=0x8baa9b8, in=0xbf7ff8b4) at >> src/core/ngx_output_chain.c:66 > > And this clearly shows that the buffer chain was chaned by > mod_security output body filter. Note "in" argument of > mod_security ("in=0xbf7ff8b4") and gunzip filter which follows it > ("in=0x8baaa6c"). > > That is, from the backtrace it looks like mod_security changed the > buffer chain and did it wrong, with a segfault as a result. 
> From nginx-forum at nginx.us Mon Apr 14 04:38:45 2014 From: nginx-forum at nginx.us (reviyou) Date: Mon, 14 Apr 2014 00:38:45 -0400 Subject: SlowFS Cache or Proxy_Cache for GridFS In-Reply-To: References: Message-ID: Thanks Roberto, since we are not using python in our stack I just realized that we can keep async script\route as part of our play application and only add proxy_cache nginx configuration for this route. I think we can even leave it without the same app server and application, would be simpler to maintain and we can always scale it by adding a few more app servers just for image processing if needed. thanks, Alex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,177259,249262#msg-249262 From makailol7 at gmail.com Mon Apr 14 05:12:43 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 14 Apr 2014 10:42:43 +0530 Subject: the http output chain is empty. Message-ID: Hello! I have been getting few of this error in the Nginx log file. Could anyone have Idea what could be the reason for this error and how to fix it? Nginx version is 1.5.12 . Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Mon Apr 14 07:13:14 2014 From: lists at ruby-forum.com (Shawn Za) Date: Mon, 14 Apr 2014 09:13:14 +0200 Subject: IMAP: auth_http In-Reply-To: <2b03863940188cb987cd36576c52eb10@ruby-forum.com> References: <20130310164031.GP15378@mdounin.ru> <20130311112948.GU15378@mdounin.ru> <2b03863940188cb987cd36576c52eb10@ruby-forum.com> Message-ID: <4e937587c4ab9e721d796ca420006812@ruby-forum.com> This dummy auth script has been the ONLY way I can get my imap or smtp proxy working! The problem is that I can only have either imap or smtp. The block below works great, I just put my backend server (remote location) instead of 127.0.0.1, and my Auth-Port is 143. I see that nginx accepts SSL or starttls but the backend must run 143 and no SSL. Took forever to get that working. Is there a way I can make it so that I have some kind of condition running based on imap or smtp protocol? I tried writing a php script but everytime I call it with auth_http it does not work. Don't even ask why, I tried so many ways. There must be a way without having to call my database directly. The dummy auth script worked wonders but as mentioned, I need both imap and smtp working and the one below is just for imap. Please help me get this working! >> location = /mailauth { >> add_header Auth-Status OK; >> add_header Auth-Server 127.0.0.1; >> add_header Auth-Port 8143; >> add_header Auth-Wait 1; >> return 204; >> } >> >> as a dummy auth script. >> >> -- >> Maxim Dounin >> http://nginx.org/en/donation.html -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Mon Apr 14 09:03:32 2014 From: nginx-forum at nginx.us (arnas) Date: Mon, 14 Apr 2014 05:03:32 -0400 Subject: _GET parameters with question and ampersand Message-ID: <49dba52c8c1a71a8800f26993ce62fee.NginxMailingListEnglish@forum.nginx.org> Hi, I am wondering how do I solve parameters being passed correctly after switching from apache. 
http://app.mailerlite2.com/test/test_get/?labas=testarg NOT passing http://app.mailerlite2.com/test/test_get/&labas=testarg PASS # 1 server, proxy passes to internal server #2 server { listen X.X.X.X:80; server_name _; location / { proxy_pass http://192.168.53.30; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; } } # 2 server server { listen 192.168.53.30:80; server_name ~^(?P.+)\.mailerlite2.com; root /var/www/html/mailerlite2/public_html; index index.php; try_files $uri $uri/ $uri/index.php /index.php; access_log /var/log/nginx/access.log; client_max_body_size 300M; location ~* /data/emails/ { root /var/www/html; expires 7d; access_log off; add_header Cache-Control "public"; location ~* \.php$ {return 403;} } location ~* "^/data/custom_html" { root /var/www/html; location ~* \.php$ {return 403;} } location ~ \.php$ { try_files $uri =404; include fastcgi.conf; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME document_root$fastcgi_script_name; include fastcgi_params; } location ~ /\. { deny all; } location / { # Directives to send expires headers and turn off 404 error logging. location ~* ^.+\.(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { root /var/www/html/mailerlite2/public_html; expires max; access_log off; log_not_found off; add_header Cache-Control "public"; } location ~* ^.+\.(css|js)$ { root /var/www/html/mailerlite2/public_html; expires 72h; access_log off; add_header Cache-Control "public"; } } http://p.ngx.cc/170041499f94175e Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249267,249267#msg-249267 From nginx-forum at nginx.us Mon Apr 14 09:58:27 2014 From: nginx-forum at nginx.us (mex) Date: Mon, 14 Apr 2014 05:58:27 -0400 Subject: nginx + alternate ssl-libs Message-ID: <8014a2e2719b9e196cebc950ecefee70.NginxMailingListEnglish@forum.nginx.org> i'm seen the question below on nginx-dev from september last year, http://forum.nginx.org/read.php?29,243031,243031#msg-243031 I've seen some attempts to use polarssl one year ago and would like to restart delevopment in that direction, so i'd like to re-issue this question from Aleksandar Lazic: ------------------------------------------------ Are there any plans to add another SSL-Library into nginx? [ ] axtls http://axtls.sourceforge.net/ [ ] cyassl http://www.wolfssl.com/yaSSL/Home.html [ ] gnutls http://www.gnutls.org/ [ ] polarssl https://polarssl.org/ [ ] other: ... ------------------------------------------------ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249269,249269#msg-249269 From mdounin at mdounin.ru Mon Apr 14 10:44:22 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Apr 2014 14:44:22 +0400 Subject: nginx segfaulting with mod_security In-Reply-To: <534B590C.3090406@fearnothingproductions.net> References: <5349CFDC.8010909@fearnothingproductions.net> <20140413101747.GT34696@mdounin.ru> <534B590C.3090406@fearnothingproductions.net> Message-ID: <20140414104422.GW34696@mdounin.ru> Hello! On Sun, Apr 13, 2014 at 08:42:04PM -0700, Robert Paprocki wrote: > Hi Maxim! > > Thank you for your response, always nice to actually hear back from > someone knowledgeable. 
Once thing I had noticed while looking at > backtraces (coredumps seem to indicate segfaults occurring in a number > of places, not just filter_module.c:121) was that every bt seemed to > include gzip modules as well. i disabled gzip in both my server and http > sections but this did not stop the segfaults (which is interesting, but All filters, including gzip, are expected to be in backtraces, as they are always called. Depending on configuration, they either do something or not. > not entirely surprising, given that even when I had set SecRuleEngine to > off in my modsecurity.conf file, segfaults still occured... so even the > mere presence of ModSecurityEnabled in the nginx configuration was > leading to a break). > > I have since recompiled nginx without both gunzip module and gzip static > module, and have not yet gotten any segfaults. i will continue to > research this and update if I have any more information As per backtrace, it's more or less obvious that the problem is in modsecurity. Compiling out other modules may result in less frequent segfaults, but it won't fix the bug in modsecurity. If you want to actually fix the problem, you should either switch off / compile out modsecurity, or find and fix the bug in it. > > On 04/13/2014 03:17 AM, Maxim Dounin wrote: > > Hello! > > > > On Sat, Apr 12, 2014 at 04:44:28PM -0700, Robert Paprocki wrote: > > > >> Hello, > >> > >> I have compiled nginx-1.5.13 with modsecurity-2.7.7 and am seeing > >> occasional segfaults when sending requests to the server. mod_security > >> was compiled as a standalone module per the instructions made available > >> at > >> https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_NGINX. > >> The segfaults appear sporadic and do not seem to match up with any given > >> request. Below is my nginx configuration: > > > > [...] > > > >> Also, a backtrace of the core dump: > >> (gdb) bt > >> #0 0x080a1827 in ngx_http_write_filter (r=0x83bb078, in=0x8baaa6c) at > >> src/http/ngx_http_write_filter_module.c:121 > > > > This points to the following code line: > > > > cl->buf = ln->buf; > > > > That is, dereferencing ln->buf fails, which may only happen if the > > buffer chain ("in" argument) is broken. > > > > [...] > > > >> #8 0x080cfc78 in ngx_http_gunzip_body_filter (r=0x83bb078, in=0x8baaa6c) > >> at src/http/modules/ngx_http_gunzip_filter_module.c:184 > >> #9 0x081146bd in ngx_http_modsecurity_body_filter (r=0x83bb078, > >> in=0xbf7ff8b4) > >> at > >> ../modsecurity-apache_2.7.7/nginx/modsecurity//ngx_http_modsecurity.c:1252 > >> #10 0x08055381 in ngx_output_chain (ctx=0x8baa9b8, in=0xbf7ff8b4) at > >> src/core/ngx_output_chain.c:66 > > > > And this clearly shows that the buffer chain was chaned by > > mod_security output body filter. Note "in" argument of > > mod_security ("in=0xbf7ff8b4") and gunzip filter which follows it > > ("in=0x8baaa6c"). > > > > That is, from the backtrace it looks like mod_security changed the > > buffer chain and did it wrong, with a segfault as a result. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 14 10:56:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Apr 2014 14:56:50 +0400 Subject: the http output chain is empty. In-Reply-To: References: Message-ID: <20140414105650.GX34696@mdounin.ru> Hello! On Mon, Apr 14, 2014 at 10:42:43AM +0530, Makailol Charls wrote: > Hello! 
> > I have been getting few of this error in the Nginx log file. Could anyone > have Idea what could be the reason for this error and how to fix it? > > Nginx version is 1.5.12 . There is a bug somewhere. Some debugging hints can be found here: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 14 11:05:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Apr 2014 15:05:06 +0400 Subject: nginx imaps auth_http dovecot In-Reply-To: References: <4D73FB1E.3090104@alokat.org> <3f29c76ca3fec62ea3f4f3423b70de4e.NginxMailingListEnglish@forum.nginx.org> <20110430105835.GC42265@mdounin.ru> Message-ID: <20140414110506.GY34696@mdounin.ru> Hello! On Mon, Apr 14, 2014 at 03:37:05AM +0200, Shawn Za wrote: > Does this mean that from the nginx proxy to the backend, the passwords > will fly through the internet wide open if the backend is a remote > machine? Nobody stops you from providing secure network in-between, e.g. with ipsec or ssl tunnel. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 14 11:06:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Apr 2014 15:06:42 +0400 Subject: Problems with PHP authentication imap/smtp proxy In-Reply-To: <8f5432ee9f96027db19879b93bd7b9cd.NginxMailingListEnglish@forum.nginx.org> References: <8f5432ee9f96027db19879b93bd7b9cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140414110642.GZ34696@mdounin.ru> Hello! On Sun, Apr 13, 2014 at 03:25:35PM -0400, cybermass wrote: > I managed to write my php auth script but still having problems > authenticating. > Also this is what I see in the logs: > [error] 22014#0: *3234 recv() failed (111: Connection refused) while in http > auth state, client: back.end.ip server: 0.0.0.0:993, login: > "user at domain.com" > > Also do I call this script with the following auth_http line? I never see > anything listening on 9000. Where is this 9000 coming from? I just see > everyone using it: > > auth_http 127.0.0.1:9000/mail/auth.php; It's just a random port number, which is expected to be used by a HTTP server which is capable of running your auth script. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Mon Apr 14 14:30:48 2014 From: lists at ruby-forum.com (Roger Pack) Date: Mon, 14 Apr 2014 16:30:48 +0200 Subject: auto send email on 50 Message-ID: <0cfaf7643c97e9ffa3d5d549cee1c668@ruby-forum.com> Hello. It might be nice if nginx included some type of logging functionality like...if something [like fastcgi, etc.] returns a 500 error, it sends off an email message to alert an admin. Something like that. Anyway, thanks for your awesome product. -roger- -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Mon Apr 14 14:41:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Apr 2014 18:41:33 +0400 Subject: auto send email on 50 In-Reply-To: <0cfaf7643c97e9ffa3d5d549cee1c668@ruby-forum.com> References: <0cfaf7643c97e9ffa3d5d549cee1c668@ruby-forum.com> Message-ID: <20140414144133.GA34696@mdounin.ru> Hello! On Mon, Apr 14, 2014 at 04:30:48PM +0200, Roger Pack wrote: > Hello. It might be nice if nginx included some type of logging > functionality like...if something [like fastcgi, etc.] returns a 500 > error, it sends off an email message to alert an admin. Something like > that. Your favorite log monitoring tool will do that for you. If you are already using some monitoring software, it probably has something built in and/or available as a plugin. 
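As a small aid for whatever tool you pick: the if= parameter of access_log
(available since nginx 1.7.0) lets you keep a dedicated log that only
receives 5xx responses, so the watcher/mailer never has to parse the full
access log. A sketch, with the variable name and log path chosen arbitrarily:

    http {
        map $status $is_5xx {
            ~^5     1;
            default 0;
        }

        # only requests that ended in a 5xx status are written here
        access_log /var/log/nginx/errors-5xx.log combined if=$is_5xx;
    }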
-- Maxim Dounin http://nginx.org/ From spameden at gmail.com Mon Apr 14 15:01:19 2014 From: spameden at gmail.com (spameden) Date: Mon, 14 Apr 2014 19:01:19 +0400 Subject: auto send email on 50 In-Reply-To: <20140414144133.GA34696@mdounin.ru> References: <0cfaf7643c97e9ffa3d5d549cee1c668@ruby-forum.com> <20140414144133.GA34696@mdounin.ru> Message-ID: You can use log2mail tool to send email reports. It's very tiny and written in c++, not supported anymore, but still works just fine. 2014-04-14 18:41 GMT+04:00 Maxim Dounin : > Hello! > > On Mon, Apr 14, 2014 at 04:30:48PM +0200, Roger Pack wrote: > > > Hello. It might be nice if nginx included some type of logging > > functionality like...if something [like fastcgi, etc.] returns a 500 > > error, it sends off an email message to alert an admin. Something like > > that. > > Your favorite log monitoring tool will do that for you. > If you are already using some monitoring software, it probably > has something built in and/or available as a plugin. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Mon Apr 14 17:47:04 2014 From: lists at ruby-forum.com (Shawn Za) Date: Mon, 14 Apr 2014 19:47:04 +0200 Subject: Problems with PHP authentication imap/smtp proxy In-Reply-To: <8f5432ee9f96027db19879b93bd7b9cd.NginxMailingListEnglish@forum.nginx.org> References: <8f5432ee9f96027db19879b93bd7b9cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <627f0882c58f338dccfcd2673bcec804@ruby-forum.com> Im still not able to call this script. Is there something I need to define in the http { section for php? I have not done that. I tried adding another server { block inside the http block to listen to 127.0.0.1:9000 but still cant call my php script. nginx does not know how to use php. I do have php5-fpm installed and running. Any help would be appreciated. I just need to be able to use my auth.php with auth_http. Thanks. -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Mon Apr 14 18:17:05 2014 From: lists at ruby-forum.com (Roger Pack) Date: Mon, 14 Apr 2014 20:17:05 +0200 Subject: auto send email on 50 In-Reply-To: <0cfaf7643c97e9ffa3d5d549cee1c668@ruby-forum.com> References: <0cfaf7643c97e9ffa3d5d549cee1c668@ruby-forum.com> Message-ID: <2bedc165df617ec73d45ea29628e18b2@ruby-forum.com> OK thanks for the responses. -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Mon Apr 14 19:03:54 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 14 Apr 2014 15:03:54 -0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: References: Message-ID: Fyi. if you are running a ssl tunnel like stunnel with openssl 0.9.x, this attack is logged as "SSL3_GET_RECORD:wrong version number" as opposed to no nginx/openssl logging. If you have logging going back 2 years and you are seeing these log entries now, you may be able to detect attacks from before 7-4-2014. Here we have many stunnels with openssl 0.9.x and found the first attacks at: 2014.04.08 22:19:14 (CET) in more then 2 years of logging. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249102,249288#msg-249288 From freetgm at gmail.com Mon Apr 14 19:16:37 2014 From: freetgm at gmail.com (freetg) Date: Tue, 15 Apr 2014 03:16:37 +0800 Subject: $request_uri in rewrite and try_files Message-ID: Hi, When I try to use rewrite to replace try_files, my php application report error. Finally I find $_SERVER["PATH_INFO"] is different between try_files and rewrite. nginx/1.4.4 PHP 5.4.22 try_files: try_files $uri $uri/ /index.php$request_uri; url: http://example.com/a/b/c?d=1 , $_SERVER["PATH_INFO"] is /a/b/c but, when I use rewrite: if ( $host = "example.com" ) { set $example 1$example; } if ( $uri !~ \.(ico|gif|png|jpeg|css|js|xml|html|shtml|swf|mp3) ) { set $example 11$example; } if ( $example = 111 ) { rewrite ^ /index.php$request_uri last; } url: http://example.com/a/b/c?d=1 , $_SERVER["PATH_INFO"] is /a/b/c?d=1 Obviously try_files is right , I can use $uri in rewrite to solve this problem. My question is why $request_uri in try_files and rewrite but get different $_SERVER["PATH_INFO"] ? thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Apr 14 20:07:24 2014 From: nginx-forum at nginx.us (fijfajfu) Date: Mon, 14 Apr 2014 16:07:24 -0400 Subject: Nginx reverse proxy on another web server with redirection Message-ID: Hi all, I have the problem with configuring nginx to perform proxy redirection to web server that appends some its local path to redirected address. As a result I get url of my server (where nginx is running) but with path (from another machine) appended. The configuration is: location ~* ^/3rdparty/(.*)___(.*)___(.*)___(.*)$ { proxy_pass https://$1.$2.$3.$4?$args; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } When I type this: http:// 192.168.237.208/3rdparty/192___168___237___222 I would expect content from here: https://192.168.237.222 but I get attempt from server to get content from here - which doesn't exist: http://192.168.237.208/3rdparty/192___168___237___222/users/sign_in How to remove /users/sign_in sufix but to process that within nginx server? When I go directly to target server typing this: https://192.168.237.222 I get redirected to https://192.168.237.222/users/sign_in which brings the right content. Thank you in advance for you comments/help! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249291,249291#msg-249291 From nginx-forum at nginx.us Mon Apr 14 20:10:41 2014 From: nginx-forum at nginx.us (fijfajfu) Date: Mon, 14 Apr 2014 16:10:41 -0400 Subject: Nginx reverse proxy on another web server with redirection In-Reply-To: References: Message-ID: I have also added more detailed description with system diagram at: http://stackoverflow.com/questions/23068740/nginx-reverse-proxy-on-another-web-server-with-redirection Thank you! 
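One direction that might be worth trying (untested sketch; the named captures are only for readability, and variables in proxy_redirect need nginx 1.1.11 or newer as far as I know) is to keep proxy_redirect enabled and map the backend's absolute redirects back under the proxy prefix instead of switching it off:

location ~* ^/3rdparty/(?<a>.*)___(?<b>.*)___(?<c>.*)___(?<d>.*)$ {
    proxy_pass https://$a.$b.$c.$d?$args;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # e.g. Location: https://192.168.237.222/users/sign_in
    # becomes /3rdparty/192___168___237___222/users/sign_in
    proxy_redirect https://$a.$b.$c.$d/ /3rdparty/${a}___${b}___${c}___${d}/;
}

The follow-up request for /3rdparty/192___168___237___222/users/sign_in would then also have to be matched and forwarded, which the pattern above does not handle yet.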
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249291,249292#msg-249292 From rpaprocki at fearnothingproductions.net Mon Apr 14 20:30:06 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 14 Apr 2014 13:30:06 -0700 Subject: nginx segfaulting with mod_security In-Reply-To: <20140414104422.GW34696@mdounin.ru> References: <5349CFDC.8010909@fearnothingproductions.net> <20140413101747.GT34696@mdounin.ru> <534B590C.3090406@fearnothingproductions.net> <20140414104422.GW34696@mdounin.ru> Message-ID: <534C454E.4070700@fearnothingproductions.net> I realized that I spoke too soon about gzip shortly after sending this. My apologies for making a silly assumption; I am a sysadmin by trade and not so skilled at developing or troubleshooting complex server software. I've contact mod_security mailing list but haven't heard back from them. Thank you for your time and patience in answering my questions! Sincerely, Robert Paprocki On 04/14/2014 03:44 AM, Maxim Dounin wrote: > Hello! > > On Sun, Apr 13, 2014 at 08:42:04PM -0700, Robert Paprocki wrote: > >> Hi Maxim! >> >> Thank you for your response, always nice to actually hear back from >> someone knowledgeable. Once thing I had noticed while looking at >> backtraces (coredumps seem to indicate segfaults occurring in a number >> of places, not just filter_module.c:121) was that every bt seemed to >> include gzip modules as well. i disabled gzip in both my server and http >> sections but this did not stop the segfaults (which is interesting, but > All filters, including gzip, are expected to be in backtraces, as > they are always called. Depending on configuration, they either > do something or not. > >> not entirely surprising, given that even when I had set SecRuleEngine to >> off in my modsecurity.conf file, segfaults still occured... so even the >> mere presence of ModSecurityEnabled in the nginx configuration was >> leading to a break). >> >> I have since recompiled nginx without both gunzip module and gzip static >> module, and have not yet gotten any segfaults. i will continue to >> research this and update if I have any more information > As per backtrace, it's more or less obvious that the problem is in > modsecurity. Compiling out other modules may result in less > frequent segfaults, but it won't fix the bug in modsecurity. > > If you want to actually fix the problem, you should either > switch off / compile out modsecurity, or find and fix the bug in > it. > >> On 04/13/2014 03:17 AM, Maxim Dounin wrote: >>> Hello! >>> >>> On Sat, Apr 12, 2014 at 04:44:28PM -0700, Robert Paprocki wrote: >>> >>>> Hello, >>>> >>>> I have compiled nginx-1.5.13 with modsecurity-2.7.7 and am seeing >>>> occasional segfaults when sending requests to the server. mod_security >>>> was compiled as a standalone module per the instructions made available >>>> at >>>> https://github.com/SpiderLabs/ModSecurity/wiki/Reference-Manual#Installation_for_NGINX. >>>> The segfaults appear sporadic and do not seem to match up with any given >>>> request. Below is my nginx configuration: >>> [...] >>> >>>> Also, a backtrace of the core dump: >>>> (gdb) bt >>>> #0 0x080a1827 in ngx_http_write_filter (r=0x83bb078, in=0x8baaa6c) at >>>> src/http/ngx_http_write_filter_module.c:121 >>> This points to the following code line: >>> >>> cl->buf = ln->buf; >>> >>> That is, dereferencing ln->buf fails, which may only happen if the >>> buffer chain ("in" argument) is broken. >>> >>> [...] 
>>> >>>> #8 0x080cfc78 in ngx_http_gunzip_body_filter (r=0x83bb078, in=0x8baaa6c) >>>> at src/http/modules/ngx_http_gunzip_filter_module.c:184 >>>> #9 0x081146bd in ngx_http_modsecurity_body_filter (r=0x83bb078, >>>> in=0xbf7ff8b4) >>>> at >>>> ../modsecurity-apache_2.7.7/nginx/modsecurity//ngx_http_modsecurity.c:1252 >>>> #10 0x08055381 in ngx_output_chain (ctx=0x8baa9b8, in=0xbf7ff8b4) at >>>> src/core/ngx_output_chain.c:66 >>> And this clearly shows that the buffer chain was chaned by >>> mod_security output body filter. Note "in" argument of >>> mod_security ("in=0xbf7ff8b4") and gunzip filter which follows it >>> ("in=0x8baaa6c"). >>> >>> That is, from the backtrace it looks like mod_security changed the >>> buffer chain and did it wrong, with a segfault as a result. >>> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Apr 14 22:24:01 2014 From: nginx-forum at nginx.us (mex) Date: Mon, 14 Apr 2014 18:24:01 -0400 Subject: OT / Re: nginx segfaulting with mod_security In-Reply-To: <534C454E.4070700@fearnothingproductions.net> References: <534C454E.4070700@fearnothingproductions.net> Message-ID: <9b5e761d8803fffe495cfcdc00b89ecd.NginxMailingListEnglish@forum.nginx.org> hi robert, if you dont depend on mod_security's advanced features like output-filtering'n'stuff you might want to try naxsi https://github.com/nbs-system/naxsi/wiki its stable, its fast, rules are easy to create and understand and it provides a set of basic features for a waf. the community is responsive and open for feature-requests or bugreports. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249248,249294#msg-249294 From rpaprocki at fearnothingproductions.net Mon Apr 14 22:50:56 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 14 Apr 2014 15:50:56 -0700 Subject: OT / Re: nginx segfaulting with mod_security In-Reply-To: <9b5e761d8803fffe495cfcdc00b89ecd.NginxMailingListEnglish@forum.nginx.org> References: <534C454E.4070700@fearnothingproductions.net> <9b5e761d8803fffe495cfcdc00b89ecd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <534C6650.7050501@fearnothingproductions.net> Hi mex! Thanks for the tip! I've known about naxsi for a while. I'm researching various WAF options that will scale well for my Master's thesis, and mod_security + nginx interested me (my other research is pointing towards using Varnish, for which several WAF solutions have already been somewhat implemented). Naxsi doesn't seem to offer the extensive logging and detailed features like state tracking that mod_sec does, so I have been wary to research further into it. But thanks for the suggestion! Sincerely, Robert Paprocki On 04/14/2014 03:24 PM, mex wrote: > hi robert, > > > if you dont depend on mod_security's advanced features like > output-filtering'n'stuff you might want to try naxsi > https://github.com/nbs-system/naxsi/wiki > > its stable, its fast, rules are easy to create and understand and it > provides a set of basic features for a waf. > > the community is responsive and open for feature-requests or bugreports. 
> > > > regards, > > mex > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249248,249294#msg-249294 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From reallfqq-nginx at yahoo.fr Tue Apr 15 01:33:37 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 15 Apr 2014 03:33:37 +0200 Subject: Nginx reverse proxy on another web server with redirection In-Reply-To: References: Message-ID: Quick sidenote, not scratching the surface of your problem: I would have posted on ServerFault, not StackOverflow --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 15 07:52:04 2014 From: nginx-forum at nginx.us (trapni) Date: Tue, 15 Apr 2014 03:52:04 -0400 Subject: RFC: Feasibility of a "dynamic module loader" built in to nginx? In-Reply-To: <20100827001913.GR99657@mdounin.ru> References: <20100827001913.GR99657@mdounin.ru> Message-ID: <53b48c5caf82f19b108b5d6b932d28bd.NginxMailingListEnglish@forum.nginx.org> Hey, I just found you via Google as I was trying to find out why ..... Nginx is not supporting dynamic loading of modules, for the very obvious issues the other posters seem to have too. However, Maxim, you claim, that one reason is performance. While performance of course matters, I'd like to know why you say so. Can you please go a little bit more into the technical details why a dynamic loaded module does not perform as well as a statically linked "module"? Of course, a compiler could use global optimization techniques to perform interprocedual optimizations, but I don't believe (yet), that the impact shall be that high. 2nd) statically linked libraries speed up process bootup, but this is neglect-able for a long-running process. 3rd) of course, at least for the core modules, they could use #ifdef's inside the request structs and friends (just as you stated) in order to further optimize resource (ie. memory) usage. So, I'm only interested in the performance arguments (not zero-downtime upgrade,...), so I can understand a little better. Many thanks in advance, Christian. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,123868,249297#msg-249297 From mdounin at mdounin.ru Tue Apr 15 11:43:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Apr 2014 15:43:19 +0400 Subject: OpenSSL leaks server-Keys / The Heartbleed Bug In-Reply-To: References: Message-ID: <20140415114319.GF34696@mdounin.ru> Hello! On Mon, Apr 14, 2014 at 03:03:54PM -0400, itpp2012 wrote: > Fyi. if you are running a ssl tunnel like stunnel with openssl 0.9.x, this > attack is logged as "SSL3_GET_RECORD:wrong version number" as opposed to no > nginx/openssl logging. > > If you have logging going back 2 years and you are seeing these log entries > now, you may be able to detect attacks from before 7-4-2014. > > Here we have many stunnels with openssl 0.9.x and found the first attacks > at: 2014.04.08 22:19:14 (CET) in more then 2 years of logging. I suspect that this is just a particular script to exploit the vulnerability, which doesn't care much about being correct and is seen this way due to incorrect handshake. Proper exploitation shouldn't be detectable this way. 
And yes, it's seen on more or less any 0.9.x OpenSSL installation, including nginx: 2014/04/15 04:02:57 [info] 48738#0: *2785200 SSL_do_handshake() failed (SSL: error:1408F10B:SSL routines:SSL3_GET_RECORD:wrong version number) while SSL handshaking, client: 182.118.48.115, server: 0.0.0.0:443 -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Apr 15 11:51:27 2014 From: nginx-forum at nginx.us (entropie) Date: Tue, 15 Apr 2014 07:51:27 -0400 Subject: simple BREACH workaround for gzip Message-ID: Hello, has anyone considered this simple workaround for BREACH and gzip-compression, i.e. randomly interspersed flush()-es during compression? https://github.com/wnyc/breach_buster It would be compatible with all clients, and should be fairly easy to implement in nginx (for nginx hackers). Of course, it doesn't prevent BREACH attacks, but it makes them much harder. PS: yes, I'm aware that BREACH should also be prevented in the app-layer, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249301,249301#msg-249301 From mdounin at mdounin.ru Tue Apr 15 12:15:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Apr 2014 16:15:12 +0400 Subject: RFC: Feasibility of a "dynamic module loader" built in to nginx? In-Reply-To: <53b48c5caf82f19b108b5d6b932d28bd.NginxMailingListEnglish@forum.nginx.org> References: <20100827001913.GR99657@mdounin.ru> <53b48c5caf82f19b108b5d6b932d28bd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140415121512.GG34696@mdounin.ru> Hello! On Tue, Apr 15, 2014 at 03:52:04AM -0400, trapni wrote: > Hey, I just found you via Google as I was trying to find out why ..... Nginx > is not supporting dynamic loading of modules, for the very obvious issues > the other posters seem to have too. > > However, Maxim, you claim, that one reason is performance. While performance > of course matters, I'd like to know why you say so. > Can you please go a little bit more into the technical details why a dynamic > loaded module does not perform as well as a statically linked "module"? > > Of course, a compiler could use global optimization techniques to perform > interprocedual optimizations, but I don't believe (yet), that the impact > shall be that high. > 2nd) statically linked libraries speed up process bootup, but this is > neglect-able for a long-running process. > 3rd) of course, at least for the core modules, they could use #ifdef's > inside the request structs and friends (just as you stated) in order to > further optimize resource (ie. memory) usage. > > So, I'm only interested in the performance arguments (not zero-downtime > upgrade,...), so I can understand a little better. In addition to the above, calling functions from dynamically linked libraries implies additional indirection, hence it's expected to be slower. I don't think that speed difference is a major problem though, most likely it will be small. It's just one of the reasons in the list. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Apr 15 12:31:42 2014 From: nginx-forum at nginx.us (Nemesiz) Date: Tue, 15 Apr 2014 08:31:42 -0400 Subject: openssl 1.0.1 and tls1.1 and up Message-ID: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Hello I`m struggling with enabling tls1.1 and tls1.2. 
Some info: NGINX: # nginx -V nginx version: nginx/1.5.13 built by gcc 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu9) TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx/1.5.13 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_spdy_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --add-module=/usr/src/nginx-modules/nginx-openssl-version --add-module=/usr/src/nginx-modules/testcookie-nginx-module --with-pcre=/usr/src/nginx-modules/pcre-8.35 --with-openssl=/usr/src/nginx-modules/openssl-1.0.1g SSL settings: ssl_session_cache shared:SSL:50m; ssl_session_timeout 5m; ssl_dhparam /etc/nginx/ssl/dhparam.pem; ssl_prefer_server_ciphers on; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK'; add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;"; https://www.ssllabs.com/ssltest/ results: Protocols TLS 1.2 No TLS 1.1 No TLS 1.0 Yes SSL 3 Yes SSL 2 No Any hint ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249305,249305#msg-249305 From miguelmclara at gmail.com Tue Apr 15 13:31:56 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Tue, 15 Apr 2014 14:31:56 +0100 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have an nginx 1.5 install where I don't set the ssl_protocols, because, the defaults are fine: ---> "Since versions 1.1.13 and 1.0.12, nginx uses ?ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2? by default." This is what I have find to be the best for ciphers, SSLLABS seems to like it, I would even set !RC4, but we need to still support it in this specific server. # ciphers ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; On Tue, Apr 15, 2014 at 1:31 PM, Nemesiz wrote: > Hello > > I`m struggling with enabling tls1.1 and tls1.2. 
Some info: > > NGINX: > > # nginx -V > nginx version: nginx/1.5.13 > built by gcc 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu9) > TLS SNI support enabled > configure arguments: --prefix=/usr/local/nginx/1.5.13 > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-client-body-temp-path=/var/lib/nginx/body > --http-fastcgi-temp-path=/var/lib/nginx/fastcgi > --http-log-path=/var/log/nginx/access.log > --http-proxy-temp-path=/var/lib/nginx/proxy > --http-scgi-temp-path=/var/lib/nginx/scgi > --http-uwsgi-temp-path=/var/lib/nginx/uwsgi > --lock-path=/var/lock/nginx.lock > --pid-path=/run/nginx.pid --with-pcre-jit --with-debug > --with-http_addition_module --with-http_auth_request_module > --with-http_dav_module --with-http_geoip_module > --with-http_gzip_static_module --with-http_image_filter_module > --with-http_realip_module --with-http_spdy_module --with-http_ssl_module > --with-http_stub_status_module --with-http_sub_module > --with-http_xslt_module --with-ipv6 > --add-module=/usr/src/nginx-modules/nginx-openssl-version > --add-module=/usr/src/nginx-modules/testcookie-nginx-module > --with-pcre=/usr/src/nginx-modules/pcre-8.35 > --with-openssl=/usr/src/nginx-modules/openssl-1.0.1g > > SSL settings: > > ssl_session_cache shared:SSL:50m; > ssl_session_timeout 5m; > ssl_dhparam /etc/nginx/ssl/dhparam.pem; > ssl_prefer_server_ciphers on; > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers > > 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK'; > add_header Strict-Transport-Security "max-age=31536000; > includeSubdomains;"; > > > https://www.ssllabs.com/ssltest/ results: > > Protocols > TLS 1.2 No > TLS 1.1 No > TLS 1.0 Yes > SSL 3 Yes > SSL 2 No > > Any hint ? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249305,249305#msg-249305 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From miguelmclara at gmail.com Tue Apr 15 13:39:45 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Tue, 15 Apr 2014 14:39:45 +0100 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: I should clarify the the default for ssl_protocols is fine, to my environment since we need to support SSLv3, if you don't I suggest make it safer: ssl_protocols TLSv1 TLSv1.1 TLSv1.2; On Tue, Apr 15, 2014 at 2:31 PM, Miguel Clara wrote: > > I have an nginx 1.5 install where I don't set the ssl_protocols, because, > the defaults are fine: > ---> "Since versions 1.1.13 and 1.0.12, nginx uses ?ssl_protocols SSLv3 > TLSv1 TLSv1.1 TLSv1.2? by default." > > > This is what I have find to be the best for ciphers, SSLLABS seems to like > it, I would even set !RC4, but we need to still support it in this specific > server. 
> > > # ciphers > ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM > EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 > EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK > !SRP !DSS"; > > > > > > > > On Tue, Apr 15, 2014 at 1:31 PM, Nemesiz wrote: > >> Hello >> >> I`m struggling with enabling tls1.1 and tls1.2. Some info: >> >> NGINX: >> >> # nginx -V >> nginx version: nginx/1.5.13 >> built by gcc 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu9) >> TLS SNI support enabled >> configure arguments: --prefix=/usr/local/nginx/1.5.13 >> --conf-path=/etc/nginx/nginx.conf >> --error-log-path=/var/log/nginx/error.log >> --http-client-body-temp-path=/var/lib/nginx/body >> --http-fastcgi-temp-path=/var/lib/nginx/fastcgi >> --http-log-path=/var/log/nginx/access.log >> --http-proxy-temp-path=/var/lib/nginx/proxy >> --http-scgi-temp-path=/var/lib/nginx/scgi >> --http-uwsgi-temp-path=/var/lib/nginx/uwsgi >> --lock-path=/var/lock/nginx.lock >> --pid-path=/run/nginx.pid --with-pcre-jit --with-debug >> --with-http_addition_module --with-http_auth_request_module >> --with-http_dav_module --with-http_geoip_module >> --with-http_gzip_static_module --with-http_image_filter_module >> --with-http_realip_module --with-http_spdy_module --with-http_ssl_module >> --with-http_stub_status_module --with-http_sub_module >> --with-http_xslt_module --with-ipv6 >> --add-module=/usr/src/nginx-modules/nginx-openssl-version >> --add-module=/usr/src/nginx-modules/testcookie-nginx-module >> --with-pcre=/usr/src/nginx-modules/pcre-8.35 >> --with-openssl=/usr/src/nginx-modules/openssl-1.0.1g >> >> SSL settings: >> >> ssl_session_cache shared:SSL:50m; >> ssl_session_timeout 5m; >> ssl_dhparam /etc/nginx/ssl/dhparam.pem; >> ssl_prefer_server_ciphers on; >> ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; >> ssl_ciphers >> >> 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK'; >> add_header Strict-Transport-Security "max-age=31536000; >> includeSubdomains;"; >> >> >> https://www.ssllabs.com/ssltest/ results: >> >> Protocols >> TLS 1.2 No >> TLS 1.1 No >> TLS 1.0 Yes >> SSL 3 Yes >> SSL 2 No >> >> Any hint ? >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,249305,249305#msg-249305 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 15 16:50:35 2014 From: nginx-forum at nginx.us (nanne) Date: Tue, 15 Apr 2014 12:50:35 -0400 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <3047263.xWiBAFAj8A@vbart-laptop> References: <3047263.xWiBAFAj8A@vbart-laptop> Message-ID: <909578cb5fb5d6a2f65fa0e7580df080.NginxMailingListEnglish@forum.nginx.org> This is somewhat of an old one, but has there been any change on account of this possible bug? We have currently shut down SPDY to avoid this, but at the cost of... well.. 
shutting down SPDY ;) I have made a serverfault-topic about this by the way but currently the only things in there are my detailed problem/question, and my self-answer that refers to this thread. Nevertheless, for completeness the link: http://serverfault.com/q/523340/77501 Nanne. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,249311#msg-249311 From nginx-forum at nginx.us Tue Apr 15 17:09:51 2014 From: nginx-forum at nginx.us (sim4life) Date: Tue, 15 Apr 2014 13:09:51 -0400 Subject: NginX reverse proxy with iRedMail Apache2 Message-ID: <1fd8588a24a56d351f8b593a862ca68b.NginxMailingListEnglish@forum.nginx.org> On an empty VPS hosting (Ubuntu 13.10 x64), I managed to run the base iRedMail installation with Apache2 and LDAP and my roundcubemail was accessible at: `https://www.mydomain.com/mail` then I installed NginX, shutdown Apache2, reconfigured iRedMail (without adding any extra A record in the DNS entry) and managed to run it on NginX base installation as well with roundcubemail accessible at: `https://mail.mydomain.com` Now, I want to run NginX reverse proxy with the base iRedMail Apache2 installation with roundcubemail accessible at: `https://mail.mydomain.com` and I'm kinda stuck with the following Apache2 config files: [quote]/etc/apache2/ports.conf[/quote] Listen 8080 [quote]/etc/apahce2/sites-available/my-iredmail.conf[/quote] DocumentRoot /var/www/ ServerName mail.mydomain.com Alias / "/usr/share/apache2/roundcubemail/" Options Indexes FollowSymlinks MultiViews AllowOverride All Order allow,deny Allow from all and following NginX config file: [quote]/etc/nginx/sites-available/default[/quote] server { listen 80 default_server; listen [::]:80; root /usr/share/nginx/html; index index.html index.htm index.php; server_name mydomain.com www.mydomain.com mail.mydomain.com; location / { try_files $uri $uri/ /index.html; } location ~ \.php$ { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $remote_addr; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8080/; } location ~ /\.ht { deny all; } } server { listen 443 ssl; root /var/www; index index.html index.htm index.php; server_name mydomain.com www.mydomain.com mail.mydomain.com; ssl on; ssl_certificate /etc/ssl/certs/iRedMail_CA.pem; ssl_certificate_key /etc/ssl/private/iRedMail.key; ssl_session_timeout 5m; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; ssl_prefer_server_ciphers on; location / { # Apache is listening here proxy_pass http://127.0.0.1:8080/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Hitting in browser: `https://mail.mydomain.com` gives the usual `SSL Connection Error`. Kindly advise. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249312,249312#msg-249312 From vbart at nginx.com Tue Apr 15 17:13:09 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Apr 2014 21:13:09 +0400 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <909578cb5fb5d6a2f65fa0e7580df080.NginxMailingListEnglish@forum.nginx.org> References: <3047263.xWiBAFAj8A@vbart-laptop> <909578cb5fb5d6a2f65fa0e7580df080.NginxMailingListEnglish@forum.nginx.org> Message-ID: <17472697.VfbGJF4p94@vbart-laptop> On Tuesday 15 April 2014 12:50:35 nanne wrote: > This is somewhat of an old one, but has there been any change on account of > this possible bug? We have currently shut down SPDY to avoid this, but at > the cost of... 
well.. shutting down SPDY ;) > > I have made a serverfault-topic about this by the way but currently the only > things in there are my detailed problem/question, and my self-answer that > refers to this thread. Nevertheless, for completeness the link: > http://serverfault.com/q/523340/77501 > As for the bug with deferred responding to PING that could cause such problems, it was fixed around 1.5.9. But since nobody has provided a debug log so far, I cannot say for certain that it's the same bug that all the people in this topic have experienced. I'd recommend you install the latest version and check whether your problem is solved or not. In the latter case, please provide a debug log. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Apr 15 18:04:02 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 15 Apr 2014 14:04:02 -0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: hi, what is your OS (name and version)? where do you have the ciphers from, btw? i'd suggest you test the TLS versions yourself with testssl.sh https://bitbucket.org/nginx-goodies/testssl.sh (note: you need a current openssl version on the machine you test from) regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249305,249315#msg-249315 From nginx-forum at nginx.us Tue Apr 15 18:37:17 2014 From: nginx-forum at nginx.us (nxspeed) Date: Tue, 15 Apr 2014 14:37:17 -0400 Subject: RFC: Feasibility of a "dynamic module loader" built in to nginx? In-Reply-To: <4C7AFBEF.3040200@gmail.com> References: <4C7AFBEF.3040200@gmail.com> Message-ID: Have you looked @ http://tengine.taobao.org/document/dso.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,123868,249317#msg-249317 From vbart at nginx.com Tue Apr 15 19:10:48 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 15 Apr 2014 23:10:48 +0400 Subject: RFC: Feasibility of a "dynamic module loader" built in to nginx? In-Reply-To: References: <4C7AFBEF.3040200@gmail.com> Message-ID: <2675652.Ht02sLN2fj@vbart-laptop> On Tuesday 15 April 2014 14:37:17 nxspeed wrote: > Have you looked @ http://tengine.taobao.org/document/dso.html > http://mailman.nginx.org/pipermail/nginx/2012-September/035405.html wbr, Valentin V. Bartenev From aflexzor at gmail.com Tue Apr 15 22:54:18 2014 From: aflexzor at gmail.com (Alex Findale) Date: Tue, 15 Apr 2014 16:54:18 -0600 Subject: 499 in a proxy environment and short execution php scripts Message-ID: <534DB89A.5040908@gmail.com> Hello Nginx Before writing this I checked the meaning of the 499 error code: it is the result of the client closing the connection before the server answered the request. That's clear as water. PROBLEM: I am measuring the execution time of the requests, and I see that in general, for all sites, clients are generating 499 errors with different PHP scripts whose execution is under 1 second. QUESTIONS: a.) I would totally understand this behaviour in scripts that take more than a few seconds, but why would I see so many 499s for quick scripts? b.) Would it be possible that my proxy setup would be causing premature 499s as explained? 
I have : client_body_timeout 10; client_header_timeout 10; keepalive_timeout 15; send_timeout 10; proxy_connect_timeout 15; proxy_send_timeout 60; proxy_read_timeout 60; Regards, Alex F From nginx-forum at nginx.us Wed Apr 16 08:51:04 2014 From: nginx-forum at nginx.us (sai1511) Date: Wed, 16 Apr 2014 04:51:04 -0400 Subject: Nginx does not support Forward SSL proxy connection Message-ID: <7f5585a519666b3ffb6105ec28d6faf2.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to setup Forward SSL Proxy through nginx. However I came across this post,http://forum.nginx.org/read.php?2,15124,15256#msg-15256. Is this still not supported or you just don't have this on your list ? Thank You Sai Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249323,249323#msg-249323 From nginx-forum at nginx.us Wed Apr 16 08:58:17 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 16 Apr 2014 04:58:17 -0400 Subject: Requests being blocked client-side In-Reply-To: <2836cfe5738681164ed4d1ff7a42533f.NginxMailingListEnglish@forum.nginx.org> References: <2836cfe5738681164ed4d1ff7a42533f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Maxim. Even after disabling SPDY and restarting nginx, still seeing the same behavior with requests blocking if another single request is outstanding in another tab. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249233,249324#msg-249324 From lee at leev.net Wed Apr 16 09:01:54 2014 From: lee at leev.net (Lee Valentine) Date: Wed, 16 Apr 2014 10:01:54 +0100 Subject: Requests being blocked client-side In-Reply-To: References: <2836cfe5738681164ed4d1ff7a42533f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi On 16 April 2014 09:58, justink101 wrote: > Maxim. > > Even after disabling SPDY and restarting nginx, still seeing the same > behavior with requests blocking if another single request is outstanding in > another tab. Are you using php by any chance? I had a problem showing these exact same symptoms last week. By default, php writes to files for sessions. When a session is started, the file is locked by the current process and is only released at the end of the request or if session_write_close() is called. This will cause any requests in the same session to hang until the first completes. The process in this instance that is blocking is php and not nginx. We got around this by storing the session in Redis instead of files. Cheers, Lee -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 16 09:41:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Apr 2014 13:41:34 +0400 Subject: Nginx does not support Forward SSL proxy connection In-Reply-To: <7f5585a519666b3ffb6105ec28d6faf2.NginxMailingListEnglish@forum.nginx.org> References: <7f5585a519666b3ffb6105ec28d6faf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140416094134.GV34696@mdounin.ru> Hello! On Wed, Apr 16, 2014 at 04:51:04AM -0400, sai1511 wrote: > I'm trying to setup Forward SSL Proxy through nginx. However I came across > this post,http://forum.nginx.org/read.php?2,15124,15256#msg-15256. Is this > still not supported or you just don't have this on your list ? No changes since then, nginx isn't a forward proxy, and there are no plans to turn it into a forward proxy. 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Apr 16 10:23:13 2014 From: nginx-forum at nginx.us (nanne) Date: Wed, 16 Apr 2014 06:23:13 -0400 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <17472697.VfbGJF4p94@vbart-laptop> References: <17472697.VfbGJF4p94@vbart-laptop> Message-ID: <965808059b9a5f979d600284cc6968ca.NginxMailingListEnglish@forum.nginx.org> Thanks for the update on this! I'll check it out, though I believe we will have to keep the current 1.4.7 stable for now. Sadly I am not able to provide a debug log on the production machine. I can provide one from a test-environment of course, but it seems beyond me to recreate the issues there :(. I'll keep trying, and if we can recreate them I'll post a debuglog, then update to the preview branch, and try again to see what has happened. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,249328#msg-249328 From nginx-forum at nginx.us Wed Apr 16 10:35:43 2014 From: nginx-forum at nginx.us (Nemesiz) Date: Wed, 16 Apr 2014 06:35:43 -0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Strange things are happening. nginx: ssl_protocols TLSv1 TLSv1.1 TLSv1.2; Results: ssllabs.com: TLS 1.2 No TLS 1.1 No TLS 1.0 Yes SSL 3 Yes SSL 2 No testssl.sh: SSLv2 NOT offered (ok) SSLv3 offered TLSv1 offered (ok) TLSv1.1 not offered TLSv1.2 not offered Looks like i can`t disable sslv3 OS: Ubuntu sancy SSL Certificate: StartCom Ltd. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249305,249329#msg-249329 From mat999 at gmail.com Wed Apr 16 10:38:10 2014 From: mat999 at gmail.com (SplitIce) Date: Wed, 16 Apr 2014 20:38:10 +1000 Subject: server_names scaling Message-ID: Hi all, I have spent the day troubleshooting why one server in our network reloaded / tested configuration extremely slowly. We have found that server_names scales very poorly, once a certain point is reached (approx 5.5k entries globally, 5k entries for a single host) performance drops from a <0.5s reload time to 15s+. The large host of ~5,000 entries is a malware domain zone and all server names in this zone are using the wildcard name format. For now we have resolved this issue by fixing an inefficiency in our configuration (namely using *.domain.com and domain.com) however I feel this is most likely a bug or at-least an unintended behaviour. Relevant configuration entries: server_names_hash_max_size 8000; server_names_hash_bucket_size 128; Regards, Mathew -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Apr 16 10:41:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Apr 2014 14:41:21 +0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140416104121.GZ34696@mdounin.ru> Hello! On Wed, Apr 16, 2014 at 06:35:43AM -0400, Nemesiz wrote: > Strange things are happening. > > nginx: > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > > Results: > > ssllabs.com: > TLS 1.2 No > TLS 1.1 No > TLS 1.0 Yes > SSL 3 Yes > SSL 2 No > > testssl.sh: > > SSLv2 NOT offered (ok) > SSLv3 offered > TLSv1 offered (ok) > TLSv1.1 not offered > TLSv1.2 not offered > > Looks like i can`t disable sslv3 It looks like you are testing something different, not nginx you are trying to configure. Check what is actually listening on the ip:port you are testing. 
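For example (the address, port and server name below are just placeholders for whatever you are testing):

netstat -tulnp | grep ':443'
openssl s_client -connect 127.0.0.1:443 -servername www.example.com -tls1_2 < /dev/null

The second command needs an OpenSSL 1.0.1+ client; if the handshake fails with -tls1_2 but succeeds with -tls1, the thing answering on that port was built or configured without TLSv1.2.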
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 16 10:59:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Apr 2014 14:59:27 +0400 Subject: server_names scaling In-Reply-To: References: Message-ID: <20140416105927.GA34696@mdounin.ru> Hello! On Wed, Apr 16, 2014 at 08:38:10PM +1000, SplitIce wrote: > Hi all, > > I have spent the day troubleshooting why one server in our network reloaded > / tested configuration extremely slowly. > > We have found that server_names scales very poorly, once a certain point is > reached (approx 5.5k entries globally, 5k entries for a single host) > performance drops from a <0.5s reload time to 15s+. > > The large host of ~5,000 entries is a malware domain zone and all server > names in this zone are using the wildcard name format. > > For now we have resolved this issue by fixing an inefficiency in our > configuration (namely using *.domain.com and domain.com) however I feel > this is most likely a bug or at-least an unintended behaviour. > > Relevant configuration entries: > server_names_hash_max_size 8000; > server_names_hash_bucket_size 128; With max_size 8000, and 5k entries - probability of collisions while building a cache is high (think of birthday paradox). And bucket_size 128 isn't high enough to allow multiple collisions. As a result, nginx may (and likely will) spend a lot of time trying to build an optimal hash. Trivial solution is to use higher max_size and/or bucket_size. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Apr 16 11:03:31 2014 From: nginx-forum at nginx.us (Nemesiz) Date: Wed, 16 Apr 2014 07:03:31 -0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: <20140416104121.GZ34696@mdounin.ru> References: <20140416104121.GZ34696@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > It looks like you are testing something different, not nginx you > are trying to configure. Check what is actually listening on the > ip:port you are testing. testssl.sh: --> Testing HTTP Header response HSTS 365 days (31536000 s) Server nginx/1.5.13 Application (None) ssllabs.com: HTTP server signature nginx/1.5.13 # netstat -tulnp Active Internet connections (only servers) Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name tcp 0 0 127.0.0.1:6379 0.0.0.0:* LISTEN 17535/redis-server tcp 0 0 0.0.0.0:1003 0.0.0.0:* LISTEN 19379/sshd tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 10632/nginx tcp 0 0 127.0.0.1:8080 0.0.0.0:* LISTEN 17584/unicorn.rb -E tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 19379/sshd tcp 0 0 0.0.0.0:25 0.0.0.0:* LISTEN 733/exim4 tcp 0 0 0.0.0.0:443 0.0.0.0:* LISTEN 10632/nginx tcp6 0 0 :::1003 :::* LISTEN 19379/sshd tcp6 0 0 :::22 :::* LISTEN 19379/sshd tcp6 0 0 :::25 :::* LISTEN 733/exim4 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249305,249333#msg-249333 From mat999 at gmail.com Wed Apr 16 11:16:20 2014 From: mat999 at gmail.com (SplitIce) Date: Wed, 16 Apr 2014 21:16:20 +1000 Subject: server_names scaling In-Reply-To: <20140416105927.GA34696@mdounin.ru> References: <20140416105927.GA34696@mdounin.ru> Message-ID: Thank you, that makes sense and a bit of testing reveals that is correct. On Wed, Apr 16, 2014 at 8:59 PM, Maxim Dounin wrote: > Hello! > > On Wed, Apr 16, 2014 at 08:38:10PM +1000, SplitIce wrote: > > > Hi all, > > > > I have spent the day troubleshooting why one server in our network > reloaded > > / tested configuration extremely slowly. 
> > > > We have found that server_names scales very poorly, once a certain point > is > > reached (approx 5.5k entries globally, 5k entries for a single host) > > performance drops from a <0.5s reload time to 15s+. > > > > The large host of ~5,000 entries is a malware domain zone and all server > > names in this zone are using the wildcard name format. > > > > For now we have resolved this issue by fixing an inefficiency in our > > configuration (namely using *.domain.com and domain.com) however I feel > > this is most likely a bug or at-least an unintended behaviour. > > > > Relevant configuration entries: > > server_names_hash_max_size 8000; > > server_names_hash_bucket_size 128; > > With max_size 8000, and 5k entries - probability of collisions > while building a cache is high (think of birthday paradox). And > bucket_size 128 isn't high enough to allow multiple collisions. > As a result, nginx may (and likely will) spend a lot of time trying > to build an optimal hash. > > Trivial solution is to use higher max_size and/or bucket_size. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed Apr 16 11:18:32 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 16 Apr 2014 15:18:32 +0400 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <965808059b9a5f979d600284cc6968ca.NginxMailingListEnglish@forum.nginx.org> References: <17472697.VfbGJF4p94@vbart-laptop> <965808059b9a5f979d600284cc6968ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <11184074.2Ym87tqr3L@vbart-laptop> On Wednesday 16 April 2014 06:23:13 nanne wrote: > Thanks for the update on this! I'll check it out, though I believe we will > have to keep the current 1.4.7 stable for now. > > Sadly I am not able to provide a debug log on the production machine. I can > provide one from a test-environment of course, but it seems beyond me to > recreate the issues there :(. > > I'll keep trying, and if we can recreate them I'll post a debuglog, then > update to the preview branch, and try again to see what has happened. > There is a contradiction between using an innovative experimental, actively developed protocol and old versions of software. I don't recommend "stable" branch with spdy since it receives only critical bugs (like security related and some segfaults). A lot of bugs in the spdy module already have been fixed in 1.5.x. Also note, that number of clients with spdy/2 support rapidly decreases. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Apr 16 12:01:55 2014 From: nginx-forum at nginx.us (nanne) Date: Wed, 16 Apr 2014 08:01:55 -0400 Subject: bug in spdy - 499 response code on long running requests In-Reply-To: <11184074.2Ym87tqr3L@vbart-laptop> References: <11184074.2Ym87tqr3L@vbart-laptop> Message-ID: agree Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240278,249336#msg-249336 From reallfqq-nginx at yahoo.fr Wed Apr 16 12:08:20 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 16 Apr 2014 14:08:20 +0200 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: References: <20140416104121.GZ34696@mdounin.ru> Message-ID: Rather than posting raw outputs, try to understand the piece orf advice Maxim gave to you. I suspect those SSL-validation websites test websites... which correspond to a certain standard port. 
I see a problem, don't you ? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From kyprizel at gmail.com Wed Apr 16 12:18:47 2014 From: kyprizel at gmail.com (kyprizel) Date: Wed, 16 Apr 2014 16:18:47 +0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: References: <20140416104121.GZ34696@mdounin.ru> Message-ID: I think the problem is your nginx uses libssl version from your OS (0.9.8/1.0.0). On Wed, Apr 16, 2014 at 4:08 PM, B.R. wrote: > Rather than posting raw outputs, try to understand the piece orf advice > Maxim gave to you. > > I suspect those SSL-validation websites test websites... which correspond > to a certain standard port. > I see a problem, don't you ? > --- > *B. R.* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 16 13:13:07 2014 From: nginx-forum at nginx.us (Nemesiz) Date: Wed, 16 Apr 2014 09:13:07 -0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <836f9ce95b0872fd2924ae5876485099.NginxMailingListEnglish@forum.nginx.org> I recompiled with default openssl lib (1.0.1e-3ubuntu1.2) Default install path: # nginx -V nginx version: nginx/1.5.13 built by gcc 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu9) TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx/1.5.13 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_ssl_module --add-module=/usr/src/nginx-modules/nginx-openssl-version --with-pcre=/usr/src/nginx-modules/pcre-8.35 nginx clone to /root/test # ./nginx -V nginx version: nginx/1.5.13 built by gcc 4.8.1 (Ubuntu/Linaro 4.8.1-10ubuntu9) TLS SNI support enabled configure arguments: --prefix=/usr/local/nginx/1.5.13 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --with-pcre-jit --with-debug --with-http_ssl_module --add-module=/usr/src/nginx-modules/nginx-openssl-version --with-pcre=/usr/src/nginx-modules/pcre-8.35 The same settings but default nginx runs on 80 and 443 port. 
Cloned nginx runs on 81 nad 443 default nginx on port 443: --> Testing Protocols SSLv2 NOT offered (ok) SSLv3 offered TLSv1 offered (ok) TLSv1.1 not offered TLSv1.2 not offered SPDY/NPN http/1.1 (advertised) cloned nginx on port 444: --> Testing Protocols SSLv2 NOT offered (ok) SSLv3 NOT offered (ok) TLSv1 offered (ok) TLSv1.1 offered (ok) TLSv1.2 offered (ok) # ldd /usr/local/nginx/1.5.13/sbin/nginx linux-vdso.so.1 => (0x00007fff623fe000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6e46143000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f6e45f0a000) libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f6e45cab000) libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f6e458cf000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f6e456b6000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6e452ed000) /lib64/ld-linux-x86-64.so.2 (0x00007f6e4636c000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6e450e9000) # ldd /root/test/nginx linux-vdso.so.1 => (0x00007fffe478f000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f6dcdfc5000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f6dcdd8c000) libssl.so.1.0.0 => /lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f6dcdb2d000) libcrypto.so.1.0.0 => /lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f6dcd751000) libz.so.1 => /lib/x86_64-linux-gnu/libz.so.1 (0x00007f6dcd538000) libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007f6dcd16f000) /lib64/ld-linux-x86-64.so.2 (0x00007f6dce1ee000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f6dccf6b000) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249305,249339#msg-249339 From vbart at nginx.com Wed Apr 16 13:35:53 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 16 Apr 2014 17:35:53 +0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4225421.zKesRFoFAh@vbart-laptop> Check that you have run the same nginx, that you are trying to configure. $ ps -fC nginx wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Apr 16 13:42:27 2014 From: nginx-forum at nginx.us (Nemesiz) Date: Wed, 16 Apr 2014 09:42:27 -0400 Subject: openssl 1.0.1 and tls1.1 and up In-Reply-To: <836f9ce95b0872fd2924ae5876485099.NginxMailingListEnglish@forum.nginx.org> References: <0c02a2cfb703216656bac7d2eecadc8e.NginxMailingListEnglish@forum.nginx.org> <836f9ce95b0872fd2924ae5876485099.NginxMailingListEnglish@forum.nginx.org> Message-ID: I found where the problems was. I thought ssl options can be different in virtual host. Default server settings was not overwritten. server { include conf/default-settings; root /var/www; server_name ""; ssl on; ssl_certificate ssl/nmz_ssl.crt; ssl_certificate_key ssl/nmz_ssl.key; ssl_session_timeout 5m; ssl_protocols SSLv3 TLSv1; ssl_ciphers ALL:!ADH:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv3:+EXP; ssl_prefer_server_ciphers on; location / { try_files $uri $uri/ =404; } location /smokeping/ { proxy_pass http://10.10.10.2/smokeping/; } } Others servers: server { include conf/default-site-ssl; include conf/default-settings; ssl_certificate /etc/nginx/ssl/host.pem; ssl_certificate_key /etc/nginx/ssl/host.key; .... 
conf/default-site-ssl : listen 443 ssl; ssl on; ssl_session_cache shared:SSL:50m; ssl_session_timeout 5m; ssl_dhparam /etc/nginx/ssl/dhparam.pem; ssl_prefer_server_ciphers on; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:HIGH:!aNULL:!eNULL:!EXPORT:!DES:!3DES:!MD5:!PSK'; add_header Strict-Transport-Security "max-age=31536000; includeSubdomains;"; nginx -t did not show any error. http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_protocols So some ssl options cannot be overwritten ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249305,249341#msg-249341 From yaoweibin at gmail.com Wed Apr 16 13:44:04 2014 From: yaoweibin at gmail.com (Weibin Yao) Date: Wed, 16 Apr 2014 21:44:04 +0800 Subject: Nginx does not support Forward SSL proxy connection In-Reply-To: <20140416094134.GV34696@mdounin.ru> References: <7f5585a519666b3ffb6105ec28d6faf2.NginxMailingListEnglish@forum.nginx.org> <20140416094134.GV34696@mdounin.ru> Message-ID: Tengine are ready to add the support of forward proxy, you can have a look at this pull request: https://github.com/alibaba/tengine/pull/335 Thanks. 2014-04-16 17:41 GMT+08:00 Maxim Dounin : > Hello! > > On Wed, Apr 16, 2014 at 04:51:04AM -0400, sai1511 wrote: > >> I'm trying to setup Forward SSL Proxy through nginx. However I came across >> this post,http://forum.nginx.org/read.php?2,15124,15256#msg-15256. Is this >> still not supported or you just don't have this on your list ? > > No changes since then, nginx isn't a forward proxy, and there are > no plans to turn it into a forward proxy. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao From anoopalias01 at gmail.com Wed Apr 16 14:01:59 2014 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 16 Apr 2014 19:31:59 +0530 Subject: mediawiki fastcgi_cache not working Message-ID: Hi, I have setup nginx with ngx_cache_purge to work with mediawiki nginx.conf ### fastcgi_cache_path /etc/nginx/cache levels=1:2 keys_zone=MYAPP:100m inactive=60m; fastcgi_cache_key "$scheme$host$request_uri"; ### vhost.conf ### set $no_cache ""; if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } if ($request_method ~ "PURGE") { rewrite (.*) /PURGE$1 last; } location / { if (!-e $request_filename) { rewrite ^/([^?]*)(?:\?(.*))? 
/index.php?title=$1&$2 last; } if ($uri ~* "\.(ico|css|js|gif|jpe?g|png)(\?[0-9]+)?$") { expires max; break; } } location /PURGE/ { allow 127.0.0.1; fastcgi_cache_purge MYAPP; } location ~ \.php$ { fastcgi_cache MYAPP; fastcgi_cache_methods GET HEAD; fastcgi_cache_valid 200 5m; add_header X-Cache "$upstream_cache_status"; fastcgi_cache_bypass $no_cache $http_cookie; fastcgi_no_cache $no_cache $http_cookie; fastcgi_ignore_headers Expires Cache-Control; } #### The caching is working fine as .But whenever I edit a page I can see the following in the access log 127.0.0.1 - - [16/Apr/2014:09:37:59 -0400] "PURGE /Knowledge_Base HTTP/1.1" 405 166 "-" "MediaWiki/1.21.3 SquidPurgeClient" - "-" 0 Regardless the page is never purged and I see the cached page unless nginx is restarted . What am I doing wrong here . Why is the PURGE request generating a 405 response? Thank you, -- *Anoop P Alias* GNUSYS -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 16 14:30:24 2014 From: nginx-forum at nginx.us (Shrirang) Date: Wed, 16 Apr 2014 10:30:24 -0400 Subject: ngx http limit req : burst=0 cannot support more than 1000 RPS Message-ID: I am using "ngx_http_limit_req" module. After going through the code I see that if "burst = 0" (i.e. not specified) then maximum rate limiting that can be offered is 1000 RPS only. I have seen this in my stress test too. I didn't see this in any documentation. Want to clarify if this is really true or I missed something? --- Reasoning : 1. Module expects uniform request arrival. Any request that breaks the uniformity, will get rate limited. 2. Configured rate = 1000r/s. This means 1 request-per-ms. 3. If more than 1 request is received in same "ms" time, then only 1 request is served. All other requests are rate limited. 4. For above configuration, if "burst=X" is configured, then "X" number of requests are served while requests more than that will get rate limited. 5. If Configured rate = 2000r/s (i.e. 2 request-per-ms). If "burst=0" (i.e. not specified), then too it supports only 1-request-per-ms rate. If 3 requests are received in same "ms" then only 1 is served and other 2 are rate limited. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249344,249344#msg-249344 From mdounin at mdounin.ru Wed Apr 16 14:31:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Apr 2014 18:31:59 +0400 Subject: Nginx does not support Forward SSL proxy connection In-Reply-To: References: <7f5585a519666b3ffb6105ec28d6faf2.NginxMailingListEnglish@forum.nginx.org> <20140416094134.GV34696@mdounin.ru> Message-ID: <20140416143159.GC34696@mdounin.ru> Hello! On Wed, Apr 16, 2014 at 09:44:04PM +0800, Weibin Yao wrote: > Tengine are ready to add the support of forward proxy, you can have a > look at this pull request: https://github.com/alibaba/tengine/pull/335 Just a note: support for CONNECT method != forward proxy. It doesn't make other parts of code to behave like they should in case of forward proxy. E.g., it won't stop proxy module from handling X-Accel-* headers by default, and so on. > > Thanks. > > 2014-04-16 17:41 GMT+08:00 Maxim Dounin : > > Hello! > > > > On Wed, Apr 16, 2014 at 04:51:04AM -0400, sai1511 wrote: > > > >> I'm trying to setup Forward SSL Proxy through nginx. However I came across > >> this post,http://forum.nginx.org/read.php?2,15124,15256#msg-15256. Is this > >> still not supported or you just don't have this on your list ? 
> > > > No changes since then, nginx isn't a forward proxy, and there are > no plans to turn it into a forward proxy. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Weibin Yao > Developer @ Server Platform Team of Taobao > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 16 14:40:13 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Apr 2014 18:40:13 +0400 Subject: ngx http limit req : burst=0 cannot support more than 1000 RPS In-Reply-To: References: Message-ID: <20140416144013.GD34696@mdounin.ru> Hello! On Wed, Apr 16, 2014 at 10:30:24AM -0400, Shrirang wrote: > I am using "ngx_http_limit_req" module. After going through the code I see > that if "burst = 0" (i.e. not specified) then maximum rate limiting that can > be offered is 1000 RPS only. I have seen this in my stress test too. > I didn't see this in any documentation. Want to clarify if this is really > true or I missed something? > > --- > Reasoning : > 1. Module expects uniform request arrival. Any request that breaks the > uniformity, will get rate limited. > 2. Configured rate = 1000r/s. This means 1 request-per-ms. > 3. If more than 1 request is received in same "ms" time, then only 1 request > is served. All other requests are rate limited. > 4. For above configuration, if "burst=X" is configured, then "X" number of > requests are served while requests more than that will get rate limited. > 5. If Configured rate = 2000r/s (i.e. 2 request-per-ms). If "burst=0" (i.e. > not specified), then too it supports only 1-request-per-ms rate. If 3 > requests are received in same "ms" then only 1 is served and other 2 are > rate limited. Yes, the module uses time with millisecond resolution, and hence rates higher than 1 request per millisecond won't work without burst specified. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Apr 16 14:43:43 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 16 Apr 2014 18:43:43 +0400 Subject: ngx http limit req : burst=0 cannot support more than 1000 RPS In-Reply-To: References: Message-ID: <465889372.letkK8Fm42@vbart-laptop> On Wednesday 16 April 2014 10:30:24 Shrirang wrote: > I am using "ngx_http_limit_req" module. After going through the code I see > that if "burst = 0" (i.e. not specified) then maximum rate limiting that can > be offered is 1000 RPS only. I have seen this in my stress test too. > I didn't see this in any documentation. Want to clarify if this is really > true or I missed something? > [..] The real limitation can be even lower. It's actually limited by nginx timer granularity which under big load can be even worse than 1 ms. wbr, Valentin V. Bartenev
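For readers following the limit_req thread above, here is a minimal sketch of the kind of configuration both replies point at: a rate above 1000r/s only behaves as expected when a burst value is present to absorb requests that arrive within the same millisecond. The zone name, zone size and burst figure are placeholders chosen for illustration, not values recommended by the posters:

limit_req_zone $binary_remote_addr zone=perip:10m rate=2000r/s;   # defined in the http{} context

server {
    location / {
        # burst absorbs requests that land in the same millisecond;
        # without it, anything above one request per millisecond is rejected
        limit_req zone=perip burst=50 nodelay;
    }
}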
From nginx-forum at nginx.us Wed Apr 16 14:46:46 2014 From: nginx-forum at nginx.us (Shrirang) Date: Wed, 16 Apr 2014 10:46:46 -0400 Subject: ngx http limit req : burst=0 cannot support more than 1000 RPS In-Reply-To: <465889372.letkK8Fm42@vbart-laptop> References: <465889372.letkK8Fm42@vbart-laptop> Message-ID: <224638180b5e31861355a6ecc23f61b4.NginxMailingListEnglish@forum.nginx.org> Thanks a lot for quick confirmation Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249347,249348#msg-249348 From nginx-forum at nginx.us Wed Apr 16 14:47:09 2014 From: nginx-forum at nginx.us (Shrirang) Date: Wed, 16 Apr 2014 10:47:09 -0400 Subject: ngx http limit req : burst=0 cannot support more than 1000 RPS In-Reply-To: <20140416144013.GD34696@mdounin.ru> References: <20140416144013.GD34696@mdounin.ru> Message-ID: <89c75fe6e00aecff450e8c1ec6511bf1.NginxMailingListEnglish@forum.nginx.org> Thanks for quick confirmation Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249346,249349#msg-249349 From yaoweibin at gmail.com Wed Apr 16 15:12:59 2014 From: yaoweibin at gmail.com (Weibin Yao) Date: Wed, 16 Apr 2014 23:12:59 +0800 Subject: Nginx does not support Forward SSL proxy connection In-Reply-To: <20140416143159.GC34696@mdounin.ru> References: <7f5585a519666b3ffb6105ec28d6faf2.NginxMailingListEnglish@forum.nginx.org> <20140416094134.GV34696@mdounin.ru> <20140416143159.GC34696@mdounin.ru> Message-ID: OK, Thanks for your comment. We will fix that and welcome any suggestion. Nginx is almost a forward proxy, it's useful for some proxy users. Thank you. 2014-04-16 22:31 GMT+08:00 Maxim Dounin : > Hello! > > On Wed, Apr 16, 2014 at 09:44:04PM +0800, Weibin Yao wrote: > >> Tengine are ready to add the support of forward proxy, you can have a >> look at this pull request: https://github.com/alibaba/tengine/pull/335 > > Just a note: support for CONNECT method != forward proxy. It > doesn't make other parts of code to behave like they should in > case of forward proxy. E.g., it won't stop proxy module from handling > X-Accel-* headers by default, and so on. > >> >> Thanks. >> >> 2014-04-16 17:41 GMT+08:00 Maxim Dounin : >> > Hello! >> > >> > On Wed, Apr 16, 2014 at 04:51:04AM -0400, sai1511 wrote: >> > >> >> I'm trying to setup Forward SSL Proxy through nginx. However I came across >> >> this post,http://forum.nginx.org/read.php?2,15124,15256#msg-15256. Is this >> >> still not supported or you just don't have this on your list ?
>> > >> > -- >> > Maxim Dounin >> > http://nginx.org/ >> > >> > _______________________________________________ >> > nginx mailing list >> > nginx at nginx.org >> > http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> -- >> Weibin Yao >> Developer @ Server Platform Team of Taobao >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Weibin Yao Developer @ Server Platform Team of Taobao From nginx-forum at nginx.us Wed Apr 16 22:58:36 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 16 Apr 2014 18:58:36 -0400 Subject: Requests being blocked client-side In-Reply-To: References: Message-ID: <3bbd43c3d9b0ab24becd118a76fc7eda.NginxMailingListEnglish@forum.nginx.org> Hi Lee. Yes using PHP. Could we simply just call session_write_close() immediately after we open and verify the session details? I'd like to avoid adding another piece of infrastructure (redis) on every web server. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249233,249352#msg-249352 From nginx-forum at nginx.us Thu Apr 17 02:24:15 2014 From: nginx-forum at nginx.us (justink101) Date: Wed, 16 Apr 2014 22:24:15 -0400 Subject: Requests being blocked client-side In-Reply-To: References: Message-ID: Lee we switched to using memcached for sessions and this helped, but still seeing blocking, though less time. If we open two tabs, in the first page fire an ajax request that takes 20+ seconds to run, then in the second tab refresh, the page blocks loading in the second tab, but now instead of waiting the entire 20+ seconds for the first tab (ajax request) to finish, it only blocks around 8 seconds. See screenshot: http://i.imgur.com/wcpKae4.png It seems to be consistent, nearly 8 seconds of block every time. Any idea on this? What other fileio could be blocking? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249233,249353#msg-249353 From nginx-forum at nginx.us Thu Apr 17 05:09:44 2014 From: nginx-forum at nginx.us (cybermass) Date: Thu, 17 Apr 2014 01:09:44 -0400 Subject: outbound emails are not sent via nginx smtp proxy Message-ID: <0f39862bec16f9cd88d1923b3a65b8b9.NginxMailingListEnglish@forum.nginx.org> All emails ending at the destination note the sender as the actual backend and not nginx proxy. How can I force smtp outbound through nginx? I tried smtp_bind_address = proxy.ip.addr in main.cf for postfix. And tried adding an SNAT rule in iptables to route through the proxy with no luck. Any advice? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249355,249355#msg-249355 From joydeep.bakshi at netzrezepte.de Thu Apr 17 08:08:20 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Thu, 17 Apr 2014 13:38:20 +0530 Subject: how to configure nginx with running apache ? Message-ID: Greetings !! I am new to nginx and seeking some help from this list. I already have apache running with vhosts and like to install nginx as a frontend accelerator before apache. How can I configure nginx so that it can simply run with apache vhost ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 17 09:57:18 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 17 Apr 2014 05:57:18 -0400 Subject: how to configure nginx with running apache ? 
In-Reply-To: References: Message-ID: <158220279ed2d01db499897c9cbfee80.NginxMailingListEnglish@forum.nginx.org> you should make your apache listen on 127.0.0.1:80 and nginx on your external IP:80 (443 if you need ssl) did you checked the manuals in wthe wiki? http://wiki.nginx.org/Configuration -> proxying examples http://wiki.nginx.org/LikeApache-> all you need for a start after this you should check proxy_cache and different location {} - setups for your static files. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249356,249359#msg-249359 From steve at greengecko.co.nz Thu Apr 17 10:34:43 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 17 Apr 2014 22:34:43 +1200 Subject: how to configure nginx with running apache ? In-Reply-To: References: Message-ID: <534FAE43.4090608@greengecko.co.nz> Can anyone tell my what thebenefits are ( apart from .htaccess support, which I see all too often as a curse ) why anyone would do this in preference to just using a pure nginx solution? Sorry this is a bit of a hijack, but as a long time ( 1.3 on ) apache user, and nginx convert, I can't see why you would. Steve On 17/04/14 20:08, Joydeep Bakshi wrote: > Greetings !! > > I am new to nginx and seeking some help from this list. > > I already have apache running with vhosts and like to install nginx as > a frontend accelerator before apache. > How can I configure nginx so that it can simply run with apache vhost ? > > Thanks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 17 10:50:15 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 17 Apr 2014 06:50:15 -0400 Subject: how to configure nginx with running apache ? In-Reply-To: <534FAE43.4090608@greengecko.co.nz> References: <534FAE43.4090608@greengecko.co.nz> Message-ID: > Can anyone tell my what thebenefits are ( apart from .htaccess > support, > which I see all too often as a curse ) why anyone would do this in > preference to just using a pure nginx solution? > - out-of-the-box running stuff like mod_php / suphp - excessive use of rewite-rules in .htacces to make urls look like REST-like (typo3) - well tested environments Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249356,249361#msg-249361 From lists-nginx at swsystem.co.uk Thu Apr 17 10:55:57 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Thu, 17 Apr 2014 11:55:57 +0100 Subject: how to configure nginx with running apache ? In-Reply-To: <534FAE43.4090608@greengecko.co.nz> References: <534FAE43.4090608@greengecko.co.nz> Message-ID: I've had a situation whereby a legacy django install was running on apache, nginx was used for caching static content on this and to serve other locations. I have another server that started off infront of apache and one by one over several weeks vhosts were migrated to nginx with the default site just a proxy to apache. I think there's one site still like this as the site's important and the owner's too lazy^Wbusy to test the nginx version that's been setup. Steve. On 17/04/2014 11:34, Steve Holdoway wrote: > Can anyone tell my what thebenefits are ( apart from .htaccess support, which I see all too often as a curse ) why anyone would do this in preference to just using a pure nginx solution? 
> > Sorry this is a bit of a hijack, but as a long time ( 1.3 on ) apache user, and nginx convert, I can't see why you would. > > Steve > > On 17/04/14 20:08, Joydeep Bakshi wrote: > >> Greetings !! >> >> I am new to nginx and seeking some help from this list. >> >> I already have apache running with vhosts and like to install nginx as a frontend accelerator before apache. >> How can I configure nginx so that it can simply run with apache vhost ? >> >> Thanks >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx [1] > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx [1] Links: ------ [1] http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Thu Apr 17 10:56:49 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Thu, 17 Apr 2014 16:26:49 +0530 Subject: how to configure nginx with running apache ? In-Reply-To: References: <534FAE43.4090608@greengecko.co.nz> Message-ID: Hello, thanks for your responses and wiki link. Regarding setup nginx before apache; I already have running vhosts with apache and a lot og .htaccess rules. Hence I have to place nginx before apache without disturbing the setup. Thanks On Thu, Apr 17, 2014 at 4:20 PM, mex wrote: > > Can anyone tell my what thebenefits are ( apart from .htaccess > > support, > > which I see all too often as a curse ) why anyone would do this in > > preference to just using a pure nginx solution? > > > > - out-of-the-box running stuff like mod_php / suphp > - excessive use of rewite-rules in .htacces to make urls look like > REST-like > (typo3) > - well tested environments > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249356,249361#msg-249361 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 17 11:04:46 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 17 Apr 2014 07:04:46 -0400 Subject: how to configure nginx with running apache ? In-Reply-To: References: Message-ID: > Hence I have to place nginx before apache without disturbing the > setup. 
> works seamlessly and speeds up your apache, when using proxy_cache, assuming your apache listens on 8080 server { listen 80; server_name myhost; location / { root /path/to/myapp/public; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://apache:8080; } } from this basic snippet you can go on and test different setups with additional nginx-virtualhosts before making changes work in your prod-environment server { # this is for testing new setups listen 81; server_name myhost; location / { root /path/to/myapp/public; proxy_cache cache; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://apache:8080; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249356,249364#msg-249364 From nginx-forum at nginx.us Thu Apr 17 12:12:09 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 17 Apr 2014 08:12:09 -0400 Subject: 499 in a proxy environment and short execution php scripts In-Reply-To: <534DB89A.5040908@gmail.com> References: <534DB89A.5040908@gmail.com> Message-ID: <0e62af47946e714626a9a8099358bb67.NginxMailingListEnglish@forum.nginx.org> maybe you should capture the traffic with wireshark to see which party sends what packet in which order. regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249320,249365#msg-249365 From joydeep.bakshi at netzrezepte.de Thu Apr 17 12:48:59 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Thu, 17 Apr 2014 18:18:59 +0530 Subject: how to configure nginx with running apache ? In-Reply-To: References: Message-ID: Hello, I have installed and configured nginx for an existing domain, but I get "it works" at the browser. I have changed all ports from 80 to 8080 in the apache vhost and created the same vhost in /etc/nginx/vhosts.d/mydomain.conf as below server { listen 80; # Default listen port server_name site.mydomain.com; access_log /var/log/nginx/dustri.de; gzip on; # Turn on gZip gzip_disable msie6; gzip_static on; gzip_comp_level 9; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; location / { proxy_redirect off; # Do not redirect this proxy - It needs to be pass-through proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Server-Address $server_addr; proxy_pass_header Set-Cookie; #proxy_pass http://127.0.0.1:6081; # Pass all traffic through to Varnish proxy_pass http://127.0.0.1:8080; } }
> > > > works seemlessly and speeds up your apache, when using proxy_cache, > assuming your > apache listens on 8080 > > > server { > listen 80; > server_name myhost; > location / { > root /path/to/myapp/public; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_pass http://apache:8080; > } > } > > from this basic snippet you can go on and test different setups > with additional nginx-virtualhosts before making changes work in your > prod-environment > > > server { > # this is for testing new setups > listen 81; > server_name myhost; > location / { > root /path/to/myapp/public; > proxy_cache cache; > proxy_set_header X-Forwarded-Host $host; > proxy_set_header X-Forwarded-Server $host; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_pass http://apache:8080; > } > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249356,249364#msg-249364 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Thu Apr 17 14:20:09 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Thu, 17 Apr 2014 19:50:09 +0530 Subject: https not working Message-ID: Hello list, I have place nginx before apache as an accelator, and the http is working fine. But the isue is with https . Even there are some https link based on port like https://mysite.com:45 . None of the https is working. What option is available in nginx to simply handover the https protocol to apache ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From mailist at yandex.com Thu Apr 17 14:58:26 2014 From: mailist at yandex.com (=?utf-8?B?RW1yZSDDh2FtYWxhbg==?=) Date: Thu, 17 Apr 2014 17:58:26 +0300 Subject: ngx + lua Ssl certificate content Message-ID: <382471397746706@web29g.yandex.ru> Hi, My nginx_extras packages works good such as reverse proxy with Ssl certificates. I want to add LUA codes into nginx.conf file for getting Certificates contents. I added this lua codes and I checked certificates which ID start to handshake with my server. Have u got any ideas or way? thanks From siefke_listen at web.de Thu Apr 17 15:34:04 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Thu, 17 Apr 2014 17:34:04 +0200 Subject: Commodo SSL Message-ID: <20140417173404.a9c94a0fde3d2f1f8a6e4e70@web.de> Hello, i try to run HTTPS with a commodo ssl certificate. I use the follow tutorial: https://support.comodo.com/index.php?_m=knowledgebase&_a=viewarticle&kbarticleid=1365 I has use cat to write one crt file. The configuration: listen 80; listen 443 ssl spdy; ssl on; ssl_certificate /etc/nginx/keys/silviosiefke_com.crt; ssl_certificate_key /etc/nginx/keys/silviosiefke_com.key; ssl_protocols SSLv3 TLSv1; ssl_prefer_server_ciphers on; ssl_ciphers "EECDH+AESGCM EDH+AESGCM EECDH -RC4 EDH -CAMELLIA -SEED !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; But nginx want not work. Nginx give me first only: ks3374456 keys # rc-service nginx restart * Checking nginx' configuration ... 
nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/keys/silviosiefke_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line) nginx: configuration file /etc/nginx/nginx.conf test failed nginx: [emerg] PEM_read_bio_X509_AUX("/etc/nginx/keys/silviosiefke_com.crt") failed (SSL: error:0906D066:PEM routines:PEM_read_bio:bad end line) nginx: configuration file /etc/nginx/nginx.conf test failed * failed, please correct errors above [ !! ] * ERROR: nginx failed to stop I think the problem was with the begin and end lines created by the cat command. I cleaned the file and now nginx starts and stops, but the connection still does not work. Opera 12.16 only shows "Sichere Verbindung: Schwerer Fehler (552)" (secure connection: fatal error), same in Chrome. I do not have Firefox on the system. Does someone have experience with Comodo and can describe the way to set it up for nginx? Thanks for help & Nice Day Silvio From reallfqq-nginx at yahoo.fr Thu Apr 17 15:43:13 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 17 Apr 2014 17:43:13 +0200 Subject: Commodo SSL In-Reply-To: <20140417173404.a9c94a0fde3d2f1f8a6e4e70@web.de> References: <20140417173404.a9c94a0fde3d2f1f8a6e4e70@web.de> Message-ID: Using http://lmgtfy.com/?q=PEM+routines%3APEM_read_bio%3Abad+end+line I found http://drewsymo.com/2013/11/fixing-openssl-error/ --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From siefke_listen at web.de Thu Apr 17 17:00:00 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Thu, 17 Apr 2014 19:00:00 +0200 Subject: Commodo SSL In-Reply-To: References: <20140417173404.a9c94a0fde3d2f1f8a6e4e70@web.de> Message-ID: <20140417190000.8b1bc835d8d3ad43f7a5ed74@web.de> On Thu, 17 Apr 2014 17:43:13 +0200 "B.R." wrote: > Using http://lmgtfy.com/?q=PEM+routines%3APEM_read_bio%3Abad+end+line > I found http://drewsymo.com/2013/11/fixing-openssl-error/ That is fixed. Google I can use, thank you. But I did not use Google; I opened the file and fixed those lines. That is not the problem, but now I get no error message, no log entries, nothing. Only the browser says Fehlercode (error code): ERR_SSL_PROTOCOL_ERROR > --- > *B. R.* Thank you for help & Nice Day Silvio From Venkat.Morampudi at rms.com Thu Apr 17 18:34:14 2014 From: Venkat.Morampudi at rms.com (Venkat Morampudi) Date: Thu, 17 Apr 2014 11:34:14 -0700 Subject: Intermittent failures with SecureChannelFailure error on client Message-ID: Hi, We are using NGINX (version 1.4.4) in front of HAProxy for SSL termination. We are seeing intermittent "Could not create SSL/TLS secure channel" failures from our .net client. On enabling debug logging on NGINX the following error is being recorded at the same time the client sees the error. [info] 27456#0: *43842 SSL_do_handshake() failed (SSL: error:1408C095:SSL routines:SSL3_GET_FINISHED:digest check failed) while SSL handshaking, client: 10.76.121.148, server: 0.0.0.0:443 Based on the documentation I have disabled ssl session reuse, it didn't seem to help. Suggestions are really appreciated. Thanks, Venkat ________________________________ This message and any attachments contain information that may be RMS Inc. confidential and/or privileged. If you are not the intended recipient (or authorized to receive for the intended recipient), and have received this message in error, any use, disclosure or distribution is strictly prohibited. If you have received this message in error, please notify the sender immediately by replying to the e-mail and permanently deleting the message from your computer and/or storage system. -------------- next part -------------- An HTML attachment was scrubbed...
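Before the Commodo SSL thread continues below, a hedged sketch of how the concatenated certificate file is usually assembled for nginx: the site certificate comes first, followed by the intermediate certificates, and every "-----END CERTIFICATE-----" line has to be followed by a newline before the next "-----BEGIN CERTIFICATE-----". A missing line break between the blocks is a common cause of the "PEM_read_bio:bad end line" error quoted above. The file names are placeholders, not the actual files discussed in the thread:

# build one chained file from the certificates the CA delivered (placeholder names):
#     cat example_com.crt intermediate_1.crt intermediate_2.crt > example_com.chained.crt
# then point nginx at the chained file:
ssl_certificate     /etc/nginx/keys/example_com.chained.crt;
ssl_certificate_key /etc/nginx/keys/example_com.key;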
URL: From nginx-forum at nginx.us Thu Apr 17 19:11:19 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 17 Apr 2014 15:11:19 -0400 Subject: Commodo SSL In-Reply-To: <20140417190000.8b1bc835d8d3ad43f7a5ed74@web.de> References: <20140417190000.8b1bc835d8d3ad43f7a5ed74@web.de> Message-ID: <5d1dcb9f6e171da28c757e71a88a5883.NginxMailingListEnglish@forum.nginx.org> if your site is silviosiefke.com, there is no tls-service available on port 443 can you please paste the output of nginx -t / nginx -V ? ######################################################## testssl.sh v2.0rc2 (https://testssl.sh) ######################################################## Using "OpenSSL 1.0.1g 7 Apr 2014" on On port 443 @ silviosiefke.com seems a server but not TLS/SSL enabled. Ignore? ^C Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249370,249378#msg-249378 From nginx-forum at nginx.us Fri Apr 18 05:07:13 2014 From: nginx-forum at nginx.us (sim4life) Date: Fri, 18 Apr 2014 01:07:13 -0400 Subject: NginX reverse proxy with iRedMail Apache2 In-Reply-To: <1fd8588a24a56d351f8b593a862ca68b.NginxMailingListEnglish@forum.nginx.org> References: <1fd8588a24a56d351f8b593a862ca68b.NginxMailingListEnglish@forum.nginx.org> Message-ID: I changed the faulty line in NginX default config file to: server { listen 80 default_server; listen [::]:80; root /usr/share/nginx/html; index index.html index.htm index.php; server_name mydomain.com www.mydomain.com; location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_pass http://127.0.0.1:8080; } location ~ /\.ht { deny all; } } So now, on hitting in the browser: https://mail.mydomain.com I get the error on the browser: This webpage has a redirect loop The webpage at https://mail.mydomain.com/ has resulted in too many redirects. Clearing your cookies for this site or allowing third-party cookies may fix the problem. If not, it is possibly a server configuration issue and not a problem with your computer. The NginX error is gone but the Apache error remains the same. I think it's some config problem with setup of iRedMail so I'm going to decommission Apache2 for iRedMail setup and move the entire iRedMail setup on NginX directly. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249312,249382#msg-249382 From mdounin at mdounin.ru Fri Apr 18 13:03:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 18 Apr 2014 17:03:45 +0400 Subject: Intermittent failures with SecureChannelFailure error on client In-Reply-To: References: Message-ID: <20140418130345.GN34696@mdounin.ru> Hello! On Thu, Apr 17, 2014 at 11:34:14AM -0700, Venkat Morampudi wrote: > Hi, > > We are using NGINX (version 1.4.4) in front of HAProxy for SSl > termination. We are seeing intermittent "Could not create > SSL/TLS secure channel" failure from our .net client. On > enabling debug logging on NGINX the following error is being > recorded at the same time the client see the error. > > [info] 27456#0: *43842 SSL_do_handshake() failed (SSL: > error:1408C095:SSL routines:SSL3_GET_FINISHED:digest check > failed) while SSL handshaking, client: 10.76.121.148, server: > 0.0.0.0:443 >From the error message it looks like that handshake failed due to incorrect digest value got from the client. Do you control network and are able to eliminate a possibility of real man-in-the-middle attack? If yes, this is likely a bug either in the client or in OpenSSL library on nginx side. 
Some things to test, in no particular order: - A workaround from here may work, as well as advise to obtain more details from the client: http://stackoverflow.com/questions/2078682/net-httpwebrequest-https-error - Try to add SSL_OP_TLS_ROLLBACK_BUG option in nginx, it may help in case of some client bugs which used to result in digest check failures (see "man SSL_set_options" for details). - Checking if the problem persists with latest OpenSSL library (or, vice versa, with old good 0.9.8*) may be beneficial, as well as upgrading nginx to at least latest 1.4.x version. > Based on the documentation I have disabled ssl session reuse, it > didn't seem to help. Did you do this in your .net client? [...] > This message and any attachments contain information that may be > RMS Inc. confidential and/or privileged. If you are not the > intended recipient (or authorized to receive for the intended Just a side note: sending messages to the whole world with such a disclaimer looks silly. -- Maxim Dounin http://nginx.org/ From siefke_listen at web.de Fri Apr 18 14:29:09 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Fri, 18 Apr 2014 16:29:09 +0200 Subject: Commodo SSL In-Reply-To: <5d1dcb9f6e171da28c757e71a88a5883.NginxMailingListEnglish@forum.nginx.org> References: <20140417190000.8b1bc835d8d3ad43f7a5ed74@web.de> <5d1dcb9f6e171da28c757e71a88a5883.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140418162909.95552795ba637b52537b2514@web.de> Hello, On Thu, 17 Apr 2014 15:11:19 -0400 "mex" wrote: > if your site is silviosiefke.com, there is no tls-service available > on port 443 I has checked with nmap, this say me is open. > can you please paste the output of nginx -t / nginx -V ? ks3374456 siefke # nginx -t nginx: the configuration file /etc/nginx/nginx.conf syntax is ok nginx: configuration file /etc/nginx/nginx.conf test is successful ks3374456 siefke # nginx -V nginx version: nginx/1.4.7 TLS SNI support enabled configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include --with-ld-opt=-L/usr/lib --http-log-path=/var/log/nginx/access_log --http-client-body-temp-path=//var/lib/nginx/tmp/client --http-proxy-temp-path=//var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=//var/lib/nginx/tmp/fastcgi --http-scgi-temp-path=//var/lib/nginx/tmp/scgi --http-uwsgi-temp-path=//var/lib/nginx/tmp/uwsgi --with-ipv6 --with-pcre --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx_syslog_patch-165affd9741f0e30c4c8225da5e487d33832aca3 --without-http_limit_conn_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_spdy_module --with-http_stub_status_module --with-http_sub_module --with-http_xslt_module --with-http_realip_module --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/headers-more-nginx-module-0.25 --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx_http_push_module-0.712 --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/ngx_slowfs_cache-1.10 --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/ngx-fancyindex-0.3.3 --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/ngx_http_auth_pam_module-1.3 --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx-dav-ext-module-0.0.3 
--add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx-push-stream-module-0.4.0 --with-http_ssl_module --without-mail_imap_module --without-mail_pop3_module --without-mail_smtp_module --user=nginx --group=nginx > ######################################################## > testssl.sh v2.0rc2 (https://testssl.sh) > ######################################################## > > Using "OpenSSL 1.0.1g 7 Apr 2014" on > > > On port 443 @ silviosiefke.com seems a server but not TLS/SSL enabled. > Ignore? ^C mmh that not understand. With cacert.org the server run without probs, but the problem is that cacert not accept most of browser. Why not accept commodo ssl? Thank you for help & Nice Day Silvio From ar at xlrs.de Sat Apr 19 15:15:58 2014 From: ar at xlrs.de (Axel) Date: Sat, 19 Apr 2014 17:15:58 +0200 Subject: Commodo SSL In-Reply-To: <20140418162909.95552795ba637b52537b2514@web.de> References: <20140417190000.8b1bc835d8d3ad43f7a5ed74@web.de> <5d1dcb9f6e171da28c757e71a88a5883.NginxMailingListEnglish@forum.nginx.org> <20140418162909.95552795ba637b52537b2514@web.de> Message-ID: <72d39c41d80443f5c564e3f966b7f3d6@xlrs.de> Hello, i had some problems with Comodo SSL Certs, too. I don't know your error message, but the Howto which you linked here is old. It looks like Comodo had to replace their certificate chain and did not update the howto. I used portecle (you can get it from sf.net) to examine the certificates. rgds, Axel On 2014-04-18 16:29, Silvio Siefke wrote: > Hello, > > On Thu, 17 Apr 2014 15:11:19 -0400 "mex" wrote: > >> if your site is silviosiefke.com, there is no tls-service available >> on port 443 > > I has checked with nmap, this say me is open. > >> can you please paste the output of nginx -t / nginx -V ? > > ks3374456 siefke # nginx -t > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > nginx: configuration file /etc/nginx/nginx.conf test is successful > > ks3374456 siefke # nginx -V > nginx version: nginx/1.4.7 > TLS SNI support enabled > configure arguments: --prefix=/usr --conf-path=/etc/nginx/nginx.conf > --error-log-path=/var/log/nginx/error_log --pid-path=/run/nginx.pid > --lock-path=/run/lock/nginx.lock --with-cc-opt=-I/usr/include > --with-ld-opt=-L/usr/lib --http-log-path=/var/log/nginx/access_log > --http-client-body-temp-path=//var/lib/nginx/tmp/client > --http-proxy-temp-path=//var/lib/nginx/tmp/proxy > --http-fastcgi-temp-path=//var/lib/nginx/tmp/fastcgi > --http-scgi-temp-path=//var/lib/nginx/tmp/scgi > --http-uwsgi-temp-path=//var/lib/nginx/tmp/uwsgi --with-ipv6 > --with-pcre > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx_syslog_patch-165affd9741f0e30c4c8225da5e487d33832aca3 > --without-http_limit_conn_module --with-http_addition_module > --with-http_dav_module --with-http_flv_module --with-http_geoip_module > --with-http_gunzip_module --with-http_gzip_static_module > --with-http_mp4_module --with-http_perl_module > --with-http_random_index_module --with-http_realip_module > --with-http_spdy_module --with-ht > tp_stub_ > status_module --with-http_sub_module --with-http_xslt_module > --with-http_realip_module > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/headers-more-nginx-module-0.25 > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx_http_push_module-0.712 > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/ngx_slowfs_cache-1.10 > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/ngx-fancyindex-0.3.3 > 
--add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/ngx_http_auth_pam_module-1.3 > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx-dav-ext-module-0.0.3 > --add-module=/var/tmp/portage/www-servers/nginx-1.4.7/work/nginx-push-stream-module-0.4.0 > --with-http_ssl_module --without-mail_imap_module > --without-mail_pop3_module --without-mail_smtp_module --user=nginx > --group=nginx > > > >> ######################################################## >> testssl.sh v2.0rc2 (https://testssl.sh) >> ######################################################## >> >> Using "OpenSSL 1.0.1g 7 Apr 2014" on >> >> >> On port 443 @ silviosiefke.com seems a server but not TLS/SSL enabled. >> Ignore? ^C > > > mmh that not understand. With cacert.org the server run without probs, > but > the problem is that cacert not accept most of browser. Why not accept > commodo > ssl? > > > Thank you for help & Nice Day > Silvio > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat Apr 19 17:49:51 2014 From: nginx-forum at nginx.us (google000) Date: Sat, 19 Apr 2014 13:49:51 -0400 Subject: How to save into variable proxy header response? Message-ID: <73a71b9e12d186a9aba4c25bfd0688b4.NginxMailingListEnglish@forum.nginx.org> Hello I'm using nginx as reverse proxy for some domain .. my configuration for domain looks like that : server { listen 80; server_name www.mydomain.com mydomain.com; index index.html index.htm index.php default.html default.htm; location / { proxy_buffering off; proxy_max_temp_file_size 0; proxy_pass http://www.domainresultget.com; }} response header from this domain "www.domainresultget.com" looks like that .. HTTP/1.1 200 OK CF-RAY: 11d864fce3900a9c-PRG Connection: keep-alive Content-Disposition: attachment; filename="filename_title_download.exe" Content-Length: 351744 Content-Type: application/x-msdownload Date: Sat, 19 Apr 2014 10:24:48 GMT Server: nginx X-Powered-By: PHP/5.3.3 I want to save filename title in to variable for future use ? I mean this one "filename_title_download.exe" I want to save this in to variable.. Can someone explain me how to do this ? thank you so much, and sorry for my english. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249402,249402#msg-249402 From mdounin at mdounin.ru Sat Apr 19 19:37:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 19 Apr 2014 23:37:11 +0400 Subject: How to save into variable proxy header response? In-Reply-To: <73a71b9e12d186a9aba4c25bfd0688b4.NginxMailingListEnglish@forum.nginx.org> References: <73a71b9e12d186a9aba4c25bfd0688b4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140419193710.GU34696@mdounin.ru> Hello! On Sat, Apr 19, 2014 at 01:49:51PM -0400, google000 wrote: > Hello I'm using nginx as reverse proxy for some domain .. my configuration > for domain looks like that : > > server > { > listen 80; > server_name www.mydomain.com mydomain.com; > index index.html index.htm index.php default.html > default.htm; > location / { > proxy_buffering off; > proxy_max_temp_file_size 0; > proxy_pass http://www.domainresultget.com; > }} > > response header from this domain "www.domainresultget.com" looks like that > .. 
> > HTTP/1.1 200 OK > CF-RAY: 11d864fce3900a9c-PRG > Connection: keep-alive > Content-Disposition: attachment; filename="filename_title_download.exe" > Content-Length: 351744 > Content-Type: application/x-msdownload > Date: Sat, 19 Apr 2014 10:24:48 GMT > Server: nginx > X-Powered-By: PHP/5.3.3 > > I want to save filename title in to variable for future use ? I mean this > one "filename_title_download.exe" I want to save this in to variable.. Can > someone explain me how to do this ? The Content-Disposition header value from upstream response is available as $upstream_http_content_disposition variable (see http://nginx.org/r/$upstream_http_). The map module can be used to extract part of the value, see http://nginx.org/r/map. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sat Apr 19 20:50:19 2014 From: nginx-forum at nginx.us (elmono) Date: Sat, 19 Apr 2014 16:50:19 -0400 Subject: Configure multiple subdomains and location blocks for reverse-proxy. Message-ID: <4844ab8b46f53eac7d9a935a8ffc2230.NginxMailingListEnglish@forum.nginx.org> I have not been able to successfully locate the information I need to do this since many of the examples online are our of date or use bad practices so I will ask here. First, my setup is as follows: 1. domain_i_own.tld ==> my_public_ip 2. my_server running on port 8083 3. Firewall wide open Requirement: 1. domain_i_own.tld/my_server ==> domain_i_own.tld:8083 2. subdomain01.domain_i_own.tld ==> domain_i_own.tld/my_server ==> domain_i_own.tld:8083 My configuration: upstream my_server { server localhost:8083; } server { listen 0.0.0.0:80; server_name domain_i_own.tld; root /var/www/domain_i_own.tld; location /my_server { proxy_pass http://my_server; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect http://my_server http://$host; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } Result: Not Found The requested URL /my_server/ was not found on this server. my_server HTTP Server listening at localhost Port 8083 If I directly go to http://domain_i_own.tld:8083 ==> Success. If I directly go to http://10.0.0.1:8083 ===> Success. Thus, I know the services are all running properly. However, the reverse proxy is not. It is not clear, exactly, why this is happening. Is the location block being ignored? I would like to get this part solved before I tackle the subdomain problem. Thanks in advance for any help you may be able to provide. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249404,249404#msg-249404 From nginx-forum at nginx.us Sun Apr 20 09:29:10 2014 From: nginx-forum at nginx.us (google000) Date: Sun, 20 Apr 2014 05:29:10 -0400 Subject: How to save into variable proxy header response? In-Reply-To: <20140419193710.GU34696@mdounin.ru> References: <20140419193710.GU34696@mdounin.ru> Message-ID: I have tryeid to do something like this but it seems that variable $upstream_http_content_disposition is empty.. when I want to redirect to different url.. proxy_pass http://www.domainresultget.com; set $x $upstream_http_content_disposition; return 301 http://domaintest.com/test.php?x=$x; variable $x is empty.. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249402,249405#msg-249405 From nginx-forum at nginx.us Sun Apr 20 15:38:11 2014 From: nginx-forum at nginx.us (google000) Date: Sun, 20 Apr 2014 11:38:11 -0400 Subject: How to save into variable proxy header response? 
In-Reply-To: References: <20140419193710.GU34696@mdounin.ru> Message-ID: Can someone help me with configuration how to grab this filename and save into variable using map module ? I'm trying 2 days and still not working for me :(( Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249402,249406#msg-249406 From luky-37 at hotmail.com Sun Apr 20 17:17:37 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 20 Apr 2014 19:17:37 +0200 Subject: nginx ssl handshake vs apache In-Reply-To: <52C6CD31.3030908@kearsley.me> References: <52C6CD31.3030908@kearsley.me> Message-ID: > Hi > > I was watching this video by fastly ceo http://youtu.be/zrSvoQz1GOs?t=24m44s > he talks about the nginx ssl handshake versus apache and comes to the > conclusion that apache was more efficient at mass handshakes due to > nginx blocking while it calls back to openssl > > I was hoping to get other people's opinion on this and find out if what > he says is accurate or not I would be interested in opinions/statements about this as well. Regards, Lukas From joydeep.bakshi at netzrezepte.de Mon Apr 21 06:01:08 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 21 Apr 2014 11:31:08 +0530 Subject: how to allow apache to control SSL traffic ? Message-ID: Hello list, My apache vhosts are configured to take care of SSL connections. I have installed nginix as http accelerator. How can I instruct nginx to pass all SSL request to apache SSL vhost ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From benimaur at gmail.com Mon Apr 21 08:14:18 2014 From: benimaur at gmail.com (Benimaur Gao) Date: Mon, 21 Apr 2014 16:14:18 +0800 Subject: Problem with http/https interoperation Message-ID: Hello, all I have a web application, which use nginx as frontend reverse-proxy sever. It's configured to use https to interact with user agent. Several tomcats are used as backend application servers. The connection between nginx and tomcats is http. The network structure is illustrated as following: Browser -- https --> nginx --- http --> tomcat1 \-- http --> tomcat2 The normal request works ok. Problem comes when my program returns 302/Redirection to user agent. Since tomcat has no idea about https' environment, the location field in http response header is set as: "Location: http://redirect.url.com". After receiving that, the browser will launch subsequent requests through http. Now I overcome this by redirecting all requests towards 80 to 443: server{ listen 80; server_name *.url.com; rewrite ^(.*) https://$server_name$1 permanent; } but this method has two obvious defects: 1. the first request launch by use agent is still http, which will incur security. 2. nginx rewrite module works by returning 301/Move Permanently, I think too much redirection between user agent and my application will result in poor efficiency. I wonder if there is some ready-made module to manipulate the 302 Location header, or any other better ways to workaround this problem. -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Mon Apr 21 08:17:57 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 21 Apr 2014 13:47:57 +0530 Subject: how to allow apache to control SSL traffic ? 
In-Reply-To: References: Message-ID: Hello, I like to mention the following error from nginx log *453 SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream though the ssl is working fine with apache along (after proper modification in apache vhost ) when nginx is down. Any clue please ? Thanks On Mon, Apr 21, 2014 at 11:31 AM, Joydeep Bakshi < joydeep.bakshi at netzrezepte.de> wrote: > Hello list, > > My apache vhosts are configured to take care of SSL connections. I have > installed nginix as http accelerator. How can I instruct nginx to pass all > SSL request to apache SSL vhost ? > > Thanks > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Apr 21 08:46:54 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 21 Apr 2014 12:46:54 +0400 Subject: nginx ssl handshake vs apache In-Reply-To: References: <52C6CD31.3030908@kearsley.me> Message-ID: On Apr 20, 2014, at 21:17 , Lukas Tribus wrote: >> Hi >> >> I was watching this video by fastly ceo http://youtu.be/zrSvoQz1GOs?t=24m44s >> he talks about the nginx ssl handshake versus apache and comes to the >> conclusion that apache was more efficient at mass handshakes due to >> nginx blocking while it calls back to openssl >> >> I was hoping to get other people's opinion on this and find out if what >> he says is accurate or not > > I would be interested in opinions/statements about this as well. nginx worker blocks on SSL handshake as well as on disk I/O. But in contrast to the disk I/O during the handshake you can not do anything else on this CPU at this time until CPU time share slice will end for the current process/thread. If a typical time share slice is 1ms and the 1024-bit key handshake time is 0.5ms then the chances that another process/thread is able to run on the same CPU are 25%. The lesser handshake time the lesser chances. -- Igor Sysoev http://nginx.com From contact at jpluscplusm.com Mon Apr 21 08:48:34 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 21 Apr 2014 09:48:34 +0100 Subject: how to allow apache to control SSL traffic ? In-Reply-To: References: Message-ID: On 21 Apr 2014 07:01, "Joydeep Bakshi" wrote: > > Hello list, > > My apache vhosts are configured to take care of SSL connections. I have installed nginix as http accelerator. How can I instruct nginx to pass all SSL request to apache SSL vhost ? Most simply, try stopping nginx listening on port 443 and make apache listen on 443. If you want more advanced suggestions than that, you'll probably have to explain what you're trying to do in more detail. J -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Mon Apr 21 09:30:22 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 21 Apr 2014 15:00:22 +0530 Subject: how to allow apache to control SSL traffic ? In-Reply-To: References: Message-ID: Hello Jonathan, thanks for your response. Here is the details what I have done so far. 
SSL configuration for nginx is as below server { listen 443 ssl; server_name example.com ; gzip on; # Turn on gZip gzip_disable msie6; gzip_static on; gzip_comp_level 9; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; ssl_certificate /etc/apache2/myca/server.crt; ssl_certificate_key /etc/apache2/myca/ssl.key; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_redirect off; # Do not redirect this proxy - It needs to be pass-through proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Server-Address $server_addr; proxy_pass_header Set-Cookie; proxy_pass https://127.0.0.1:4443; } } accordingly apache has Listen 4443 # General setup for the virtual host DocumentRoot /srv/www/htdocs/xxx SSLEngine on #Here, I am allowing only "high" and "medium" security key lengths. SSLCipherSuite HIGH:MEDIUM #Here I am allowing SSLv3 and TLSv1, I am NOT allowing the old SSLv2. SSLProtocol all -SSLv2 #Server Certificate: SSLCertificateFile /etc/apache2/myca/server.crt #Server Private Key: SSLCertificateKeyFile /etc/apache2/myca/ssl.key # Server Certificate Chain SSLCertificateChainFile /etc/apache2/myca/ssl.crt SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW DirectoryIndex index.php Options Indexes FollowSymLinks MultiViews AllowOverride ALL Options None Order allow,deny Allow from all but when try to access SSL , nginx error.log shows *453 SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream Hope the info help Thanks On Mon, Apr 21, 2014 at 2:18 PM, Jonathan Matthews wrote: > On 21 Apr 2014 07:01, "Joydeep Bakshi" > wrote: > > > > Hello list, > > > > My apache vhosts are configured to take care of SSL connections. I have > installed nginix as http accelerator. How can I instruct nginx to pass all > SSL request to apache SSL vhost ? > > Most simply, try stopping nginx listening on port 443 and make apache > listen on 443. > > If you want more advanced suggestions than that, you'll probably have to > explain what you're trying to do in more detail. > > J > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Mon Apr 21 09:36:29 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 21 Apr 2014 15:06:29 +0530 Subject: how to allow apache to control SSL traffic ? In-Reply-To: References: Message-ID: Hello Jonathan, thanks for your response. Here is the details what I have done so far. 
SSL configuration for nginx is as below server { listen 443 ssl; server_name example.com ; gzip on; # Turn on gZip gzip_disable msie6; gzip_static on; gzip_comp_level 9; gzip_proxied any; gzip_types text/plain text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript; ssl_certificate /etc/apache2/myca/server.crt; ssl_certificate_key /etc/apache2/myca/ssl.key; ssl_protocols SSLv2 SSLv3 TLSv1; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_redirect off; # Do not redirect this proxy - It needs to be pass-through proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Server-Address $server_addr; proxy_pass_header Set-Cookie; proxy_pass https://127.0.0.1:4443; } } accordingly apache has Listen 4443 # General setup for the virtual host DocumentRoot /srv/www/htdocs/xxx SSLEngine on #Here, I am allowing only "high" and "medium" security key lengths. SSLCipherSuite HIGH:MEDIUM #Here I am allowing SSLv3 and TLSv1, I am NOT allowing the old SSLv2. SSLProtocol all -SSLv2 #Server Certificate: SSLCertificateFile /etc/apache2/myca/server.crt #Server Private Key: SSLCertificateKeyFile /etc/apache2/myca/ssl.key # Server Certificate Chain SSLCertificateChainFile /etc/apache2/myca/ssl.crt SSLCipherSuite ALL:!ADH:!EXPORT:!SSLv2:RC4+RSA:+HIGH:+MEDIUM:+LOW DirectoryIndex index.php Options Indexes FollowSymLinks MultiViews AllowOverride ALL Options None Order allow,deny Allow from all but when try to access SSL , nginx error.log shows *453 SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream Hope the info help Thanks On Mon, Apr 21, 2014 at 2:18 PM, Jonathan Matthews wrote: > On 21 Apr 2014 07:01, "Joydeep Bakshi" > wrote: > > > > Hello list, > > > > My apache vhosts are configured to take care of SSL connections. I have > installed nginix as http accelerator. How can I instruct nginx to pass all > SSL request to apache SSL vhost ? > > Most simply, try stopping nginx listening on port 443 and make apache > listen on 443. > > If you want more advanced suggestions than that, you'll probably have to > explain what you're trying to do in more detail. > > J > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Apr 21 10:03:45 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 21 Apr 2014 14:03:45 +0400 Subject: nginx ssl handshake vs apache In-Reply-To: References: <52C6CD31.3030908@kearsley.me> Message-ID: On Apr 21, 2014, at 12:46 , Igor Sysoev wrote: > On Apr 20, 2014, at 21:17 , Lukas Tribus wrote: > >>> Hi >>> >>> I was watching this video by fastly ceo http://youtu.be/zrSvoQz1GOs?t=24m44s >>> he talks about the nginx ssl handshake versus apache and comes to the >>> conclusion that apache was more efficient at mass handshakes due to >>> nginx blocking while it calls back to openssl >>> >>> I was hoping to get other people's opinion on this and find out if what >>> he says is accurate or not >> >> I would be interested in opinions/statements about this as well. > > nginx worker blocks on SSL handshake as well as on disk I/O. 
But in contrast > to the disk I/O during the handshake you can not do anything else on this CPU > at this time until CPU time share slice will end for the current process/thread. > If a typical time share slice is 1ms and the 1024-bit key handshake time is 0.5ms > then the chances that another process/thread is able to run on the same CPU are 25%. Sorry, 50%. > The lesser handshake time the lesser chances. -- Igor Sysoev http://nginx.com From lists at ruby-forum.com Mon Apr 21 10:05:52 2014 From: lists at ruby-forum.com (Swarts Ette) Date: Mon, 21 Apr 2014 12:05:52 +0200 Subject: For sale USRP N210 Message-ID: <91f99d827d4790f870e079ba91ec56f8@ruby-forum.com> USRP N210 The N210 radio comes with LFTX, LFRX, WBX, and SBX draughtboards. 4 Antennas 2 VERT900s and 2 VERT2450s. Asking: $1800. Email: kf5mml at yahoo.com What is your full name and address for shipping? I can ship worldwide. -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Mon Apr 21 10:55:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Apr 2014 14:55:46 +0400 Subject: How to save into variable proxy header response? In-Reply-To: References: <20140419193710.GU34696@mdounin.ru> Message-ID: <20140421105545.GV34696@mdounin.ru> Hello! On Sun, Apr 20, 2014 at 05:29:10AM -0400, google000 wrote: > I have tryeid to do something like this but it seems that variable > $upstream_http_content_disposition is empty.. when I want to redirect to > different url.. > > proxy_pass http://www.domainresultget.com; > set $x $upstream_http_content_disposition; > return 301 http://domaintest.com/test.php?x=$x; > > variable $x is empty.. This is not going work, as "return 301" will happen before the proxy_pass. See here for basic explanation on how rewrite module works: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html. It looks like you want nginx to do a request to upstream server, and then return a completely different response to a client. In case of 200 responses, this is something that can be done, e.g., by using auth request module, see here: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html It might also be a good idea to change your backends to actually return responses you want to be returned to clients instead of trying to change them with nginx configs. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 21 10:59:52 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Apr 2014 14:59:52 +0400 Subject: Problem with http/https interoperation In-Reply-To: References: Message-ID: <20140421105952.GW34696@mdounin.ru> Hello! On Mon, Apr 21, 2014 at 04:14:18PM +0800, Benimaur Gao wrote: > Hello, all > > I have a web application, which use nginx as frontend reverse-proxy > sever. It's configured to use https to interact with user agent. Several > tomcats are used as backend application servers. The connection between > nginx and tomcats is http. The network structure is illustrated as > following: > > Browser -- https --> nginx --- http --> tomcat1 > \-- http --> tomcat2 > > The normal request works ok. Problem comes when my program returns > 302/Redirection to user agent. Since tomcat has no idea about https' > environment, the location field in http response header is set as: > "Location: http://redirect.url.com". After receiving that, the browser will > launch subsequent requests through http. [...] > I wonder if there is some ready-made module to manipulate the 302 > Location header, or any other better ways to workaround this problem. 
http://nginx.org/r/proxy_redirect -- Maxim Dounin http://nginx.org/ From etienne.champetier at free.fr Mon Apr 21 11:03:50 2014 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Mon, 21 Apr 2014 13:03:50 +0200 (CEST) Subject: X-Real-Proto ? In-Reply-To: <47600860.100353627.1398077983130.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <664648784.100361359.1398078230463.JavaMail.root@zimbra65-e11.priv.proxad.net> Hi, There is ngx_http_realip_module to have the real ip in nginx when you are behind a load balancer If the load balancer is also terminating the ssl, and connecting to nginx with http, how to set the real proto ? Thanks in advance Etienne From igor at sysoev.ru Mon Apr 21 11:47:56 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 21 Apr 2014 15:47:56 +0400 Subject: X-Real-Proto ? In-Reply-To: <664648784.100361359.1398078230463.JavaMail.root@zimbra65-e11.priv.proxad.net> References: <664648784.100361359.1398078230463.JavaMail.root@zimbra65-e11.priv.proxad.net> Message-ID: <9B39E7D1-FEC1-45B5-AC1F-6CCE6C27D9E2@sysoev.ru> On Apr 21, 2014, at 15:03 , etienne.champetier at free.fr wrote: > Hi, > > There is ngx_http_realip_module to have the real ip in nginx when you are behind a load balancer > If the load balancer is also terminating the ssl, and connecting to nginx with http, how to set the real proto ? proxy_set_header X-Real-Proto $scheme; http://nginx.org/r/$scheme -- Igor Sysoev http://nginx.com From igor at sysoev.ru Mon Apr 21 11:56:41 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 21 Apr 2014 15:56:41 +0400 Subject: X-Real-Proto ? In-Reply-To: <9B39E7D1-FEC1-45B5-AC1F-6CCE6C27D9E2@sysoev.ru> References: <664648784.100361359.1398078230463.JavaMail.root@zimbra65-e11.priv.proxad.net> <9B39E7D1-FEC1-45B5-AC1F-6CCE6C27D9E2@sysoev.ru> Message-ID: <54416D6A-834A-48DA-AF1A-478AC453A844@sysoev.ru> On Apr 21, 2014, at 15:47 , Igor Sysoev wrote: > On Apr 21, 2014, at 15:03 , etienne.champetier at free.fr wrote: > >> Hi, >> >> There is ngx_http_realip_module to have the real ip in nginx when you are behind a load balancer >> If the load balancer is also terminating the ssl, and connecting to nginx with http, how to set the real proto ? > > proxy_set_header X-Real-Proto $scheme; > > http://nginx.org/r/$scheme Sorry, misread your question. A load balancer can set any protocol name in any header, but it is impossible to switch to this protocol at this phase. -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Mon Apr 21 14:05:19 2014 From: nginx-forum at nginx.us (guyguy333) Date: Mon, 21 Apr 2014 10:05:19 -0400 Subject: feature request: smtp auth passthrough In-Reply-To: <52C2A77C.30205@14v.de> References: <52C2A77C.30205@14v.de> Message-ID: <5b09192ab079e80223f747f097ff4ecc.NginxMailingListEnglish@forum.nginx.org> Here you can find my patch to add SMTP AUTH LOGIN to backend server : https://github.com/guyguy333/nginx/commit/09ac17efa8cc28bf758d03ddafbccea663fa4779 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245939,249425#msg-249425 From joydeep.bakshi at netzrezepte.de Tue Apr 22 07:25:07 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 12:55:07 +0530 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish Message-ID: Hello all, My setting works well through nginx->apache but not through nginx->varnish->apache apache is configured to listen to port 8080 . when nginx uses proxy_pass http://127.0.0.1:8080 the sites are running fine. 
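For reference, that working variant would look something like the following minimal sketch (the server_name and header handling are illustrative, not taken from the message):

    server {
        listen 80;
        server_name example.com;                 # illustrative
        location / {
            proxy_set_header Host $host;
            proxy_pass http://127.0.0.1:8080;    # Apache backend, as described
        }
    }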
If I introduce varnish after nginx by [proxy_pass http://127.0.0.1:6082] the nginx starts throwing following error and browser also shows "*Zero Sized Reply"* [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while reading response header from upstream and /var/log/messages shows varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown request.#012Type 'help' for more info.#012all commands are in lower-case. varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd Cache-Control: max-age=0 obviously varnish is configured to listen to apache backend default { .host = "127.0.0.1"; .port = "8080"; } Can anyone please suggest the possible reason which is causing the problem ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Apr 22 08:09:52 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 22 Apr 2014 10:09:52 +0200 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: Message-ID: > Hello all,? >? > My setting works well through nginx->apache but not through? > nginx->varnish->apache? >? > apache is configured to listen to port 8080 . when nginx uses? >? > proxy_pass http://127.0.0.1:8080? >? > the sites are running fine.? >? > If I introduce varnish after nginx by [proxy_pass? > http://127.0.0.1:6082] the nginx starts throwing following error and? > browser also shows "Zero Sized Reply"? >? >? > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while? > reading response header from upstream? >? > and /var/log/messages shows? >? > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101? > Unknown request.#012Type 'help' for more info.#012all commands are in? > lower-case.? >? > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd? > Cache-Control: max-age=0 Can you capture the tcp 6082 traffic and post the entire HTTP conversation? Lukas From joydeep.bakshi at netzrezepte.de Tue Apr 22 08:45:26 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 14:15:26 +0530 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: Message-ID: Hello Lukas, I have just checked and found nothing # tcpdump -vv port 6082 tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes On Tue, Apr 22, 2014 at 1:39 PM, Lukas Tribus wrote: > > Hello all, > > > > My setting works well through nginx->apache but not through > > nginx->varnish->apache > > > > apache is configured to listen to port 8080 . when nginx uses > > > > proxy_pass http://127.0.0.1:8080 > > > > the sites are running fine. > > > > If I introduce varnish after nginx by [proxy_pass > > http://127.0.0.1:6082] the nginx starts throwing following error and > > browser also shows "Zero Sized Reply" > > > > > > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while > > reading response header from upstream > > > > and /var/log/messages shows > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 > > Unknown request.#012Type 'help' for more info.#012all commands are in > > lower-case. > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd > > Cache-Control: max-age=0 > > Can you capture the tcp 6082 traffic and post the entire HTTP conversation? 
> > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Apr 22 08:55:53 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 22 Apr 2014 10:55:53 +0200 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: , , Message-ID: > Hello Lukas, > > I have just checked and found nothing > > # tcpdump -vv port 6082 > tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size > 65535 bytes Fix your capture. - capture the traffic on loopback, not eth0 - don't truncate packets (-s 0) - write in a cap file, tcpdump output is not very helpful $ tcpdump -i lo -s0 -w tcp6082-traffic.cap port 6082 Lukas From joydeep.bakshi at netzrezepte.de Tue Apr 22 09:04:17 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 14:34:17 +0530 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: Message-ID: Thanks Lukas, here are the O/P 324?241^B^@^D^@^@^@^@^@^@^@^@^@377377^@^@^A^@^@^@N/VS}D^K^@B^@^@^@B^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@4212277@ ^@@^F262^B^?^@^@^A^?^@^@^A 244377^W302^\241262N^@^@^@^@200?252376(^@^@^B^D377327^A^A^D^B^A^C^C^GN/VS221D^K^@B^@^@^@B^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@4^@^@@^@@^F<302^?^@^@^A^?^@^@^A^W?377:200/R^\241262O200R252252376(^@^@^B^D377327^A^A^D^B^A^C^C^GN/VS240D^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212300@ ^@@^F262^M^?^@^@^A^?^@^@^A244377^W302^\241262O:200/SP^P^AV376^\^@^@N/VS317D^K^@230^C^@^@230^C^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^C 212212301@^@@^F256250^?^@^@^A^?^@^@^A244377^W302^\241262O:200/SP^X^AV^A^?^@^@GET / HTTP/1.0 Host: dustri.bookopt.de X-Real-IP: 198.168.1.4 X-Forwarded-For: 198.168.1.4 X-Server-Address: 198.168.1.2 Connection: closeUser-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.9; rv:27.0) Gecko/20100101 Firefox/27.0 Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8 Accept-Language: en-US,en;q=0.5 Accept-Encoding: gzip, deflate Cookie: PHPSESSID=itbms92ndcea9o8f4tcbfpsm2r2na3o0ioebelh9b0fucs06mpr1; fe_typo_user=a9682cd3423a1e558c956ab49641251b; currency=USD; location=OT; __utma=49103283.176983647.1397562265.1397562265.1397562265.1; __utmc=49103283; __utmz=49103283.1397562265.1.1.utmcsr=(direct)|utmccn=(direct)|utmcmd=(none); be_typo_user=a49b1cc985e75a936d94171a8ace0a4b; mypanel=up; typo3-login-cookiecheck=true; phpMyAdmin=itbms92ndcea9o8f4tcbfpsm2r2na3o0ioebelh9b0fucs06mpr1 Cache-Control: max-age=0 N/VS337D^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(^W242@ ^@@^F%,^?^@^@^A^?^@^@^A^W?377:200/S^\241265261P^P^Ac376^\^@^@N/VS0E^K^@^P^A^@^@^P^A^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^A^B^W243@^@@^F$O^?^@^@^A^?^@^@^A^W?377:200/S^\241265261P^X^Ac376366^@^@200 204 -----------------------------Varnish Cache CLI 1.0 ----------------------------- Linux,3.11.10-7-default,x86_64,-sfile,-smalloc,-hcritbit Type 'help' for command list. Type 'quit' to close CLI session. N/VSYE^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212302@ ^@@^F262^K^?^@^@^A^?^@^@^A244377^W302^\241265261:2000-P^P^A^376^\^@^@N/VS275E^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W244@^@@^F$316^?^@^@^A^?^@^@^A^W?377:2000-^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. 
N/VS315E^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212303@ ^@@^F262^?^@^@^A^?^@^@^A244377^W302^\241265261:2000207P^P^A^376^\^@^@N/VF^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W245@^@@^F$315^?^@^@^A^?^@^@^A^W?377:2000207^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. N/VS^TF^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212304@^@@^F262 ^?^@^@^A^?^@^@^A244377^W302^\241265261:2000341P^P^A^376^\^@^@N/VS9F^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W246@^@@^F$314^?^@^@^A^?^@^@^A^W?377:2000341^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. N/VSFF^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212305@ ^@@^F262^H^?^@^@^A^?^@^@^A244377^W302^\241265261:2001;P^P^A^376^\^@^@N/VSrF ^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W247@^@@^F$313^?^@^@^A^?^@^@^A^W?377:2001;^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. N/VS~F^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212306@ ^@@^F262^G^?^@^@^A^?^@^@^A244377^W302^\241265261:2001225P^P^A^376^\^@^@N/VS 242F^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W250@^@@^F$312^?^@^@^A^?^@^@^A^W?377:2001225^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. N/VS262F^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212307@ ^@@^F262^F^?^@^@^A^?^@^@^A244377^W302^\241265261:2001357P^P^A^376^\^@^@N/VS355F^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W251@^@@^F$311^?^@^@^A^?^@^@^A^W?377:2001357^\241265261P^X^Ac376v^@^@101 76 lines 18-49 N/VS#Y^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212314@ ^@@^F262^A^?^@^@^A^?^@^@^A244377^W302^\241265261:2003261P^P^A^376^\^@^@N/VS"]^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W256@^@@^F$304^?^@^@^A^?^@^@^A^W?377:2003261^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. N/VSN]^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212315@ ^@@^F262^@^?^@^@^A^?^@^@^A244377^W302^\241265261:2004^KP^P^A^376^\^@^@N/VS 336a^K^@220^@^@^@220^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^B^@202^W257@^@@^F$303^?^@^@^A^?^@^@^A^W?377:2004^K^\241265261P^X^Ac376v^@^@101 76 Unknown request. Type 'help' for more info. all commands are in lower-case. N/VS^Fb^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212316@ ^@@^F261377^?^@^@^A^?^@^@^A244377^W302^\241265261:2004eP^P^A^376^\^@^@212/VS320U^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212317@ ^@@^F261376^?^@^@^A^?^@^@^A244377^W302^\241265261:2004eP^Q^A^376^\^@^@212/VS3V^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(^W260@ ^@@^F%^^^?^@^@^A^?^@^@^A^W?377:2004e^\241265262P^Q^Ac376^\^@^@212/VSgV^K^@6^@^@^@6^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^H^@E^@^@(212320@ ^@@^F261375^?^@^@^A^?^@^@^A244377^W302^\241265262:2004fP^P^A^376^\^@^@ lines 57-84/84 (END) On Tue, Apr 22, 2014 at 2:25 PM, Lukas Tribus wrote: > > Hello Lukas, > > > > I have just checked and found nothing > > > > # tcpdump -vv port 6082 > > tcpdump: listening on eth0, link-type EN10MB (Ethernet), capture size > > 65535 bytes > > Fix your capture. 
> > - capture the traffic on loopback, not eth0 > - don't truncate packets (-s 0) > - write in a cap file, tcpdump output is not very helpful > > > $ tcpdump -i lo -s0 -w tcp6082-traffic.cap port 6082 > > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Apr 22 09:32:36 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 22 Apr 2014 11:32:36 +0200 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: , , , , Message-ID: > Thanks Lukas, > > here are the O/P Please post the capture file. From mdounin at mdounin.ru Tue Apr 22 09:26:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Apr 2014 13:26:21 +0400 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: Message-ID: <20140422092621.GK34696@mdounin.ru> Hello! On Tue, Apr 22, 2014 at 12:55:07PM +0530, Joydeep Bakshi wrote: > Hello all, > > My setting works well through nginx->apache but not through > nginx->varnish->apache > > apache is configured to listen to port 8080 . when nginx uses > > proxy_pass http://127.0.0.1:8080 > > the sites are running fine. > > If I introduce varnish after nginx by [proxy_pass http://127.0.0.1:6082] > the nginx starts throwing following error and browser also shows "*Zero > Sized Reply"* > > > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while reading > response header from upstream > > and /var/log/messages shows > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown > request.#012Type 'help' for more info.#012all commands are in lower-case. > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd > Cache-Control: max-age=0 > > obviously varnish is configured to listen to apache > > backend default { > .host = "127.0.0.1"; > .port = "8080"; > } > > Can anyone please suggest the possible reason which is causing the problem ? It looks like you've configured nginx to pass http to varnish CLI port. For obvious reasons this isn't going to work. -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Tue Apr 22 09:39:53 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 22 Apr 2014 14:39:53 +0500 Subject: High traffic on Nginx-Webservers !! Message-ID: Hello, We're using the cluster of 5 webservers using nginx (reverse proxy) + apache to handle php requests. Our web-servers are constantly high with load-avg of 2.0~3.0. I have seen people using varnish between nginx + apache. Could someone guide me if installing Nginx > Varnish > apache will reduce the server load ? It's urgent. Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Tue Apr 22 10:00:37 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 15:30:37 +0530 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: <20140422092621.GK34696@mdounin.ru> References: <20140422092621.GK34696@mdounin.ru> Message-ID: @Lukas - attached is the cap file @Maxim - after starting varnish only the following port comes up # netstat -nat | grep 60 tcp 0 0 0.0.0.0:6082 0.0.0.0:* LISTEN tcp 0 0 :::6082 :::* LISTEN On Tue, Apr 22, 2014 at 2:56 PM, Maxim Dounin wrote: > Hello! 
> > On Tue, Apr 22, 2014 at 12:55:07PM +0530, Joydeep Bakshi wrote: > > > Hello all, > > > > My setting works well through nginx->apache but not through > > nginx->varnish->apache > > > > apache is configured to listen to port 8080 . when nginx uses > > > > proxy_pass http://127.0.0.1:8080 > > > > the sites are running fine. > > > > If I introduce varnish after nginx by [proxy_pass > http://127.0.0.1:6082] > > the nginx starts throwing following error and browser also shows "*Zero > > Sized Reply"* > > > > > > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while > reading > > response header from upstream > > > > and /var/log/messages shows > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown > > request.#012Type 'help' for more info.#012all commands are in lower-case. > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd > > Cache-Control: max-age=0 > > > > obviously varnish is configured to listen to apache > > > > backend default { > > .host = "127.0.0.1"; > > .port = "8080"; > > } > > > > Can anyone please suggest the possible reason which is causing the > problem ? > > It looks like you've configured nginx to pass http to varnish CLI > port. For obvious reasons this isn't going to work. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: tcp6082-traffic.cap Type: application/octet-stream Size: 4582 bytes Desc: not available URL: From rainer at ultra-secure.de Tue Apr 22 10:17:06 2014 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 22 Apr 2014 12:17:06 +0200 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: Message-ID: <20140422121706.3dfb8c5e@suse3.ewadmin.local> Am Tue, 22 Apr 2014 14:39:53 +0500 schrieb shahzaib shahzaib : > Hello, > > We're using the cluster of 5 webservers using nginx (reverse > proxy) > + apache to handle php requests. Our web-servers are constantly high > with load-avg of 2.0~3.0. I have seen people using varnish between > nginx + apache. Could someone guide me if installing Nginx > Varnish > > apache will reduce the server load ? > > It's urgent. If the content is cachable, it will reduce load. But deploying varnish requires some experience and knowledge of the application. Unless the application is e.g. plain-vanilla wordpress, there are no out-of-the-box varnish tutorials to help you in your specific situation. If your content is cachable and you don't require varnish's cache-invalidation features, you could use nginx's proxy-caching features. See the wiki/handbook. From mdounin at mdounin.ru Tue Apr 22 10:21:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Apr 2014 14:21:07 +0400 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: <20140422092621.GK34696@mdounin.ru> Message-ID: <20140422102106.GO34696@mdounin.ru> Hello! On Tue, Apr 22, 2014 at 03:30:37PM +0530, Joydeep Bakshi wrote: > @Lukas - attached is the cap file > > @Maxim - after starting varnish only the following port comes up > > # netstat -nat | grep 60 > tcp 0 0 0.0.0.0:6082 0.0.0.0:* LISTEN > tcp 0 0 :::6082 :::* LISTEN Check varnish starup options. 
As per documentation, the "-a" argument of the varnishd is what you have to check: https://www.varnish-cache.org/docs/4.0/reference/varnishd.html https://www.varnish-cache.org/docs/4.0/tutorial/putting_varnish_on_port_80.html Anyway, it doesn't looks like something nginx-related. > > > On Tue, Apr 22, 2014 at 2:56 PM, Maxim Dounin wrote: > > > Hello! > > > > On Tue, Apr 22, 2014 at 12:55:07PM +0530, Joydeep Bakshi wrote: > > > > > Hello all, > > > > > > My setting works well through nginx->apache but not through > > > nginx->varnish->apache > > > > > > apache is configured to listen to port 8080 . when nginx uses > > > > > > proxy_pass http://127.0.0.1:8080 > > > > > > the sites are running fine. > > > > > > If I introduce varnish after nginx by [proxy_pass > > http://127.0.0.1:6082] > > > the nginx starts throwing following error and browser also shows "*Zero > > > Sized Reply"* > > > > > > > > > [error] 17147#0: *207 upstream sent no valid HTTP/1.0 header while > > reading > > > response header from upstream > > > > > > and /var/log/messages shows > > > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Wr 101 Unknown > > > request.#012Type 'help' for more info.#012all commands are in lower-case. > > > > > > varnishd[16984]: CLI telnet 127.0.0.1 42212 127.0.0.1 6082 Rd > > > Cache-Control: max-age=0 > > > > > > obviously varnish is configured to listen to apache > > > > > > backend default { > > > .host = "127.0.0.1"; > > > .port = "8080"; > > > } > > > > > > Can anyone please suggest the possible reason which is causing the > > problem ? > > > > It looks like you've configured nginx to pass http to varnish CLI > > port. For obvious reasons this isn't going to work. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Tue Apr 22 10:21:09 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 22 Apr 2014 15:21:09 +0500 Subject: High traffic on Nginx-Webservers !! In-Reply-To: <20140422121706.3dfb8c5e@suse3.ewadmin.local> References: <20140422121706.3dfb8c5e@suse3.ewadmin.local> Message-ID: Thanks for quick response, well our website is related to video streaming just like youtube. Could you provide me some guide to learn varnish for start-up ? Any suggestions will be highly appreciated. Shahzaib On Tue, Apr 22, 2014 at 3:17 PM, Rainer Duffner wrote: > Am Tue, 22 Apr 2014 14:39:53 +0500 > schrieb shahzaib shahzaib : > > > Hello, > > > > We're using the cluster of 5 webservers using nginx (reverse > > proxy) > > + apache to handle php requests. Our web-servers are constantly high > > with load-avg of 2.0~3.0. I have seen people using varnish between > > nginx + apache. Could someone guide me if installing Nginx > Varnish > > > apache will reduce the server load ? > > > > It's urgent. > > > If the content is cachable, it will reduce load. > > But deploying varnish requires some experience and knowledge of the > application. > Unless the application is e.g. plain-vanilla wordpress, there are no > out-of-the-box varnish tutorials to help you in your specific situation. > > If your content is cachable and you don't require varnish's > cache-invalidation features, you could use nginx's proxy-caching > features. 
> > See the wiki/handbook. > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Tue Apr 22 10:21:19 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 22 Apr 2014 12:21:19 +0200 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: , <20140422092621.GK34696@mdounin.ru>, Message-ID: > @Lukas - attached is the cap file? The request is bogus, imho. A GET request should not contain a body, it doesn't makes sense. > @Maxim - after starting varnish only the following port comes up? >? > # netstat -nat | grep 60? > tcp 0 0 0.0.0.0:6082? > 0.0.0.0:* LISTEN? > tcp 0 0 :::6082 :::* LISTEN? You are administrating and configuring this server, find out why and fix it. Lukas From shahzaib.cb at gmail.com Tue Apr 22 10:24:22 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 22 Apr 2014 15:24:22 +0500 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: <20140422121706.3dfb8c5e@suse3.ewadmin.local> Message-ID: >>If your content is cachable and you don't require varnish's cache-invalidation features, you could use nginx's proxy-caching features. Well, i want to cache application means, dynamic php pages. Will that be OK with nginx ? On Tue, Apr 22, 2014 at 3:21 PM, shahzaib shahzaib wrote: > Thanks for quick response, well our website is related to video streaming > just like youtube. Could you provide me some guide to learn varnish for > start-up ? > > Any suggestions will be highly appreciated. > > Shahzaib > > > > On Tue, Apr 22, 2014 at 3:17 PM, Rainer Duffner wrote: > >> Am Tue, 22 Apr 2014 14:39:53 +0500 >> schrieb shahzaib shahzaib : >> >> > Hello, >> > >> > We're using the cluster of 5 webservers using nginx (reverse >> > proxy) >> > + apache to handle php requests. Our web-servers are constantly high >> > with load-avg of 2.0~3.0. I have seen people using varnish between >> > nginx + apache. Could someone guide me if installing Nginx > Varnish >> > > apache will reduce the server load ? >> > >> > It's urgent. >> >> >> If the content is cachable, it will reduce load. >> >> But deploying varnish requires some experience and knowledge of the >> application. >> Unless the application is e.g. plain-vanilla wordpress, there are no >> out-of-the-box varnish tutorials to help you in your specific situation. >> >> If your content is cachable and you don't require varnish's >> cache-invalidation features, you could use nginx's proxy-caching >> features. >> >> See the wiki/handbook. >> >> >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Tue Apr 22 10:39:45 2014 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 22 Apr 2014 12:39:45 +0200 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: <20140422121706.3dfb8c5e@suse3.ewadmin.local> Message-ID: <20140422123945.0dd25991@suse3.ewadmin.local> Am Tue, 22 Apr 2014 15:21:09 +0500 schrieb shahzaib shahzaib : > Thanks for quick response, well our website is related to video > streaming just like youtube. Could you provide me some guide to learn > varnish for start-up ? > > Any suggestions will be highly appreciated. > > Shahzaib Do you do the streaming from nginx or from php? Documentation etc. is on https://www.varnish-cache.org/ For the improved handling of streaming, you will need the recently released v4.0. I don't know much about this, we don't do much varnish - and no streaming. 
From nginx-forum at nginx.us Tue Apr 22 10:40:38 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 22 Apr 2014 06:40:38 -0400 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: Message-ID: if the content is cacheable, using varnish or nginx-cache will definetely reduce load. we have a similar setup (nginx infront of apache+php) with an average of 5000 requests/second, and using nginx-cache with a cache-time of 1 minute reduced load from around 8 to 0.5 on the apache-servers, while the nginx-servers are still idleing at around 0.2 we use to nginx to cache static content as well as dynamic pages regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249437,249445#msg-249445 From shahzaib.cb at gmail.com Tue Apr 22 10:53:27 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 22 Apr 2014 15:53:27 +0500 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: Message-ID: Mex, That's a high amount of reduction in load-avg than :). Could you please refer me to some guide to start with nginx-cache ? And also it's drawbacks if i put the wrong configs ? As we're handling 18000 visitors concurrently on cluster of 5 webservers, which makes it 3600 concurrent users per server by dividing it with the 5. On Tue, Apr 22, 2014 at 3:40 PM, mex wrote: > if the content is cacheable, using varnish or nginx-cache will definetely > reduce load. > > we have a similar setup (nginx infront of apache+php) with an average of > 5000 requests/second, > and using nginx-cache with a cache-time of 1 minute reduced load from > around > 8 to > 0.5 on the apache-servers, while the nginx-servers are still idleing at > around 0.2 > > we use to nginx to cache static content as well as dynamic pages > > > > > > > regards, > > > mex > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249437,249445#msg-249445 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 22 11:56:51 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 22 Apr 2014 07:56:51 -0400 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: Message-ID: <24a678ed78327364c5caedb6c31919ce.NginxMailingListEnglish@forum.nginx.org> depending on your setup you might think about serving static content and videos directly from nginx: http://www.nginxtips.com/optimizing-nginx-for-video-sites/ anything served directly from nginx, not going to apache will boost your performance. > Mex, That's a high amount of reduction in load-avg than :). Could you > please refer me to some guide to start with nginx-cache ? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache http://wiki.nginx.org/ReverseProxyCachingExample a little more detailed: http://reviewsignal.com/blog/2013/08/29/reverse-proxy-and-cache-server-with-nginx/ > And also it's > drawbacks if i put the wrong configs ? stale content, but it might be ok to run with a very short cache-time Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249437,249447#msg-249447 From nginx-forum at nginx.us Tue Apr 22 12:05:35 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 22 Apr 2014 08:05:35 -0400 Subject: High traffic on Nginx-Webservers !! 
In-Reply-To: References: Message-ID: <06f7bc01a811b78cc2234e2948f03fc9.NginxMailingListEnglish@forum.nginx.org> PONG Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249437,249450#msg-249450 From nginx-forum at nginx.us Tue Apr 22 12:47:42 2014 From: nginx-forum at nginx.us (arnas) Date: Tue, 22 Apr 2014 08:47:42 -0400 Subject: _GET parameters with question and ampersand In-Reply-To: <49dba52c8c1a71a8800f26993ce62fee.NginxMailingListEnglish@forum.nginx.org> References: <49dba52c8c1a71a8800f26993ce62fee.NginxMailingListEnglish@forum.nginx.org> Message-ID: Ping? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249267,249452#msg-249452 From joydeep.bakshi at netzrezepte.de Tue Apr 22 13:08:33 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 22 Apr 2014 18:38:33 +0530 Subject: nginx reports [upstream sent no valid HTTP/1.0 header] when used with varnish In-Reply-To: References: <20140422092621.GK34696@mdounin.ru> Message-ID: Dear all, Problem Solved. Here is the steps required to fix it on opensuse 13.1 varnish listen to port 80 as default in opensuse and there is no port 6081 . Hence /etc/sysconfig/varnish has to be edited to add "-a :6081" like below VARNISHD_PARAMS="-f /etc/varnish/vcl.conf -a:6081 -T:6082 -s file,/var/cache/varnish,1M -u varnish" After restarting ; varnish provides 6081 port to be used with proxy_pass Thanks for the responses On Tue, Apr 22, 2014 at 3:51 PM, Lukas Tribus wrote: > > @Lukas - attached is the cap file > > The request is bogus, imho. A GET request should not contain a body, it > doesn't > makes sense. > > > > > @Maxim - after starting varnish only the following port comes up > > > > # netstat -nat | grep 60 > > tcp 0 0 0.0.0.0:6082 > > 0.0.0.0:* LISTEN > > tcp 0 0 :::6082 :::* LISTEN > > You are administrating and configuring this server, find out why and fix > it. > > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Apr 22 14:01:43 2014 From: nginx-forum at nginx.us (arnas) Date: Tue, 22 Apr 2014 10:01:43 -0400 Subject: _GET parameters with question and ampersand In-Reply-To: References: <49dba52c8c1a71a8800f26993ce62fee.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1ce5db65ebe36891204caaea7c423339.NginxMailingListEnglish@forum.nginx.org> I was able to solve it with correct arguments passing location / { try_files $uri $uri$is_args$args /index.php?subdomain=$subdomain&content=$uri&$args; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249267,249454#msg-249454 From sherlockhugo at gmail.com Tue Apr 22 14:41:04 2014 From: sherlockhugo at gmail.com (Raul Hugo) Date: Tue, 22 Apr 2014 09:41:04 -0500 Subject: High traffic on Nginx-Webservers !! In-Reply-To: <06f7bc01a811b78cc2234e2948f03fc9.NginxMailingListEnglish@forum.nginx.org> References: <06f7bc01a811b78cc2234e2948f03fc9.NginxMailingListEnglish@forum.nginx.org> Message-ID: Shahzaib, you have a different server for the static content like the videos? I'm talking about AWS cloudfront or something like that.. 2014-04-22 7:05 GMT-05:00 mex : > PONG > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249437,249450#msg-249450 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Un abrazo! 
*Ra?l Hugo * *Miembro Asociadohttp://apesol.org.pe SysAdmin Cel. #961-710-096 Linux Registered User #482081 - http://counter.li.org/ P Antes de imprimir este e-mail piense bien si es necesario hacerlo* -------------- next part -------------- An HTML attachment was scrubbed... URL: From etienne.champetier at free.fr Tue Apr 22 15:07:06 2014 From: etienne.champetier at free.fr (etienne.champetier at free.fr) Date: Tue, 22 Apr 2014 17:07:06 +0200 (CEST) Subject: X-Real-Proto ? In-Reply-To: <54416D6A-834A-48DA-AF1A-478AC453A844@sysoev.ru> Message-ID: <1742817018.103794598.1398179226362.JavaMail.root@zimbra65-e11.priv.proxad.net> hi ----- Mail original ----- > De: "Igor Sysoev" > ?: nginx at nginx.org > Envoy?: Lundi 21 Avril 2014 13:56:41 > Objet: Re: X-Real-Proto ? > > On Apr 21, 2014, at 15:47 , Igor Sysoev wrote: > > > On Apr 21, 2014, at 15:03 , etienne.champetier at free.fr wrote: > > > >> Hi, > >> > >> There is ngx_http_realip_module to have the real ip in nginx when > >> you are behind a load balancer > >> If the load balancer is also terminating the ssl, and connecting > >> to nginx with http, how to set the real proto ? > > > > proxy_set_header X-Real-Proto $scheme; > > > > http://nginx.org/r/$scheme > > Sorry, misread your question. A load balancer can set any protocol > name in any header, > but it is impossible to switch to this protocol at this phase. > I don't want to switch protocol, I just want to change the content of '$scheme', '$https' like with ngx_http_realip_module, I want: set_real_proto_from (like set_real_ip_from) real_proto_header (like real_ip_header) Is this "easy" to implement, or does this need big changes in nginx code? What's the best way to do something similar (if, map, ...)? Thanks for your quick answers, and thanks in advance. Etienne > > -- > Igor Sysoev > http://nginx.com From vbart at nginx.com Tue Apr 22 15:58:53 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 22 Apr 2014 19:58:53 +0400 Subject: High traffic on Nginx-Webservers !! In-Reply-To: References: Message-ID: <1750580.OUGvvnJKvh@vbart-workstation> On Tuesday 22 April 2014 15:53:27 shahzaib shahzaib wrote: > Mex, That's a high amount of reduction in load-avg than :). Could you > please refer me to some guide to start with nginx-cache ? And also it's > drawbacks if i put the wrong configs ? > http://nginx.com/resources/admin-guide/caching wbr, Valentin V. Bartenev From bletofarine at gmail.com Tue Apr 22 16:13:54 2014 From: bletofarine at gmail.com (Florian Le Goff) Date: Tue, 22 Apr 2014 18:13:54 +0200 Subject: using ssl_crl with CRLs (plural) Message-ID: Hi there, I am trying to setup a x509 client cert check with Nginx. Everything is running smoothly until I add the ssl_crl directive. Unfortunately, my CA happens to release its CRLs under several files... for historic reasons from what I heard. With Apache/mod_ssl; the SSLCARevocationFile directive sets a concatenated PEM-encoded CA CRLs, even if concatenated files are not fully compliant with the CRL logic. Is it something that might be setup with nginx ? The ability to setup a list of the individual files somewhere in the nginx configuration would be optimal. 
Thanks, Ref: http://serverfault.com/questions/565445/how-to-check-multiple-crl-lists-with-nginx-client-authentication?rq=1 -- Florian Le Goff From mdounin at mdounin.ru Tue Apr 22 17:03:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Apr 2014 21:03:09 +0400 Subject: using ssl_crl with CRLs (plural) In-Reply-To: References: Message-ID: <20140422170309.GB34696@mdounin.ru> Hello! On Tue, Apr 22, 2014 at 06:13:54PM +0200, Florian Le Goff wrote: > Hi there, > > I am trying to setup a x509 client cert check with Nginx. Everything > is running smoothly until I add the ssl_crl directive. > > Unfortunately, my CA happens to release its CRLs under several > files... for historic reasons from what I heard. > > With Apache/mod_ssl; the SSLCARevocationFile directive sets a > concatenated PEM-encoded CA CRLs, even if concatenated files are not > fully compliant with the CRL logic. > > Is it something that might be setup with nginx ? The ability to setup > a list of the individual files somewhere in the nginx configuration > would be optimal. Multiple PEM-encoded CRLs concatenated into a single file should work fine. Note that both Apache/mod_ssl and nginx rely on OpenSSL to load CRL files, and handling is more or less identical. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Apr 22 18:55:36 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 22 Apr 2014 14:55:36 -0400 Subject: nginx proxy for syncml Message-ID: <15b0781fcbe04b0c4f5602b2ffd9c92c.NginxMailingListEnglish@forum.nginx.org> Next to imap and other specific buildin nginx proxies could any of them be compatible with syncml ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249462,249462#msg-249462 From rajanvaradhu at gmail.com Wed Apr 23 08:02:57 2014 From: rajanvaradhu at gmail.com (Varadharajan S) Date: Wed, 23 Apr 2014 13:32:57 +0530 Subject: Req help convert apachi2 config to nginx Message-ID: Hi, In my organization, we are using Apache2 for serving diff web applications such as mediawiki,moodle,redmine,svn,etc.Now we are decided to use Nginx for the performance.but converting is a big head ache and even searched in google and received lots of scripts (apache2nginx,...) but finally no luck. *My environment is:* Ubuntu-12.04 LTS Nginx - 1.4.7-1+precise0 Note: Pls find enclosed one sample mediawiki apache2 virtual host file, can you anybody help me to convert to nginx config file.if i get an idea, so that i can try other config files. such as, 1).what are the modules need to be install and how to ? 2).Other settings such as php related settings.....? Regards, Varad -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mediawiki.conf Type: application/octet-stream Size: 1859 bytes Desc: not available URL: From mdounin at mdounin.ru Wed Apr 23 10:44:07 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Apr 2014 14:44:07 +0400 Subject: Req help convert apachi2 config to nginx In-Reply-To: References: Message-ID: <20140423104407.GD34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 01:32:57PM +0530, Varadharajan S wrote: > Hi, > > In my organization, we are using Apache2 for serving diff web applications > such as mediawiki,moodle,redmine,svn,etc.Now we are decided to use Nginx > for the performance.but converting is a big head ache and even searched in > google and received lots of scripts (apache2nginx,...) but finally no luck. 
> > *My environment is:* > > Ubuntu-12.04 LTS > Nginx - 1.4.7-1+precise0 > > Note: Pls find enclosed one sample mediawiki apache2 virtual host file, can > you anybody help me to convert to nginx config file.if i get an idea, so > that i can try other config files. such as, > > 1).what are the modules need to be install and how to ? > 2).Other settings such as php related settings.....? This howto article may be a good starting point: http://nginx.org/en/docs/http/converting_rewrite_rules.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Apr 23 12:07:42 2014 From: nginx-forum at nginx.us (beatnut) Date: Wed, 23 Apr 2014 08:07:42 -0400 Subject: map module - mass hosting Message-ID: Hello all, I'd like to use map module for my vhost configuration to setup user name in root or fastcgi_pass parameter. At this point i've 300 domains configured and my config look like this: http { server { .......... root /home/someuser; location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/www/someuser/fpm.socket; include fastcgi_params.conf; } ............ } } I'd like to replace this model by using map module like this http { #map with about 300 domains map $http_host $username { example.com someuser; escample2.com someuser2; ....... } server { .......... root /home/$username; include fastcgi.conf; ............ } } fastcgi.conf: location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/www/$username/fpm.socket; include fastcgi.conf; } My question is - Is this a good idea to use map like this ? Every request needs to find out username by $http_host searching through few hundreds of domains. Maybe i'm wrong but it can slow down request processing significantly. Any suggestions ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249475,249475#msg-249475 From mdounin at mdounin.ru Wed Apr 23 12:44:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Apr 2014 16:44:59 +0400 Subject: map module - mass hosting In-Reply-To: References: Message-ID: <20140423124459.GH34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 08:07:42AM -0400, beatnut wrote: > I'd like to use map module for my vhost configuration to setup user name in > root or fastcgi_pass parameter. > > At this point i've 300 domains configured and my config look like this: > > http > { > server { > .......... > root /home/someuser; [...] > I'd like to replace this model by using map module like this > > > http > { > > #map with about 300 domains > > map $http_host $username { > example.com someuser; > escample2.com someuser2; > ....... > } > > server { > .......... > root /home/$username; [...] > My question is - Is this a good idea to use map like this ? Every request > needs to find out username by $http_host searching through few hundreds of > domains. Maybe i'm wrong but it can slow down request processing > significantly. > Any suggestions ? Searching within a map is basically identical to searching for appropriate server{} block, both use the same internal mechanism (ngx_hash). As long as you don't use regular expressions, lookup complexity is O(1). Distinct server{} blocks might be more CPU-efficient due to no need to evaluate variables and dynamically allocate memory for resulting strings on each request. On the other hand, multiple server{} blocks consume memory for each server's configuration, and map{} approach may be more effective if there are many mostly identical server{} blocks. 
-- Maxim Dounin http://nginx.org/ From rajanvaradhu at gmail.com Wed Apr 23 12:55:31 2014 From: rajanvaradhu at gmail.com (Varadharajan S) Date: Wed, 23 Apr 2014 18:25:31 +0530 Subject: Req help convert apachi2 config to nginx In-Reply-To: <20140423104407.GD34696@mdounin.ru> References: <20140423104407.GD34696@mdounin.ru> Message-ID: Hi, Thanks for reply.this won't help my requirement.can you provide some other, relevant as per my request ? Regards, Varad On Wed, Apr 23, 2014 at 4:14 PM, Maxim Dounin wrote: > Hello! > > On Wed, Apr 23, 2014 at 01:32:57PM +0530, Varadharajan S wrote: > > > Hi, > > > > In my organization, we are using Apache2 for serving diff web > applications > > such as mediawiki,moodle,redmine,svn,etc.Now we are decided to use Nginx > > for the performance.but converting is a big head ache and even searched > in > > google and received lots of scripts (apache2nginx,...) but finally no > luck. > > > > *My environment is:* > > > > Ubuntu-12.04 LTS > > Nginx - 1.4.7-1+precise0 > > > > Note: Pls find enclosed one sample mediawiki apache2 virtual host file, > can > > you anybody help me to convert to nginx config file.if i get an idea, so > > that i can try other config files. such as, > > > > 1).what are the modules need to be install and how to ? > > 2).Other settings such as php related settings.....? > > This howto article may be a good starting point: > > http://nginx.org/en/docs/http/converting_rewrite_rules.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 23 13:27:33 2014 From: nginx-forum at nginx.us (beatnut) Date: Wed, 23 Apr 2014 09:27:33 -0400 Subject: map module - mass hosting In-Reply-To: <20140423124459.GH34696@mdounin.ru> References: <20140423124459.GH34696@mdounin.ru> Message-ID: <7ac146715c198ece708fd221c44f7e9b.NginxMailingListEnglish@forum.nginx.org> Thank You for answer. I've additional questions. Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Apr 23, 2014 at 08:07:42AM -0400, beatnut wrote: > > > I'd like to use map module for my vhost configuration to setup user > name in > > root or fastcgi_pass parameter. > > > > At this point i've 300 domains configured and my config look like > this: > > > > http > > { > > server { > > .......... > > root /home/someuser; > > [...] > > > I'd like to replace this model by using map module like this > > > > > > http > > { > > > > #map with about 300 domains > > > > map $http_host $username { > > example.com someuser; > > escample2.com someuser2; > > ....... > > } > > > > server { > > .......... > > root /home/$username; > > [...] > > > My question is - Is this a good idea to use map like this ? Every > request > > needs to find out username by $http_host searching through few > hundreds of > > domains. Maybe i'm wrong but it can slow down request processing > > significantly. > > Any suggestions ? > > Searching within a map is basically identical to searching for > appropriate server{} block, both use the same internal mechanism > (ngx_hash). As long as you don't use regular expressions, > lookup complexity is O(1). 
So using for example: .example.com or example.* have more complexity or it shoud have full list of subdomains for better performance: www.example.com example.com example.somedomain.com > Distinct server{} blocks might be more CPU-efficient due to no need to > > evaluate variables and dynamically allocate memory for resulting > strings on each request. My configuration include one file with server{} per domain. exaple.com.conf example2.conf etc The main improvement i'd like to implement is to have one file with php config like fastcgi.conf above and then include it in every server{} Map module gives me this opportunity. > On the other hand, multiple server{} blocks consume memory for > each server's configuration, and map{} approach may be more > effective if there are many mostly identical server{} blocks. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249475,249481#msg-249481 From mdounin at mdounin.ru Wed Apr 23 13:35:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Apr 2014 17:35:39 +0400 Subject: Req help convert apachi2 config to nginx In-Reply-To: References: <20140423104407.GD34696@mdounin.ru> Message-ID: <20140423133538.GI34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 06:25:31PM +0530, Varadharajan S wrote: > Thanks for reply.this won't help my requirement.can you provide some other, > relevant as per my request ? The relevant idea, if you didn't get it, is a follows: Instead of trying to "convert" something from Apache to nginx, understand what the Apache configuration does, and then write an nginx configuration to do this. In many cases, it might also be a good idea to don't "convert" at all, but rather use both Apache and nginx. By using nginx as a proxy (accelerator, cache, load balancer, to server static files on some domains/locations) get most of the performance benefits, while preserving Apache as a well known application server (as well as preserving some Apache-only things like mod_svn). -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Apr 23 14:34:54 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Apr 2014 18:34:54 +0400 Subject: map module - mass hosting In-Reply-To: <7ac146715c198ece708fd221c44f7e9b.NginxMailingListEnglish@forum.nginx.org> References: <20140423124459.GH34696@mdounin.ru> <7ac146715c198ece708fd221c44f7e9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140423143454.GK34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 09:27:33AM -0400, beatnut wrote: [...] > > Searching within a map is basically identical to searching for > > appropriate server{} block, both use the same internal mechanism > > (ngx_hash). As long as you don't use regular expressions, > > lookup complexity is O(1). > > So using for example: > > .example.com > or > example.* > > have more complexity or it shoud have full list of subdomains for better > performance: > > www.example.com > example.com > example.somedomain.com While wildcards require more work on each lookup, complexity is still O(1). Note that regular expressions != wildcard names. > > Distinct server{} blocks might be more CPU-efficient due to no need to > > > > evaluate variables and dynamically allocate memory for resulting > > strings on each request. > > My configuration include one file with server{} per domain. 
> exaple.com.conf > example2.conf > etc > > The main improvement i'd like to implement is to have one file with php > config like fastcgi.conf above and then include it in every server{} > Map module gives me this opportunity. This is not something I would recommend to do. If you have server{} block per domain, you should have enough data to write configuration without introducing another map ($document_root, $server_name, and so on). Please also see this FAQ article: http://nginx.org/en/docs/faq/variables_in_config.html -- Maxim Dounin http://nginx.org/ From al-nginx at none.at Wed Apr 23 15:34:10 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 23 Apr 2014 17:34:10 +0200 Subject: Old topic ssl private key with passphrase Message-ID: Dear nginx developers. What is necessary that you take hands on the topic 'private key passphrase'? e.g.: http://trac.nginx.org/nginx/ticket/433 [ ] donation [ ] time [ ] leasure [ ] other: ...... Maybe not as much options as in apache httpd https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslpassphrasedialog but at least one. I found this entry in the ml from 2012, is this a possible solution for nginx OSS core? http://marc.info/?t=131494347400003&r=1&w=2 Maybe you can start again a nginx deployments survey as in 01.2013, to see what a year later the new or old goals of the nginx community is. http://mailman.nginx.org/pipermail/nginx/2013-January/037113.html Best regards Aleks From dmiller at metheus.org Wed Apr 23 15:39:02 2014 From: dmiller at metheus.org (David Miller) Date: Wed, 23 Apr 2014 11:39:02 -0400 Subject: Problems with large object sets? Message-ID: <2023ACCE-B05B-4798-AEC2-99192EA11EE7@metheus.org> I?m having trouble with an nginx setup built to serve search engines. Based on the user agent, all bots are served only from cache. We populate the cache with our own set of spiders so we can control the overall load. Total cache size is ~450 GB in ~12 million files. The problem is that about 1/3 of the requests coming in live from the bots are misses, even though the requested page was requested by our spider a mere hour previously. Configured limits should be safe: proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:2500m max_size=800000m inactive=800h; Where should I be looking for why these requests were misses? Thanks, ? David -------------- next part -------------- An HTML attachment was scrubbed... URL: From dmiller at metheus.org Wed Apr 23 15:43:10 2014 From: dmiller at metheus.org (David Miller) Date: Wed, 23 Apr 2014 11:43:10 -0400 Subject: hashing utility? Message-ID: <899D3890-E0E3-4DC9-985E-EEC0002A5B0D@metheus.org> Does anyone have a command line utility to hash a URL to an nginx object name? I?d like to know what file should be the cached version of a page. TIA, ? David From vbart at nginx.com Wed Apr 23 16:06:13 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 23 Apr 2014 20:06:13 +0400 Subject: Old topic ssl private key with passphrase In-Reply-To: References: Message-ID: <2915913.f91y8T63rB@vbart-workstation> On Wednesday 23 April 2014 17:34:10 Aleksandar Lazic wrote: [..] > > Maybe you can start again a nginx deployments survey as in 01.2013, > to see what a year later the new or old goals of the nginx community is. > > http://mailman.nginx.org/pipermail/nginx/2013-January/037113.html > There is one already started: http://nginx.com/blog/think-nginx/ wbr, Valentin V. 
Bartenev From mdounin at mdounin.ru Wed Apr 23 16:19:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Apr 2014 20:19:04 +0400 Subject: Old topic ssl private key with passphrase In-Reply-To: References: Message-ID: <20140423161904.GL34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 05:34:10PM +0200, Aleksandar Lazic wrote: > Dear nginx developers. > > What is necessary that you take hands on the topic 'private key passphrase'? > > e.g.: http://trac.nginx.org/nginx/ticket/433 > > [ ] donation > [ ] time > [ ] leasure > [ ] other: ...... > > Maybe not as much options as in apache httpd > > https://httpd.apache.org/docs/2.4/mod/mod_ssl.html#sslpassphrasedialog > > but at least one. Igor explained his position on this more than once: unless you are actually using something external to enter key passwords, there is no difference with unencrypted keys from security point of view (assuming proper access rights are used for keys). And as far as we know, no or almost no users of Apache's SSLPassPhraseDialog use it this way, most just use "echo 'password'" or something like. So the question is: why do you need it? (I'm aware of at least one more or less valid answer which almost convinced me that we should add it, but it's not about security, but rather about social engineering.) > I found this entry in the ml from 2012, is this a possible solution for > nginx OSS core? > > http://marc.info/?t=131494347400003&r=1&w=2 No. -- Maxim Dounin http://nginx.org/ From sarah at nginx.com Wed Apr 23 16:32:54 2014 From: sarah at nginx.com (Sarah Novotny) Date: Wed, 23 Apr 2014 09:32:54 -0700 Subject: NGINX 2014 survey: I know you have opinions. Message-ID: Hello! As Valentin mentioned in another thread, it?s that time of year again when we want to tune up our strategy; see how NGINX is used; what you the community thinks of us; where we can improve our products, communications or community; and so on. Please take a moment and fill out this survey[1] and let us know how we can be more valuable in your organization?s future. Happy spring. Sarah (on behalf of the Nginx team globally.) [1] https://www.surveymonkey.com/s/L5B6MVH From al-nginx at none.at Wed Apr 23 18:26:05 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 23 Apr 2014 20:26:05 +0200 Subject: Old topic ssl private key with passphrase In-Reply-To: <2915913.f91y8T63rB@vbart-workstation> References: <2915913.f91y8T63rB@vbart-workstation> Message-ID: <5d9aa5d23766436c505d1b2ea2c87c34@none.at> Am 23-04-2014 18:06, schrieb Valentin V. Bartenev: > On Wednesday 23 April 2014 17:34:10 Aleksandar Lazic wrote: > [..] >> >> Maybe you can start again a nginx deployments survey as in 01.2013, >> to see what a year later the new or old goals of the nginx community >> is. >> >> http://mailman.nginx.org/pipermail/nginx/2013-January/037113.html >> > > There is one already started: > http://nginx.com/blog/think-nginx/ Sorry how could I missed this ;-) BR Aleks From al-nginx at none.at Wed Apr 23 18:32:57 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 23 Apr 2014 20:32:57 +0200 Subject: Old topic ssl private key with passphrase In-Reply-To: <20140423161904.GL34696@mdounin.ru> References: <20140423161904.GL34696@mdounin.ru> Message-ID: <7b805e57dedca28a14f872763de8b98d@none.at> Hi. Am 23-04-2014 18:19, schrieb Maxim Dounin: > Hello! > > On Wed, Apr 23, 2014 at 05:34:10PM +0200, Aleksandar Lazic wrote: > >> Dear nginx developers. >> >> What is necessary that you take hands on the topic 'private key >> passphrase'? 
[snipp] > Igor explained his position on this more than once: unless you are > actually using something external to enter key passwords, there is no > difference with unencrypted keys from security point of view > (assuming proper access rights are used for keys). And as far as > we know, no or almost no users of Apache's SSLPassPhraseDialog use > it this way, most just use "echo 'password'" or something like. Full ack ;-/ I also agree that this is a very hard task. > So the question is: why do you need it? If you want to get a specific certificate for some standars. > (I'm aware of at least one more or less valid answer which almost > convinced me that we should add it, but it's not about security, > but rather about social engineering.) Maybe some standards could be a valid reason. https://en.wikipedia.org/wiki/PCI_DSS https://www.pcisecuritystandards.org/pdfs/pci_ssc_quick_guide.pdf e. g. #### 8.2 Employ at least one of these to authenticate all users: password or passphrase; or two-factor authentication (e.g., token devices, smart cards, biometrics, public keys). #### BR Aleks From prevost at adobe.com Wed Apr 23 18:35:05 2014 From: prevost at adobe.com (Edward Prevost) Date: Wed, 23 Apr 2014 18:35:05 +0000 Subject: Naxsi Rules Bug Message-ID: <53c1a216cb1a490e804f32c1ae85494f@BLUPR02MB374.namprd02.prod.outlook.com> NginX Homies, It appears that the $URL:/ construct isn't working. in a set of test scripts we are running we noticed this. Original Rule, placed within the http block in the configuration file. This rule rejected all requests, regardless of match or not. MainRule "rx:\"" "msg:Filtering key_one variable" "mz:$URL:/validate-bad-key-variable|$ARGS_VAR:key_one|$HEADERS_VAR:X-Key-One " "s:$INVALID_KEY:8" id:1666; Modified Rule after binary testing to find the culprit, placed within the http block in the configuration file. This rule functioned as expected, but was not limited in it's URL scope as desired. MainRule "rx:\"" "msg:Filtering key_one variable" "mz:URL|$ARGS_VAR:key_one|$HEADERS_VAR:X-Key-One" "s:$INVALID_KEY:8" id:1666; Has anyone else encountered this bug? Thanks, Ed Description: Description: adobe_logo_web Edward Prevost Platform Security Architect Adobe Systems v. 668230 p. 408.536.6823 m. 509.254.7690 @EdwardPrevost 345 Park Ave San Jose, CA 95011 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 2385 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5496 bytes Desc: not available URL: From nginx-forum at nginx.us Wed Apr 23 20:24:10 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 23 Apr 2014 16:24:10 -0400 Subject: Naxsi Rules Bug In-Reply-To: <53c1a216cb1a490e804f32c1ae85494f@BLUPR02MB374.namprd02.prod.outlook.com> References: <53c1a216cb1a490e804f32c1ae85494f@BLUPR02MB374.namprd02.prod.outlook.com> Message-ID: And the problem is ?? 
you might get better support for naxsi here; https://groups.google.com/forum/#!forum/naxsi-discuss Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249499,249501#msg-249501 From prevost at adobe.com Wed Apr 23 20:28:08 2014 From: prevost at adobe.com (Edward Prevost) Date: Wed, 23 Apr 2014 20:28:08 +0000 Subject: Naxsi Rules Bug In-Reply-To: References: <53c1a216cb1a490e804f32c1ae85494f@BLUPR02MB374.namprd02.prod.outlook.com> Message-ID: <96af919cc7ae4bf8a3e582bc259b939d@BLUPR02MB374.namprd02.prod.outlook.com> The problem is the $URL:/ syntatx isn't working. Thanks, I move it to that forum. Edward Prevost | Platform Security | Adobe | v. 668230 | p. 408.536.6823 | m. 509.254.7690 | @EdwardPrevost -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of itpp2012 Sent: Wednesday, April 23, 2014 1:24 PM To: nginx at nginx.org Subject: Re: Naxsi Rules Bug And the problem is ?? you might get better support for naxsi here; https://groups.google.com/forum/#!forum/naxsi-discuss Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249499,249501#msg-249501 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5496 bytes Desc: not available URL: From reallfqq-nginx at yahoo.fr Wed Apr 23 21:03:41 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 23 Apr 2014 23:03:41 +0200 Subject: Old topic ssl private key with passphrase In-Reply-To: <7b805e57dedca28a14f872763de8b98d@none.at> References: <20140423161904.GL34696@mdounin.ru> <7b805e57dedca28a14f872763de8b98d@none.at> Message-ID: Igor and Maxim positions, I suppose, are based on the fact that, unless using an external system to authenticate the user of a certificate, storing both certificate + passphrase on thel same system, accessed by the same user (the one running nginx which loads the certificate and needs to decrypt it) has the same level of security that dealing with an unencrypted certificate and provide a false sense of securilty. Isolation of independent parts of a security system is a very basic notion of security based on common sense. The standards you quote are based on those. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From JHect at shieldsbag.com Wed Apr 23 21:58:48 2014 From: JHect at shieldsbag.com (Hect, Jason) Date: Wed, 23 Apr 2014 21:58:48 +0000 Subject: Disable Reverse Proxy for Failover Message-ID: <007B4FA3B183D546B24115FE6CA15B011D4C9EB6@EXCHANGE.shields.com> I have NGINX set up as a reverse proxy, which works fine. In the event my web server goes down, I would like NGINX to act as the failover, and serve a local page with just some basic company information and a "check back soon" type message. Trying to do this, I think I'm getting stuck in an infinite loop and always end up with a 502 Gateway error. For testing, I was just trying to get it working with generic load balancing, going back and forth between my web server and the local NGINX "Check Back Soon" page. 
My config looks something like this: upstream Server_Test { server 123.456.789.001; server 127.0.0.1; } server { listen 123.456.789.002; server_name www.test.com; location / { proxy_pass http://Server_Test; proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504; proxy_redirect off; proxy_buffering off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } server { listen 127.0.0.1; server_name www.test.com; location / { root html; index index.html index.htm; } } Any reason why this shouldn't work? Thanks! Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From yanghq at neusoft.com Thu Apr 24 03:04:05 2014 From: yanghq at neusoft.com (yanghq) Date: Thu, 24 Apr 2014 11:04:05 +0800 Subject: help sendmsg() failed in error log Message-ID: <53587F25.3040005@neusoft.com> hello when test my reverse proxy server, I found "sendmsg() failed (9: Bad file descriptor) while reading response header from upstream" in error.log. Is there any clue about it? --------------------------------------------------------------------------------------------------- Confidentiality Notice: The information contained in this e-mail and any accompanying attachment(s) is intended only for the use of the intended recipient and may be confidential and/or privileged of Neusoft Corporation, its subsidiaries and/or its affiliates. If any reader of this communication is not the intended recipient, unauthorized use, forwarding, printing, storing, disclosure or copying is strictly prohibited, and may be unlawful.If you have received this communication in error,please immediately notify the sender by return e-mail, and delete the original message and all copies from your system. Thank you. --------------------------------------------------------------------------------------------------- From source.rar at gmail.com Thu Apr 24 06:49:31 2014 From: source.rar at gmail.com (Anselm Meyn) Date: Thu, 24 Apr 2014 09:49:31 +0300 Subject: Pass filename and type to backend server Message-ID: Hi, I am trying to upload a file to an nginx server and then have it passed on to the backend server after upload completes. I am able to set it up as described here (https://coderwall.com/p/swgfvw) and see that the file is being uploaded. However I am not able to get the file name and type to my backend server. Could someone please tell me if I am missing something here? Any help is much appreciated. BTW this question is also at https://superuser.com/questions/745300/nginx-pass-filetype-to-backend-server . Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Apr 24 06:51:51 2014 From: nginx-forum at nginx.us (beatnut) Date: Thu, 24 Apr 2014 02:51:51 -0400 Subject: map module - mass hosting In-Reply-To: <20140423143454.GK34696@mdounin.ru> References: <20140423143454.GK34696@mdounin.ru> Message-ID: <05e1d52f66fd91b7f8bc1e0288c0c209.NginxMailingListEnglish@forum.nginx.org> Thank You for explanation and advise. Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Apr 23, 2014 at 09:27:33AM -0400, beatnut wrote: > > [...] > > > > Searching within a map is basically identical to searching for > > > appropriate server{} block, both use the same internal mechanism > > > (ngx_hash). As long as you don't use regular expressions, > > > lookup complexity is O(1). 
> > > > So using for example: > > > > .example.com > > or > > example.* > > > > have more complexity or it shoud have full list of subdomains for > better > > performance: > > > > www.example.com > > example.com > > example.somedomain.com > > While wildcards require more work on each lookup, complexity is > still O(1). Note that regular expressions != wildcard names. > > > > Distinct server{} blocks might be more CPU-efficient due to no > need to > > > > > > evaluate variables and dynamically allocate memory for resulting > > > strings on each request. > > > > My configuration include one file with server{} per domain. > > exaple.com.conf > > example2.conf > > etc > > > > The main improvement i'd like to implement is to have one file with > php > > config like fastcgi.conf above and then include it in every > server{} > > Map module gives me this opportunity. > > This is not something I would recommend to do. If you have > server{} block per domain, you should have enough data to write > configuration without introducing another map ($document_root, > $server_name, and so on). > > Please also see this FAQ article: > > http://nginx.org/en/docs/faq/variables_in_config.html > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249475,249513#msg-249513 From vbart at nginx.com Thu Apr 24 08:35:25 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 24 Apr 2014 12:35:25 +0400 Subject: help sendmsg() failed in error log In-Reply-To: <53587F25.3040005@neusoft.com> References: <53587F25.3040005@neusoft.com> Message-ID: <3138098.bAVeNRL2Vt@vbart-workstation> On Thursday 24 April 2014 11:04:05 yanghq wrote: > hello > > when test my reverse proxy server, I found "sendmsg() failed (9: > Bad file descriptor) while reading response header from upstream" in > error.log. > > Is there any clue about it? Since nginx doesn't use sendmsg() for upstream servers, it's very likely that the clue is somewhere around 3rd-party modules. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Thu Apr 24 08:54:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Apr 2014 12:54:56 +0400 Subject: Old topic ssl private key with passphrase In-Reply-To: <7b805e57dedca28a14f872763de8b98d@none.at> References: <20140423161904.GL34696@mdounin.ru> <7b805e57dedca28a14f872763de8b98d@none.at> Message-ID: <20140424085455.GN34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 08:32:57PM +0200, Aleksandar Lazic wrote: > Hi. > > Am 23-04-2014 18:19, schrieb Maxim Dounin: > >Hello! > > > >On Wed, Apr 23, 2014 at 05:34:10PM +0200, Aleksandar Lazic wrote: > > > >>Dear nginx developers. > >> > >>What is necessary that you take hands on the topic 'private key > >>passphrase'? > > [snipp] > > >Igor explained his position on this more than once: unless you are > >actually using something external to enter key passwords, there is no > >difference with unencrypted keys from security point of view > >(assuming proper access rights are used for keys). And as far as > >we know, no or almost no users of Apache's SSLPassPhraseDialog use > >it this way, most just use "echo 'password'" or something like. > > Full ack ;-/ > > I also agree that this is a very hard task. > > >So the question is: why do you need it? > > If you want to get a specific certificate for some standars. Well, that's not about security either, and completely non-technical. 
I've seen "certifications" requiring to use software with known remote code execution vulnerabilities, and I'm quite sceptical about doing something just because of certification requirements, without understanding the reasons behind them (if any). Anyway, if you know a standard which requires storing of keys in password-protected forms only - please point it out. > >(I'm aware of at least one more or less valid answer which almost > >convinced me that we should add it, but it's not about security, > >but rather about social engineering.) > > Maybe some standards could be a valid reason. > > https://en.wikipedia.org/wiki/PCI_DSS > > https://www.pcisecuritystandards.org/pdfs/pci_ssc_quick_guide.pdf > > e. g. > > #### > 8.2 > Employ at least one of these to authenticate all users: password or > passphrase; or two-factor > authentication (e.g., token devices, smart cards, biometrics, public keys). > #### This doesn't look related at all. It's about authentication of users, not about storage of private keys. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Apr 24 10:26:37 2014 From: nginx-forum at nginx.us (Lintu) Date: Thu, 24 Apr 2014 06:26:37 -0400 Subject: worker process exited on signal 11 Message-ID: Today is the third time within the last week that the nginx server crashed down, causing the website to only return "502 Bad Gateway" errors. In the error log, I can find the following line: 2014/04/24 05:11:53 [alert] 32094#0: worker process 32095 exited on signal 11 I have searched for other users having the same error message, but usually it was caused by some custom plugin they installed. I don't use any plugins (by that I mean I use debian's stable repository of the "nginx-full" package. I'm not sure if they deliver some custom plugins by default). What could be causing the issue for me? 
#nginx -v nginx version: nginx/1.2.1 #nginx -V nginx version: nginx/1.2.1 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-pcre-jit --with-debug --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-auth-pam --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-echo --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upstream-fair --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-dav-ext-module #uname -a Linux [hostname removed] 3.2.0-4-amd64 #1 SMP Debian 3.2.54-2 x86_64 GNU/Linux nginx.conf user www-data; worker_processes 8; pid /var/run/nginx.pid; events { worker_connections 16384; # multi_accept on; } worker_rlimit_nofile 32768; http { ## # Basic Settings ## sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 5; types_hash_max_size 2048; server_tokens off; server_names_hash_bucket_size 64; server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_buffers 16 8k; gzip_comp_level 6; gzip_http_version 1.1; gzip_min_length 10; gzip_types text/plain text/css image/png image/gif image/jpeg application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript image/x-icon; gzip_vary on; gzip_proxied any; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; ## # Proxy Settings ## proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; include /etc/nginx/conf.d/*.conf; # there is no file in that dir include /etc/nginx/sites-enabled/*; # only some files with pretty basic server{} blocks } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249524,249524#msg-249524 From al-nginx at none.at Thu Apr 24 11:03:21 2014 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 24 Apr 2014 13:03:21 +0200 Subject: Old topic ssl private key with passphrase In-Reply-To: <20140424085455.GN34696@mdounin.ru> References: <20140423161904.GL34696@mdounin.ru> <7b805e57dedca28a14f872763de8b98d@none.at> <20140424085455.GN34696@mdounin.ru> Message-ID: <47fcb0366c4a72a7b1e6a6322fecd2c8@none.at> Hi. Am 24-04-2014 10:54, schrieb Maxim Dounin: > Hello! > > On Wed, Apr 23, 2014 at 08:32:57PM +0200, Aleksandar Lazic wrote: > >> Hi. >> >> Am 23-04-2014 18:19, schrieb Maxim Dounin: [snipp] >> I also agree that this is a very hard task. >> >> >So the question is: why do you need it? >> >> If you want to get a specific certificate for some standars. > > Well, that's not about security either, and completely > non-technical. 
> > I've seen "certifications" requiring to use software with known > remote code execution vulnerabilities, and I'm quite sceptical > about doing something just because of certification requirements, > without understanding the reasons behind them (if any). > > Anyway, if you know a standard which requires storing of > keys in password-protected forms only - please point it out. Okay. BR Aleks From mdounin at mdounin.ru Thu Apr 24 11:03:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Apr 2014 15:03:59 +0400 Subject: worker process exited on signal 11 In-Reply-To: References: Message-ID: <20140424110359.GR34696@mdounin.ru> Hello! On Thu, Apr 24, 2014 at 06:26:37AM -0400, Lintu wrote: > Today is the third time within the last week that the nginx server crashed > down, causing the website to only return "502 Bad Gateway" errors. In the > error log, I can find the following line: > > 2014/04/24 05:11:53 [alert] 32094#0: worker process 32095 exited on signal > 11 > > I have searched for other users having the same error message, but usually > it was caused by some custom plugin they installed. I don't use any plugins > (by that I mean I use debian's stable repository of the "nginx-full" > package. I'm not sure if they deliver some custom plugins by default). As per "nginx -V" provided, there are at least 4 3rd party modules compiled in: > --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-auth-pam > --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-echo > --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-upstream-fair > --add-module=/tmp/buildd/nginx-1.2.1/debian/modules/nginx-dav-ext-module Besides that, you are using 1.2.1 which is a long obsolete version from a legacy 1.2.x branch. It is recommended to upgrade at least to latest stable release, see http://nginx.org/en/download.html. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Apr 24 11:22:08 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Apr 2014 15:22:08 +0400 Subject: Disable Reverse Proxy for Failover In-Reply-To: <007B4FA3B183D546B24115FE6CA15B011D4C9EB6@EXCHANGE.shields.com> References: <007B4FA3B183D546B24115FE6CA15B011D4C9EB6@EXCHANGE.shields.com> Message-ID: <20140424112208.GS34696@mdounin.ru> Hello! On Wed, Apr 23, 2014 at 09:58:48PM +0000, Hect, Jason wrote: > I have NGINX set up as a reverse proxy, which works fine. In > the event my web server goes down, I would like NGINX to act as > the failover, and serve a local page with just some basic > company information and a "check back soon" type message. > Trying to do this, I think I'm getting stuck in an infinite loop > and always end up with a 502 Gateway error. For testing, I was > just trying to get it working with generic load balancing, going > back and forth between my web server and the local NGINX "Check > Back Soon" page. My config looks something like this: [...] > Any reason why this shouldn't work? Looking into logs might be a good idea. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Apr 24 13:14:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Apr 2014 17:14:04 +0400 Subject: nginx-1.6.0 Message-ID: <20140424131403.GV34696@mdounin.ru> Changes with nginx 1.6.0 24 Apr 2014 *) 1.6.x stable branch. 
-- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Thu Apr 24 13:15:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Apr 2014 17:15:31 +0400 Subject: nginx-1.7.0 Message-ID: <20140424131531.GZ34696@mdounin.ru> Changes with nginx 1.7.0 24 Apr 2014 *) Feature: backend SSL certificate verification. *) Feature: support for SNI while working with SSL backends. *) Feature: the $ssl_server_name variable. *) Feature: the "if" parameter of the "access_log" directive. -- Maxim Dounin http://nginx.org/en/donation.html From JHect at shieldsbag.com Thu Apr 24 14:44:57 2014 From: JHect at shieldsbag.com (Hect, Jason) Date: Thu, 24 Apr 2014 14:44:57 +0000 Subject: Disable Reverse Proxy for Failover Message-ID: <007B4FA3B183D546B24115FE6CA15B011D4CC031@EXCHANGE.shields.com> I turned on all the logging error_log logs/error.log; error_log logs/error.log notice; error_log logs/error.log info; Nothing shows up when I try this. I just get a 502 Bad Gateway response in my browser. Thanks, Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Thu Apr 24 15:16:45 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 24 Apr 2014 11:16:45 -0400 Subject: [nginx-announce] nginx-1.6.0 In-Reply-To: <20140424131409.GW34696@mdounin.ru> References: <20140424131409.GW34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.6.0 for Windows http://goo.gl/aWPxCn (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Thu, Apr 24, 2014 at 9:14 AM, Maxim Dounin wrote: > Changes with nginx 1.6.0 24 Apr > 2014 > > *) 1.6.x stable branch. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kworthington at gmail.com Thu Apr 24 15:21:59 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 24 Apr 2014 11:21:59 -0400 Subject: [nginx-announce] nginx-1.7.0 In-Reply-To: <20140424131535.GA34696@mdounin.ru> References: <20140424131535.GA34696@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.0 for Windows http://goo.gl/rYXbPx (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Thu, Apr 24, 2014 at 9:15 AM, Maxim Dounin wrote: > Changes with nginx 1.7.0 24 Apr > 2014 > > *) Feature: backend SSL certificate verification. > > *) Feature: support for SNI while working with SSL backends. > > *) Feature: the $ssl_server_name variable. 
> > *) Feature: the "if" parameter of the "access_log" directive. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hari at cpacket.com Thu Apr 24 16:40:38 2014 From: hari at cpacket.com (Hari Miriyala) Date: Thu, 24 Apr 2014 09:40:38 -0700 Subject: Local and Remote User Authentication Message-ID: Hi All, We have nginx1.4.7 with ngx_http_auth_request_module and ngx_http_auth_basic_module besides few other modules. There are few other modules also, but have mentioned above two modules only due to relevance to this discussion. Our application requires to have local user (meaning - store user name and passwd and authenticate at our server) and remote user (meaning - delegate authentication to remote servers like Radius/TACACS+, in this case, our application is aware of only user name and which remote server to send authentication request). The goal is to configure nginx as following: 1. Configure nginx to prompt username/passwd 2. Once user enters username and passwd, get access to these fields and pass to our web application which looks at local database and decides whether user is local or remote. 3. If user is local, authenticate using ngx_http_auth_basic_module (htpasswd style) 4. If user is remote, delegate authentication to remote server using ngx_http_auth_request_module 5. Once authentication is successful (either in step 3 or step 4), pass control back to our application for some book-keeping 6. Let authenticated user access application Any suggestions how do we configure nginx to achieve above? please share your thoughts/ideas/sample configs etc. Regards, Hari -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeroen.ooms at stat.ucla.edu Thu Apr 24 18:07:26 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Thu, 24 Apr 2014 11:07:26 -0700 Subject: rate limit by method Message-ID: Is there any way I can impose a rate limit on a location or back-end by HTTP method? Specifically I would like to limit the number of POST requests that a single client IP can perform within a given timespan. From mdounin at mdounin.ru Thu Apr 24 18:29:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Apr 2014 22:29:26 +0400 Subject: rate limit by method In-Reply-To: References: Message-ID: <20140424182925.GM34696@mdounin.ru> Hello! On Thu, Apr 24, 2014 at 11:07:26AM -0700, Jeroen Ooms wrote: > Is there any way I can impose a rate limit on a location or back-end > by HTTP method? Specifically I would like to limit the number of POST > requests that a single client IP can perform within a given timespan. 
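For reference, a minimal sketch of one way to do this with the stock limit_req module; the zone name, rate, burst and location are illustrative assumptions, not taken from this thread. limit_req does not count requests whose key evaluates to an empty string, so a map on $request_method can restrict the limit to POST requests only:

    # in the http block
    map $request_method $post_limit_key {
        default  "";
        POST     $binary_remote_addr;
    }
    limit_req_zone $post_limit_key zone=post_per_ip:10m rate=1r/s;

    # in the location to be protected
    limit_req zone=post_per_ip burst=5;

GET and HEAD traffic is unaffected because its key is empty.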
I believe more or less the same question was discussed a couple of weeks ago: http://mailman.nginx.org/pipermail/nginx/2014-April/043034.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Apr 24 19:43:47 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 24 Apr 2014 15:43:47 -0400 Subject: [ANN] Windows nginx 1.7.1.1 Snowman Message-ID: 21:28 24-4-2014 nginx 1.7.1.1 Snowman Based on nginx 1.7.1 (24-4-2014) with; + lua-upstream-nginx-module v0.1 (upgraded 24-4-2014) + Streaming with nginx-rtmp-module, v1.1.4 (upgraded 24-4-2014) + New development tree nginx export 1.7 + Naxsi WAF v0.53-1 (upgraded 17-4-2014) + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Additional specifications: see 'Feature list' Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249574,249574#msg-249574 From nginx-forum at nginx.us Thu Apr 24 21:29:30 2014 From: nginx-forum at nginx.us (FlappySocks) Date: Thu, 24 Apr 2014 17:29:30 -0400 Subject: Nginx Websocket proxy dropping frames Message-ID: <816942074e419d080e179e1c39da38f4.NginxMailingListEnglish@forum.nginx.org> Connecting to my websocket server directly works (Chrome or Firefox). Connecting via the Nginx websocket proxy connects, but drops frames. Here is an example of the JSON messages: <-- {"login" : { "username": "user", "password" : "pass"}} --> {"loginReply" : { "state": "ok"}} <-- {"someSetting1" : { "something": "something"}} <-- {"someSetting2" : { "something": "something"}} **DROPPED** <-- {"someSetting3" : { "something": "something"}} **DROPPED** Those last three messages are sent immediately after login, but the last two don't make it to the websocket server (~90% of the time). Subsequent messages, work fine, as if nothing was missing. I have tried Nginx 1.4.7, 1.5.13 & 1.6 location /websocket { proxy_pass http://localhost:8001; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; proxy_read_timeout 86400; } I have tried proxy_buffering off and on. Anything else I should try? The problem occurs ~30% of the time on my powerful x86 machine, and ~90% on my two less powerful ARM machines (one is a Raspberry Pi, and the other a much faster dual core). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249575,249575#msg-249575 From reallfqq-nginx at yahoo.fr Thu Apr 24 22:12:55 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 25 Apr 2014 00:12:55 +0200 Subject: Nginx Websocket proxy dropping frames In-Reply-To: <816942074e419d080e179e1c39da38f4.NginxMailingListEnglish@forum.nginx.org> References: <816942074e419d080e179e1c39da38f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Logs? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Apr 24 22:34:19 2014 From: nginx-forum at nginx.us (Lintu) Date: Thu, 24 Apr 2014 18:34:19 -0400 Subject: worker process exited on signal 11 In-Reply-To: <20140424110359.GR34696@mdounin.ru> References: <20140424110359.GR34696@mdounin.ru> Message-ID: Thanks, I'll try to get our servers upgraded in the next few days :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249524,249577#msg-249577 From jeroen.ooms at stat.ucla.edu Fri Apr 25 04:09:58 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Thu, 24 Apr 2014 21:09:58 -0700 Subject: rate limit by method In-Reply-To: <20140424182925.GM34696@mdounin.ru> References: <20140424182925.GM34696@mdounin.ru> Message-ID: On Thu, Apr 24, 2014 at 11:29 AM, Maxim Dounin wrote: > I believe more or less the same question was discussed a couple of > weeks ago: > > http://mailman.nginx.org/pipermail/nginx/2014-April/043034.html Thank you! I must have missed that one. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Apr 25 05:03:26 2014 From: nginx-forum at nginx.us (George) Date: Fri, 25 Apr 2014 01:03:26 -0400 Subject: Nginx 1.7.0 failed make with Phusion Passenger ? Message-ID: Anyone experience this problem ? I have Nginx 1.5.13 working fine with Phusion Passenger 4.0.37 source compile. But trying to update Nginx from 1.5.13 to 1.7.0 fails at make stage. I tried both Phusion Passenger 4.0.37 and 4.0.41 and it fails. Working Nginx 1.5.13 configuration nginx -V nginx version: nginx/1.5.13 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) TLS SNI support enabled configure arguments: --sbin-path=/usr/local/sbin --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_ssl_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_sub_module --with-http_addition_module --with-http_image_filter_module --with-http_secure_link_module --with-http_flv_module --with-http_realip_module --with-openssl-opt=enable-tlsext --add-module=../ngx-fancyindex-ngx-fancyindex --add-module=../ngx_cache_purge-2.1 --add-module=../headers-more-nginx-module-0.25 --add-module=../nginx-accesskey-2.0.3 --add-module=../nginx-http-concat-master --with-http_dav_module --add-module=../nginx-dav-ext-module-0.0.3 --add-module=/usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.37/ext/nginx --with-openssl=../openssl-1.0.1g --with-libatomic --with-pcre=../pcre-8.35 --with-pcre-jit --with-http_spdy_module --add-module=../ngx_pagespeed-release-1.7.30.4-beta Now when updating to Nginx 1.7.0 fails at this point with both Phusion Passenger 4.0.37 and 4.0.41 passenger -v Phusion Passenger version 4.0.41 error message -o objs/addon/nginx/StaticContentHandler.o \ /usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.41/ext/nginx/StaticContentHandler.c /usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.41/ext/nginx/StaticContentHandler.c: In function 'passenger_static_content_handler': /usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.41/ext/nginx/StaticContentHandler.c:72: error: 'ngx_http_request_t' has no member named 'zero_in_uri' make[1]: *** [objs/addon/nginx/StaticContentHandler.o] Error 1 make[1]: Leaving directory `/svr-setup/nginx-1.7.0' make: *** [build] Error 2 ************************************************* Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249586,249586#msg-249586 From nginx-forum at nginx.us Fri Apr 25 05:17:56 2014 From: nginx-forum at nginx.us (George) Date: Fri, 25 Apr 2014 01:17:56 -0400 Subject: Nginx 1.7.0 failed make with 
Phusion Passenger ? In-Reply-To: References: Message-ID: <2ab6c04a72774ca55fba99e728e3e082.NginxMailingListEnglish@forum.nginx.org> grep -C10 zero_in_uri /usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.41/ext/nginx/StaticContentHandler.c if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD|NGX_HTTP_POST))) { return NGX_HTTP_NOT_ALLOWED; } if (r->uri.data[r->uri.len - 1] == '/') { return NGX_DECLINED; } #if (PASSENGER_NGINX_MINOR_VERSION == 8 && PASSENGER_NGINX_MICRO_VERSION < 38) || \ (PASSENGER_NGINX_MINOR_VERSION == 7 && PASSENGER_NGINX_MICRO_VERSION < 66) if (r->zero_in_uri) { return NGX_DECLINED; } #endif log = r->connection->log; ngx_log_debug1(NGX_LOG_DEBUG_HTTP, log, 0, "http filename: \"%s\"", filename->data); clcf = ngx_http_get_module_loc_conf(r, ngx_http_core_module); Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249586,249587#msg-249587 From nginx-forum at nginx.us Fri Apr 25 05:34:41 2014 From: nginx-forum at nginx.us (George) Date: Fri, 25 Apr 2014 01:34:41 -0400 Subject: Nginx 1.7.0 failed make with Phusion Passenger ? In-Reply-To: <2ab6c04a72774ca55fba99e728e3e082.NginxMailingListEnglish@forum.nginx.org> References: <2ab6c04a72774ca55fba99e728e3e082.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8553022cad6dbd9e0759b7784b4c5243.NginxMailingListEnglish@forum.nginx.org> removing these lines in /usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.41/ext/nginx/StaticContentHandler.c seem to have allowed it to compile properly if (r->zero_in_uri) { return NGX_DECLINED; } nginx -V nginx version: nginx/1.7.0 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-4) (GCC) TLS SNI support enabled configure arguments: --sbin-path=/usr/local/sbin --conf-path=/usr/local/nginx/conf/nginx.conf --with-http_ssl_module --with-http_gzip_static_module --with-http_stub_status_module --with-http_sub_module --with-http_addition_module --with-http_image_filter_module --with-http_secure_link_module --with-http_flv_module --with-http_realip_module --with-openssl-opt=enable-tlsext --add-module=../ngx-fancyindex-ngx-fancyindex --add-module=../ngx_cache_purge-2.1 --add-module=../headers-more-nginx-module-0.25 --add-module=../nginx-accesskey-2.0.3 --add-module=../nginx-http-concat-master --with-http_dav_module --add-module=../nginx-dav-ext-module-0.0.3 --add-module=/usr/local/rvm/gems/ruby-2.1.1/gems/passenger-4.0.41/ext/nginx --with-openssl=../openssl-1.0.1g --with-libatomic --with-pcre=../pcre-8.34 --with-pcre-jit --with-http_spdy_module --add-module=../ngx_pagespeed-release-1.7.30.4-beta Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249586,249588#msg-249588 From yatiohi at ideopolis.gr Fri Apr 25 07:25:21 2014 From: yatiohi at ideopolis.gr (Christos Trochalakis) Date: Fri, 25 Apr 2014 10:25:21 +0300 Subject: nginx-1.6.0 In-Reply-To: <20140424131403.GV34696@mdounin.ru> References: <20140424131403.GV34696@mdounin.ru> Message-ID: <20140425072521.GA12026@luke.ws.skroutz.gr> On Thu, Apr 24, 2014 at 05:14:04PM +0400, Maxim Dounin wrote: >Changes with nginx 1.6.0 24 Apr 2014 > > *) 1.6.x stable branch. > FYI, nginx 1.6.0-1 has been uploaded to debian sid (unstable). When it migrates to testing, we will also upload it to wheezy-backports. -chris From rva at onvaoo.com Fri Apr 25 07:35:23 2014 From: rva at onvaoo.com (Ronald Van Assche) Date: Fri, 25 Apr 2014 09:35:23 +0200 Subject: nginx-1.6.0 In-Reply-To: <20140424131403.GV34696@mdounin.ru> References: <20140424131403.GV34696@mdounin.ru> Message-ID: no Freebsd port ? Thanks. Le 24 avr. 2014 ? 
15:14, Maxim Dounin a ?crit : > Changes with nginx 1.6.0 24 Apr 2014 > > *) 1.6.x stable branch. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Fri Apr 25 07:59:11 2014 From: nginx-forum at nginx.us (FooBarWidget) Date: Fri, 25 Apr 2014 03:59:11 -0400 Subject: Nginx 1.7.0 failed make with Phusion Passenger ? In-Reply-To: References: Message-ID: I'm one of the Phusion Passenger authors. For Phusion Passenger support, please use the Phusion Passenger discussion forum, not the Nginx forum. It's here: https://groups.google.com/forum/#!forum/phusion-passenger This is a compilation problem due to some old code which tries to support Nginx 0.7. I've just fixed this, and the fix will be available in the next version, 4.0.42. https://github.com/phusion/passenger/commit/5c1a24d06a0ea5c3f6982b4ebe6322c4eb818601 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249586,249591#msg-249591 From nginx-forum at nginx.us Fri Apr 25 08:32:15 2014 From: nginx-forum at nginx.us (George) Date: Fri, 25 Apr 2014 04:32:15 -0400 Subject: Nginx 1.7.0 failed make with Phusion Passenger ? In-Reply-To: References: Message-ID: <5b1e84459cc52bcde5a0086a7478b03d.NginxMailingListEnglish@forum.nginx.org> thanks for the reply and fix :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249586,249595#msg-249595 From mdounin at mdounin.ru Fri Apr 25 12:56:00 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 25 Apr 2014 16:56:00 +0400 Subject: nginx-1.6.0 In-Reply-To: References: <20140424131403.GV34696@mdounin.ru> Message-ID: <20140425125600.GR34696@mdounin.ru> Hello! On Fri, Apr 25, 2014 at 09:35:23AM +0200, Ronald Van Assche wrote: > no Freebsd port ? I believe Sergey Osokin (port maintainer) will update www/nginx and www/nginx-devel ports shortly. If you can't wait, just change the version in Makefile yourself (and update distinfo accordingly, or just use "make makesum"). > > Thanks. > > Le 24 avr. 2014 ? 15:14, Maxim Dounin a ?crit : > > > Changes with nginx 1.6.0 24 Apr 2014 > > > > *) 1.6.x stable branch. > > > > > > -- > > Maxim Dounin > > http://nginx.org/en/donation.html > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Apr 25 13:07:56 2014 From: nginx-forum at nginx.us (roinacio) Date: Fri, 25 Apr 2014 09:07:56 -0400 Subject: Rewrite with strange arguments Message-ID: <747ad63c2a92bdebadc478288eed73f4.NginxMailingListEnglish@forum.nginx.org> Hi, I need to do some rewrite rules litke this myurl.com/?mpinvite=113116712&host=An%C3%B4nimo to myurl.com/?mpinvite=113116712&host=Anonimo or myurl.com/?mpinvite=113116712&host=An?nimo It is possible? How can I do that? Thank you. 
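One possible approach, sketched under the assumption that only a known set of encoded values has to be normalized: rewrite operates on the normalized URI, not on the query string, and $arg_* values stay percent-encoded, so plain configuration cannot transliterate arbitrary UTF-8, but a map on $arg_host plus a redirect can handle specific values.

    # in the http block
    map $arg_host $host_ascii {
        default         "";
        "An%C3%B4nimo"  "Anonimo";
    }

    # in the server block for myurl.com
    if ($host_ascii) {
        return 302 /?mpinvite=$arg_mpinvite&host=$host_ascii;
    }

Each encoded value that needs rewriting gets its own map entry; anything not listed passes through untouched.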
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249630,249630#msg-249630 From pug at felsing.net Fri Apr 25 15:22:33 2014 From: pug at felsing.net (Christian Felsing) Date: Fri, 25 Apr 2014 17:22:33 +0200 Subject: http://forum.nginx.org/read.php?29,246309,246309#msg-246309 Message-ID: <535A7DB9.2050503@felsing.net> Hello, are there plans to incorporate that patch http://forum.nginx.org/read.php?29,246309,246309#msg-246309 into Nginx? I would like to use Nginx as IMAP/POP3 with TLS client certificate authentication. At this time Nginx mail module does not support that. best regards Christian Felsing From mdounin at mdounin.ru Fri Apr 25 15:30:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 25 Apr 2014 19:30:58 +0400 Subject: http://forum.nginx.org/read.php?29,246309,246309#msg-246309 In-Reply-To: <535A7DB9.2050503@felsing.net> References: <535A7DB9.2050503@felsing.net> Message-ID: <20140425153058.GX34696@mdounin.ru> Hello! On Fri, Apr 25, 2014 at 05:22:33PM +0200, Christian Felsing wrote: > Hello, > > are there plans to incorporate that patch > http://forum.nginx.org/read.php?29,246309,246309#msg-246309 into > Nginx? > > I would like to use Nginx as IMAP/POP3 with TLS client > certificate authentication. At this time Nginx mail module does > not support that. Latest work on this seems to be in this thread: http://mailman.nginx.org/pipermail/nginx-devel/2014-March/005067.html http://mailman.nginx.org/pipermail/nginx-devel/2014-April/005179.html The code yet to be improved though. -- Maxim Dounin http://nginx.org/ From thuban at yeuxdelibad.net Fri Apr 25 16:06:34 2014 From: thuban at yeuxdelibad.net (Thuban) Date: Fri, 25 Apr 2014 18:06:34 +0200 Subject: use subdirectories instead of subdomains Message-ID: <20140425160634.GB7327@Lothlorien> Hello, I am trying to use subdirectories instead of subdomains because my host doesn't support subdomains. So, instead of having : - http://owncloud.example.com - http://wordpress.example.com - http://anyservice.example.com I would like to have : - http://example.com/owncloud - http://example.com/wordpress - http://example.com/anyservice I tried to do such things with `location` rules and `alias`.Example : root /var/www/mysite; location /owncloud { alias /var/www/mysite/owncloud; include /etc/nginx/conf.d/owncloud.conf; } , but services like owncloud need `location` rules too, so I finally have "location /example is outside location" errors. How can I configure nginx for this? Regards, -- ,--. : /` ) Thuban | `-' PubKey : http://yeuxdelibad.net/Divers/thuban.pub \_ KeyID : 0x54CD2F2F Envoy? ? partir de mon serveur auto-h?berg? -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From jim at ohlste.in Fri Apr 25 16:53:54 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 25 Apr 2014 12:53:54 -0400 Subject: use subdirectories instead of subdomains In-Reply-To: <20140425160634.GB7327@Lothlorien> References: <20140425160634.GB7327@Lothlorien> Message-ID: <535A9322.5020806@ohlste.in> Hello, On 4/25/14, 12:06 PM, Thuban wrote: > Hello, > I am trying to use subdirectories instead of subdomains because my host > doesn't support subdomains. First suggestion is get a better host. 
> > So, instead of having : > > - http://owncloud.example.com > - http://wordpress.example.com > - http://anyservice.example.com > > I would like to have : > - http://example.com/owncloud > - http://example.com/wordpress > - http://example.com/anyservice > > I tried to do such things with `location` rules and `alias`.Example : > > root /var/www/mysite; > location /owncloud { > alias /var/www/mysite/owncloud; > include /etc/nginx/conf.d/owncloud.conf; > } > > , but services like owncloud need `location` rules too, so I finally > have "location /example is outside location" errors. > > How can I configure nginx for this? Why are you using an alias here? If the root is /var/www/mysite then location /owncloud would be interpreted as /var/www/mysite/owncloud which I'm guessing is what you want. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From thuban at yeuxdelibad.net Fri Apr 25 17:17:47 2014 From: thuban at yeuxdelibad.net (Thuban) Date: Fri, 25 Apr 2014 19:17:47 +0200 Subject: use subdirectories instead of subdomains In-Reply-To: <535A9322.5020806@ohlste.in> References: <20140425160634.GB7327@Lothlorien> <535A9322.5020806@ohlste.in> Message-ID: <20140425171747.GA3833@Lothlorien> > > root /var/www/mysite; > > location /owncloud { > > alias /var/www/mysite/owncloud; > > include /etc/nginx/conf.d/owncloud.conf; > > } > > > >, but services like owncloud need `location` rules too, so I finally > >have "location /example is outside location" errors. > > > >How can I configure nginx for this? > > Why are you using an alias here? If the root is /var/www/mysite then > > location /owncloud > > would be interpreted as /var/www/mysite/owncloud which I'm guessing > is what you want. Because the owncloud.conf contains `location` rules like this : location = /robots.txt { allow all; log_not_found off; access_log off; } location / { # The following 2 rules are only needed with webfinger rewrite ^/.well-known/host-meta /public.php?service=host-meta last; rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last; rewrite ^/.well-known/carddav /remote.php/carddav/ redirect; rewrite ^/.well-known/caldav /remote.php/caldav/ redirect; rewrite ^(/core/doc/[^\/]+/)$ $1/index.html; try_files $uri $uri/ index.php; } # deny direct access location ~ ^/(data|config|\.ht|db_structure\.xml|README) { deny all; } # enable php location ~ ^(.+?\.php)(/.*)?$ { try_files $1 = 404; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$1; fastcgi_param PATH_INFO $2; fastcgi_param HTTPS on; fastcgi_pass unix:/var/run/php5-fpm.sock; } -- ,--. : /` ) Thuban | `-' PubKey : http://yeuxdelibad.net/Divers/thuban.pub \_ KeyID : 0x54CD2F2F Envoy? ? partir de mon serveur auto-h?berg? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From jim at ohlste.in Fri Apr 25 18:17:33 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 25 Apr 2014 14:17:33 -0400 Subject: use subdirectories instead of subdomains In-Reply-To: <20140425171747.GA3833@Lothlorien> References: <20140425160634.GB7327@Lothlorien> <535A9322.5020806@ohlste.in> <20140425171747.GA3833@Lothlorien> Message-ID: <535AA6BD.6020701@ohlste.in> Hello, On 4/25/14, 1:17 PM, Thuban wrote: >>> root /var/www/mysite; >>> location /owncloud { >>> alias /var/www/mysite/owncloud; >>> include /etc/nginx/conf.d/owncloud.conf; >>> } >>> >>> , but services like owncloud need `location` rules too, so I finally >>> have "location /example is outside location" errors. >>> >>> How can I configure nginx for this? >> >> Why are you using an alias here? If the root is /var/www/mysite then >> >> location /owncloud >> >> would be interpreted as /var/www/mysite/owncloud which I'm guessing >> is what you want. > > Because the owncloud.conf contains `location` rules like this : > > location = /robots.txt { > allow all; > log_not_found off; > access_log off; > } > location / { > # The following 2 rules are only needed with webfinger > rewrite ^/.well-known/host-meta /public.php?service=host-meta last; > rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last; > > rewrite ^/.well-known/carddav /remote.php/carddav/ redirect; > rewrite ^/.well-known/caldav /remote.php/caldav/ redirect; > > rewrite ^(/core/doc/[^\/]+/)$ $1/index.html; > > try_files $uri $uri/ index.php; > } > > # deny direct access > location ~ ^/(data|config|\.ht|db_structure\.xml|README) { > deny all; > } > > # enable php > location ~ ^(.+?\.php)(/.*)?$ { > try_files $1 = 404; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$1; > fastcgi_param PATH_INFO $2; > fastcgi_param HTTPS on; > fastcgi_pass unix:/var/run/php5-fpm.sock; > > } > I'm still not sure you've actually given a reason why you need an alias. Those rules appear to be more or less a direct copy of the rules which are at http://doc.owncloud.org/server/5.0/admin_manual/installation/installation_others.html. In my personal experience, they work perfectly well on ownCloud 6. You're almost certainly seeing "outside location" errors because of issues with the root path or because of the way you have written the included file. I'd suggest following the exact instructions in the above link without an included file and *without* an unnecessary alias. If they don't work, try rewriting them without nested locations. Use the full path for each location. Read the docs at http://nginx.org/en/docs/http/ngx_http_core_module.html#location to understand how locations are matched and this entire problem will be much easier to understand. If you can get them working without nested locations, you can nest some if you want, but consider reading this thread about nested locations: http://forum.nginx.org/read.php?2,174517,174517. -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." 
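A minimal sketch of the full-path approach described above, trimmed to the essential rules and assuming ownCloud lives in /var/www/mysite/owncloud with the PHP-FPM socket quoted earlier in the thread; the /owncloud prefix is written into every location, so no alias and no nesting are needed:

    server {
        root /var/www/mysite;

        location /owncloud/ {
            try_files $uri $uri/ /owncloud/index.php;
        }

        # deny direct access to internal files
        location ~ ^/owncloud/(?:data|config|\.ht|db_structure\.xml|README) {
            deny all;
        }

        # enable php
        location ~ ^(/owncloud/.+?\.php)(/.*)?$ {
            try_files $1 =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$1;
            fastcgi_param PATH_INFO $2;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }

The .well-known and carddav/caldav rewrites from the original owncloud.conf would need the same /owncloud prefix if they are kept.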
- Mark Twain From nginx-forum at nginx.us Fri Apr 25 19:34:54 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 25 Apr 2014 15:34:54 -0400 Subject: use subdirectories instead of subdomains In-Reply-To: <20140425160634.GB7327@Lothlorien> References: <20140425160634.GB7327@Lothlorien> Message-ID: <04079b458a864a9e325d98bcd34f6d2f.NginxMailingListEnglish@forum.nginx.org> Thuban Wrote: ------------------------------------------------------- > Hello, > I am trying to use subdirectories instead of subdomains because my > host > doesn't support subdomains. > http://forum.nginx.org/read.php?11,249636,249642#msg-249642 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249634,249643#msg-249643 From thuban at yeuxdelibad.net Fri Apr 25 19:59:42 2014 From: thuban at yeuxdelibad.net (Thuban) Date: Fri, 25 Apr 2014 21:59:42 +0200 Subject: use subdirectories instead of subdomains In-Reply-To: <535AA6BD.6020701@ohlste.in> References: <20140425160634.GB7327@Lothlorien> <535A9322.5020806@ohlste.in> <20140425171747.GA3833@Lothlorien> <535AA6BD.6020701@ohlste.in> Message-ID: <20140425195942.GA3541@Lothlorien> * Jim Ohlstein le [25-04-2014 14:17:33 -0400]: > Hello, > > On 4/25/14, 1:17 PM, Thuban wrote: > >>> root /var/www/mysite; > >>> location /owncloud { > >>> alias /var/www/mysite/owncloud; > >>> include /etc/nginx/conf.d/owncloud.conf; > >>> } > >>> > >>>, but services like owncloud need `location` rules too, so I finally > >>>have "location /example is outside location" errors. > >>> > >>>How can I configure nginx for this? > >> > >>Why are you using an alias here? If the root is /var/www/mysite then > >> > >>location /owncloud > >> > >>would be interpreted as /var/www/mysite/owncloud which I'm guessing > >>is what you want. > > > >Because the owncloud.conf contains `location` rules like this : > > > > location = /robots.txt { > > allow all; > > log_not_found off; > > access_log off; > > } > > location / { > > # The following 2 rules are only needed with webfinger > > rewrite ^/.well-known/host-meta /public.php?service=host-meta last; > > rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last; > > > > rewrite ^/.well-known/carddav /remote.php/carddav/ redirect; > > rewrite ^/.well-known/caldav /remote.php/caldav/ redirect; > > > > rewrite ^(/core/doc/[^\/]+/)$ $1/index.html; > > > > try_files $uri $uri/ index.php; > > } > > > > # deny direct access > > location ~ ^/(data|config|\.ht|db_structure\.xml|README) { > > deny all; > > } > > > > # enable php > > location ~ ^(.+?\.php)(/.*)?$ { > > try_files $1 = 404; > > include fastcgi_params; > > fastcgi_param SCRIPT_FILENAME $document_root$1; > > fastcgi_param PATH_INFO $2; > > fastcgi_param HTTPS on; > > fastcgi_pass unix:/var/run/php5-fpm.sock; > > > > } > > > > I'm still not sure you've actually given a reason why you need an alias. > Infact, I don't have a good reason for using alias, I just found this proposal on the web while I was trying to configure this. I also would like to use includes, because I might need to add other services on the host and keeping things clean. The idea is to define some subdirectories as is they were "new root". Sorry if my english isn't clear... Thank you for links, I will read. Regards -- ,--. : /` ) Thuban | `-' PubKey : http://yeuxdelibad.net/Divers/thuban.pub \_ KeyID : 0x54CD2F2F Envoy? ? partir de mon serveur auto-h?berg? -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From jim at ohlste.in Fri Apr 25 20:44:26 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 25 Apr 2014 16:44:26 -0400 Subject: use subdirectories instead of subdomains In-Reply-To: <20140425195942.GA3541@Lothlorien> References: <20140425160634.GB7327@Lothlorien> <535A9322.5020806@ohlste.in> <20140425171747.GA3833@Lothlorien> <535AA6BD.6020701@ohlste.in> <20140425195942.GA3541@Lothlorien> Message-ID: <535AC92A.9080706@ohlste.in> Hello, On 4/25/14, 3:59 PM, Thuban wrote: > * Jim Ohlstein le [25-04-2014 14:17:33 -0400]: >> Hello, >> >> On 4/25/14, 1:17 PM, Thuban wrote: [snip] >> >> I'm still not sure you've actually given a reason why you need an alias. >> > Infact, I don't have a good reason for using alias, I just found this > proposal on the web while I was trying to configure this. Blindly following a "tutorial" without understanding what it does can be a recipe for problems like this. > > I also would like to use includes, because I might need to add other > services on the host and keeping things clean. > > The idea is to define some subdirectories as is they were "new root". I agree that they provide for easy maintenance. However, in this case you have errors are coming from the included file. That's why I said to try putting it all in your nginx.conf first until you get it working. For me (and perhaps only me), I find it easier working with one file when I'm trying to debug a configuration problem. > > Sorry if my english isn't clear... > > Thank you for links, I will read. > -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From nginx-forum at nginx.us Fri Apr 25 22:54:36 2014 From: nginx-forum at nginx.us (kustodian) Date: Fri, 25 Apr 2014 18:54:36 -0400 Subject: No SPDY support in the official repository packages In-Reply-To: <531DB583.8030502@comcast.net> References: <531DB583.8030502@comcast.net> Message-ID: <810d0f81c5c2dde71548a60ea7c1248b.NginxMailingListEnglish@forum.nginx.org> I would just like to confirm that Nginx 1.6.0 packages for Centos 6 are compiled with SPDY support. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245553,249648#msg-249648 From nginx-forum at nginx.us Sat Apr 26 02:35:44 2014 From: nginx-forum at nginx.us (FlappySocks) Date: Fri, 25 Apr 2014 22:35:44 -0400 Subject: Nginx Websocket proxy dropping frames In-Reply-To: References: Message-ID: <2b4d097f5e45976b95d3841bb3839745.NginxMailingListEnglish@forum.nginx.org> After analysing the data stream, Nginx is indeed streaming the data. The difference is Nginx is buffering it into one continuous stream, where as the data from the browsers is fragmented. The websocket implementation I was using needed fixing. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249575,249649#msg-249649 From nginx-forum at nginx.us Sat Apr 26 04:11:11 2014 From: nginx-forum at nginx.us (xfeep) Date: Sat, 26 Apr 2014 00:11:11 -0400 Subject: nginx clojure module v0.2.0-Let MySQL JDBC Driver & Apache HttpClient Fly With Epoll/Kqueue on Nginx Message-ID: <4123dcc19060c174a02d8da425340b56.NginxMailingListEnglish@forum.nginx.org> nginx-clojure v0.2.0 includes new features: (1) non-blocking socket based on coroutine and compatible with largely existing java library such as apache http client, mysql jdbc drivers (2) asynchronous callback API of socket for some advanced usage (3) run initialization clojure code when nginx worker starting (4) provide a build-in tool to make setting of coroutine based socket easier (5) support Linux 32bit x86 now (6) publish [binary release compiled with lastes stable nginx 1.6.0](https://sourceforge.net/projects/nginx-clojure/files/) about Linux x64, Linux i586, Win32 & MacOS X. If the http service should do some slow I/O operations such as access external http service, database, etc. nginx worker will be blocked by those operations and the new user request even static file request will be blocked. It really sucks? Before v0.2.0 the only choice is using thread pool but now we have three choice Now: (1) Coroutine based Socket -- Let MySQL JDBC Driver & Apache HttpClient Fly With Epoll/Kqueue on Nginx (a) Java Socket API Compatible and work well with largely existing java library such as apache http client, mysql jdbc drivers etc. (b) non-blocking, cheap, fast and let one java main thread be able to handle thousands of connections. (c) Your old code **_need not be changed_** and those plain and old java socket based code such as Apache Http Client, MySQL mysql jdbc drivers etc. will be on the fly with epoll/kqueue on Linux/BSD! (d) You must do some steps to get the right class waving configuration file and set it in the nginx conf file. (2) Asynchronous Socket More details here: https://github.com/nginx-clojure/nginx-clojure (3) Thread Pool More details here : https://github.com/nginx-clojure/nginx-clojure More details please visit nginx-clojure github site : https://github.com/nginx-clojure/nginx-clojure Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249650,249650#msg-249650 From nginx-forum at nginx.us Sat Apr 26 04:15:38 2014 From: nginx-forum at nginx.us (xfeep) Date: Sat, 26 Apr 2014 00:15:38 -0400 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: References: Message-ID: <0e9712eead4957df2f9f88b68f54bcca.NginxMailingListEnglish@forum.nginx.org> Hi reberto, Nginx Clojure Module V0.2.0 provides three choices for handling blocked I/O with Java/Clojure Now. (1) Coroutine based Socket It's Non-blocking Java Socket API Compatible and work well with largely existing java library such as apache http client, mysql jdbc drivers etc. non-blocking, cheap, fast and let one java main thread be able to handle thousands of connections. 
(2) Asynchronous Socket (3) Thread Pool Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,249651#msg-249651 From pug at felsing.net Sat Apr 26 09:24:59 2014 From: pug at felsing.net (Christian Felsing) Date: Sat, 26 Apr 2014 11:24:59 +0200 Subject: http://forum.nginx.org/read.php?29,246309,246309#msg-246309 In-Reply-To: <20140425153058.GX34696@mdounin.ru> References: <535A7DB9.2050503@felsing.net> <20140425153058.GX34696@mdounin.ru> Message-ID: <535B7B6B.6030101@felsing.net> Hello, is that patch available somewhere in Nginx Mercurial? Christian Am 25.04.2014 17:30, schrieb Maxim Dounin: > Latest work on this seems to be in this thread: > > http://mailman.nginx.org/pipermail/nginx-devel/2014-March/005067.html > http://mailman.nginx.org/pipermail/nginx-devel/2014-April/005179.html > > The code yet to be improved though. > From nginx-forum at nginx.us Sun Apr 27 12:25:47 2014 From: nginx-forum at nginx.us (leev) Date: Sun, 27 Apr 2014 08:25:47 -0400 Subject: nginx and GeoLite2 In-Reply-To: <20131022073559.GF89843@lo0.su> References: <20131022073559.GF89843@lo0.su> Message-ID: Hi, If you're still looking to use the GeoIP2/GeoLite2 databases, a module is now available at https://github.com/leev/ngx_http_geoip2_module. Cheers, Lee Posted at Nginx Forum: http://forum.nginx.org/read.php?2,243887,249663#msg-249663 From thuban at yeuxdelibad.net Sun Apr 27 16:16:01 2014 From: thuban at yeuxdelibad.net (Thuban) Date: Sun, 27 Apr 2014 18:16:01 +0200 Subject: use subdirectories instead of subdomains In-Reply-To: <535AC92A.9080706@ohlste.in> References: <20140425160634.GB7327@Lothlorien> <535A9322.5020806@ohlste.in> <20140425171747.GA3833@Lothlorien> <535AA6BD.6020701@ohlste.in> <20140425195942.GA3541@Lothlorien> <535AC92A.9080706@ohlste.in> Message-ID: <20140427161601.GA3980@Lothlorien> I managed to have something working by adding the complete path before each location instructions like on this thread [1] It would have been great to define a "new root" instead of full path each times, but whatever. Regards, [1] : https://bbs.archlinux.org/viewtopic.php?pid=1408342#p1408342 -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 836 bytes Desc: Digital signature URL: From nginx-forum at nginx.us Sun Apr 27 20:41:35 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 27 Apr 2014 16:41:35 -0400 Subject: nginx proxy for syncml In-Reply-To: <15b0781fcbe04b0c4f5602b2ffd9c92c.NginxMailingListEnglish@forum.nginx.org> References: <15b0781fcbe04b0c4f5602b2ffd9c92c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1a06c4b0b41d23b9b1a40f3f3b075590.NginxMailingListEnglish@forum.nginx.org> Solved ! Get https://github.com/yaoweibin/nginx_ajp_module add it (works for Windows as well, for which pull requests are outstanding to make it work) and configure it: location /app/syncml { ajp_keep_conn on; ajp_pass tomcatbackend:8009; include ./conf/proxy.conf; proxy_set_header Accept-Encoding ""; keepalive_timeout 600; keepalive_requests 500; proxy_http_version 1.1; proxy_ignore_client_abort on; } Ajp will be added to the next release of nginx for Windows. 
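Returning to the subdirectories-instead-of-subdomains thread above: one way to get the "new root" effect without repeating the full path in front of every rule is to nest the ownCloud locations inside a single prefix location. The following is only an untested sketch built from the paths quoted earlier in that thread; the remaining ownCloud rewrites and try_files rules would need the same /owncloud prefix treatment:

    location ^~ /owncloud {
        # map /owncloud/... onto the real directory
        alias /var/www/mysite/owncloud;

        # nested locations avoid "location ... is outside location" errors,
        # and they can just as well live in an included file
        location ~ ^/owncloud/(?:data|config|\.ht|db_structure\.xml|README) {
            deny all;
        }

        location ~ ^/owncloud(.+?\.php)(/.*)?$ {
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME /var/www/mysite/owncloud$1;
            fastcgi_param PATH_INFO $2;
            fastcgi_param HTTPS on;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }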
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249462,249668#msg-249668 From nginx-forum at nginx.us Sun Apr 27 21:37:26 2014 From: nginx-forum at nginx.us (ura) Date: Sun, 27 Apr 2014 17:37:26 -0400 Subject: 1.7.01 mainline on debian has installed a wrong package Message-ID: after running the upgrade to 1.7.01 mainline version on debian, the nginx version check (service nginx -V) lists: 0.91-ubuntu1 - even though 1.7.01 is listed in the package manager in debian. does this mean that debains repos are serving an incorrect package? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249670,249670#msg-249670 From contact at jpluscplusm.com Sun Apr 27 21:55:23 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 27 Apr 2014 22:55:23 +0100 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: References: Message-ID: On 27 April 2014 22:37, ura wrote: > after running the upgrade to 1.7.01 mainline version on debian, the nginx > version check (service nginx -V) lists: > 0.91-ubuntu1 - even though 1.7.01 is listed in the package manager in > debian. > > does this mean that debains repos are serving an incorrect package? You will have more than one repo enabled which is capable of providing the "nginx" package, I suspect. Try "apt-cache policy nginx" (or nginx-full, depending on what you think you have installed) to see the versions your system knows about, and where it thinks they come from. Then work out how to get the correct version: disable a repo, or install a different package name, or apt-pin a version, or something else; you'll have to have a good poke at it. J From nginx-forum at nginx.us Sun Apr 27 22:00:33 2014 From: nginx-forum at nginx.us (ura) Date: Sun, 27 Apr 2014 18:00:33 -0400 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: References: Message-ID: <0f3cda61a569eb22ea543e4ca3396ead.NginxMailingListEnglish@forum.nginx.org> thanks for assisting. i ran that command and see 3 repos which provide nginx. from what i see there, the 1.7.0-1 wheezy package is the candidate and also has been installed. i just checked my local development machine which is running lmde - the version of nginx is also the same spurious ubuntu 0.91 version.. yet the 2 machine are using different repos. i am not seeing 0.91 listed anywhere other than when i run service nginx -V Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249670,249672#msg-249672 From contact at jpluscplusm.com Sun Apr 27 22:11:17 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 27 Apr 2014 23:11:17 +0100 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: <0f3cda61a569eb22ea543e4ca3396ead.NginxMailingListEnglish@forum.nginx.org> References: <0f3cda61a569eb22ea543e4ca3396ead.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 27 April 2014 23:00, ura wrote: > thanks for assisting. i ran that command and see 3 repos which provide > nginx. from what i see there, the 1.7.0-1 wheezy package is the candidate > and also has been installed. > > i just checked my local development machine which is running lmde - the > version of nginx is also the same spurious ubuntu 0.91 version.. yet the 2 > machine are using different repos. > > i am not seeing 0.91 listed anywhere other than when i run service nginx -V "which nginx" will show you the specific binary you're running when you execute nginx-V. It may well be a different one from that which your initscript invokes. 
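For reference, the checks suggested in this thread look like this on a Debian-style system (the package name "nginx" is an assumption; it may be nginx-full or similar depending on the repository):

    # which repositories offer the package, plus installed and candidate versions
    apt-cache policy nginx

    # which nginx binary is first in the shell's PATH
    which nginx

    # ask that binary directly for its version and configure arguments
    nginx -V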
"dpkg -L nginx" (or whichever package you have installed) will show you which files on disk are from that package. I suspect you'll discover your actual problem by examining the the output of these two commands. J From nginx-forum at nginx.us Sun Apr 27 22:14:43 2014 From: nginx-forum at nginx.us (ura) Date: Sun, 27 Apr 2014 18:14:43 -0400 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: References: Message-ID: <916921b1bc394e3aa76c2ad0bbcfe178.NginxMailingListEnglish@forum.nginx.org> those two commands don't show any version numbers, so i am not presently any closer to identifying the issue here. the paths returned look fine to me, from what i know already. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249670,249674#msg-249674 From contact at jpluscplusm.com Sun Apr 27 22:23:53 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sun, 27 Apr 2014 23:23:53 +0100 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: <916921b1bc394e3aa76c2ad0bbcfe178.NginxMailingListEnglish@forum.nginx.org> References: <916921b1bc394e3aa76c2ad0bbcfe178.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 27 April 2014 23:14, ura wrote: > those two commands don't show any version numbers No, they don't - they show paths. > so i am not presently any > closer to identifying the issue here. > the paths returned look fine to me, from what i know already. Ok. Good luck finding the problem. Check your init script. J From vbart at nginx.com Sun Apr 27 22:24:31 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 28 Apr 2014 02:24:31 +0400 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: References: Message-ID: <14572021.sUdDEhFkqQ@vbart-laptop> On Sunday 27 April 2014 17:37:26 ura wrote: > after running the upgrade to 1.7.01 mainline version on debian, the nginx > version check (service nginx -V) lists: > 0.91-ubuntu1 - even though 1.7.01 is listed in the package manager in > debian. > > does this mean that debains repos are serving an incorrect package? > By running "service nginx -V" you're checking the version of "service". wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sun Apr 27 22:29:50 2014 From: nginx-forum at nginx.us (ura) Date: Sun, 27 Apr 2014 18:29:50 -0400 Subject: 1.7.01 mainline on debian has installed a wrong package In-Reply-To: <14572021.sUdDEhFkqQ@vbart-laptop> References: <14572021.sUdDEhFkqQ@vbart-laptop> Message-ID: aha! yes, i needed to remove 'service'. now i see the correct 1.7 version code. thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249670,249677#msg-249677 From joydeep.bakshi at netzrezepte.de Mon Apr 28 11:43:57 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 28 Apr 2014 17:13:57 +0530 Subject: can multiple domain points a single nginx host with server_name ? Message-ID: Hello list, I am in a process to configure nginx infront of apache. For vhost having single domain like www.mydomain.com & mydomain.com ; there is no issue to configure by server_name directive. But what to do where multiple domain points to a single apache vhost using apache server_alias directive ? Can nginx server_name simply points all those domains to the required vhost of apache ? is nginx [ server_name test1.com test2.com www.test3.com ] equivalent to apache [ servername test1.com serveralias test2.com www.test3.com ] ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Apr 28 12:25:17 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Mon, 28 Apr 2014 08:25:17 -0400 Subject: Nginx as a single process Message-ID: <8c8d5db69728665882dc41c8ef8cf8e6.NginxMailingListEnglish@forum.nginx.org> Hi, Can anyone please help me to run nginx as a single process model (threads instead of processes). I am interested on this as I am more incline to run this with a usermode TCP like netmap-rumptcpip. Anyone has done this or investigating on this ? Thanks, Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249686,249686#msg-249686 From nginx-forum at nginx.us Mon Apr 28 12:35:40 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Mon, 28 Apr 2014 08:35:40 -0400 Subject: Nginx as a single process In-Reply-To: <8c8d5db69728665882dc41c8ef8cf8e6.NginxMailingListEnglish@forum.nginx.org> References: <8c8d5db69728665882dc41c8ef8cf8e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7fb295422bdcf3974415d5e33481e76b.NginxMailingListEnglish@forum.nginx.org> I tried to compile 1.6.0 with --with-threads. But, looks like this is no longer supported. #--with-threads=*) USE_THREADS="$value" ;; #--with-threads) USE_THREADS="pthreads" ;; Can anyone please comment on this. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249686,249688#msg-249688 From nginx-forum at nginx.us Mon Apr 28 12:50:01 2014 From: nginx-forum at nginx.us (roinacio) Date: Mon, 28 Apr 2014 08:50:01 -0400 Subject: Rewrite with strange arguments In-Reply-To: <747ad63c2a92bdebadc478288eed73f4.NginxMailingListEnglish@forum.nginx.org> References: <747ad63c2a92bdebadc478288eed73f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Any help ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249630,249690#msg-249690 From mdounin at mdounin.ru Mon Apr 28 12:55:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Apr 2014 16:55:36 +0400 Subject: can multiple domain points a single nginx host with server_name ? In-Reply-To: References: Message-ID: <20140428125536.GB34696@mdounin.ru> Hello! On Mon, Apr 28, 2014 at 05:13:57PM +0530, Joydeep Bakshi wrote: > Hello list, > > I am in a process to configure nginx infront of apache. For vhost having > single domain like www.mydomain.com & mydomain.com ; there is no issue to > configure by server_name directive. > > But what to do where multiple domain points to a single apache vhost using > apache server_alias directive ? Can nginx server_name simply points all > those domains to the required vhost of apache ? > > is > > nginx [ server_name test1.com test2.com www.test3.com ] > > equivalent to > > apache [ > servername test1.com > serveralias test2.com www.test3.com ] > > ? Yes. http://nginx.org/r/server_name -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Apr 28 12:57:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Apr 2014 16:57:11 +0400 Subject: Nginx as a single process In-Reply-To: <7fb295422bdcf3974415d5e33481e76b.NginxMailingListEnglish@forum.nginx.org> References: <8c8d5db69728665882dc41c8ef8cf8e6.NginxMailingListEnglish@forum.nginx.org> <7fb295422bdcf3974415d5e33481e76b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140428125711.GC34696@mdounin.ru> Hello! On Mon, Apr 28, 2014 at 08:35:40AM -0400, nginxsantos wrote: > I tried to compile 1.6.0 with --with-threads. But, looks like this is no > longer supported. > > #--with-threads=*) USE_THREADS="$value" ;; > #--with-threads) USE_THREADS="pthreads" ;; > > Can anyone please comment on this. 
It's a long-dead code, a leftover from previous experiments with threads. It doesn't work and shouldn't be used. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Apr 28 13:07:37 2014 From: nginx-forum at nginx.us (nginxsantos) Date: Mon, 28 Apr 2014 09:07:37 -0400 Subject: Nginx as a single process In-Reply-To: <20140428125711.GC34696@mdounin.ru> References: <20140428125711.GC34696@mdounin.ru> Message-ID: <1aaf91e9700534f1e16a0fb0ea0c109f.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, Thanks for the response. Are you saying if I convert the processes to threads may be through pthread or rfork, it is not going to work? The thread model is not supported at all? Thanks, Santos Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249686,249694#msg-249694 From contact at jpluscplusm.com Mon Apr 28 13:16:43 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 28 Apr 2014 14:16:43 +0100 Subject: can multiple domain points a single nginx host with server_name ? In-Reply-To: References: Message-ID: On 28 Apr 2014 12:44, "Joydeep Bakshi" wrote: > is > > nginx [ server_name test1.com test2.com www.test3.com ] > > equivalent to > > apache [ > servername test1.com > serveralias test2.com www.test3.com ] > > ? As Maxim says, yes. If you have hardcoded names, i believe there are 3 ways to format it: 1 ------------------- server_name foo.example.com foo2.example.com foo3.example.com; 2 ------------------- server_name foo.example.com foo2.example.com foo3.example.com; 3 ------------------- server_name foo.example.com; server_name foo2.example.com; server_name foo3.example.com; --------------------- Note the different semicolon placement in each. They are, I believe, functionally and performance-ly identical. They each have their different uses depending on how you amend, interrogate and share your configurations. E.g. #3 is handy when you'll be grepping for the fixed string "server_name foo2.example.com". Consistency is probably most important, however: choose one style and stick to it :-) HTH, J -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Apr 28 13:26:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Apr 2014 17:26:58 +0400 Subject: Nginx as a single process In-Reply-To: <1aaf91e9700534f1e16a0fb0ea0c109f.NginxMailingListEnglish@forum.nginx.org> References: <20140428125711.GC34696@mdounin.ru> <1aaf91e9700534f1e16a0fb0ea0c109f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140428132657.GD34696@mdounin.ru> Hello! On Mon, Apr 28, 2014 at 09:07:37AM -0400, nginxsantos wrote: > Hi Maxim, > > Thanks for the response. > > Are you saying if I convert the processes to threads may be through pthread > or rfork, it is not going to work? The thread model is not supported at > all? Currently most of thread-related code is broken. It's not expected to work. -- Maxim Dounin http://nginx.org/ From joydeep.bakshi at netzrezepte.de Mon Apr 28 13:27:18 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 28 Apr 2014 18:57:18 +0530 Subject: can multiple domain points a single nginx host with server_name ? In-Reply-To: References: Message-ID: Thanks to both of you On Mon, Apr 28, 2014 at 6:46 PM, Jonathan Matthews wrote: > On 28 Apr 2014 12:44, "Joydeep Bakshi" > wrote: > > is > > > > nginx [ server_name test1.com test2.com www.test3.com ] > > > > equivalent to > > > > apache [ > > servername test1.com > > serveralias test2.com www.test3.com ] > > > > ? > > As Maxim says, yes. 
> > If you have hardcoded names, i believe there are 3 ways to format it: > > 1 ------------------- > server_name foo.example.com foo2.example.com foo3.example.com; > 2 ------------------- > server_name foo.example.com > foo2.example.com > foo3.example.com; > 3 ------------------- > server_name foo.example.com; > server_name foo2.example.com; > server_name foo3.example.com; > --------------------- > > Note the different semicolon placement in each. They are, I believe, > functionally and performance-ly identical. > > They each have their different uses depending on how you amend, > interrogate and share your configurations. E.g. #3 is handy when you'll be > grepping for the fixed string "server_name foo2.example.com". > > Consistency is probably most important, however: choose one style and > stick to it :-) > > HTH, > J > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Mon Apr 28 13:33:28 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 28 Apr 2014 19:03:28 +0530 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! Message-ID: Hello list, To get the wan IP in apache log I have already enabl mod_rapf in opensude server. # a2enmod rpaf mod_rpaf "rpaf" already present a2enmod mod_rpaf "mod_rpaf" already present Here is a nginx vhost section for passing IP to apache log [......] proxy_redirect off; # Do not redirect this proxy - It needs to be pass-through proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Server-Address $server_addr; proxy_pass_header Set-Cookie; [......] After restarting both apache and nginx, the apache log for that specific vhost still showing 127.0.0.1 as source IP at apache log. Am I missing something ? Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From ar at xlrs.de Mon Apr 28 12:47:29 2014 From: ar at xlrs.de (Axel) Date: Mon, 28 Apr 2014 14:47:29 +0200 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: References: Message-ID: <5aad1b9da571f089c1cd61b9aa25a361@xlrs.de> have you configured apache to log x-forward-for instead of your host header? regards, axel On 2014-04-28 15:33, Joydeep Bakshi wrote: > Hello list, > > To get the wan IP in apache log I have already enabl mod_rapf in > opensude server. > > # a2enmod rpaf mod_rpaf > "rpaf" already present > > ?a2enmod ?mod_rpaf > "mod_rpaf" already present > > Here is a nginx vhost section for passing IP to apache log > > [......] > ? proxy_redirect off; # Do not redirect this proxy - It needs to be > pass-through > ? proxy_set_header Host $host; > ? proxy_set_header X-Real-IP $remote_addr; > ? proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > ? proxy_set_header X-Server-Address $server_addr; > ? proxy_pass_header Set-Cookie; > [......] > > After restarting both apache and nginx, the apache log for that > specific vhost still showing 127.0.0.1 as source IP at apache log. > > Am I missing something ? 
> > Thanks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Apr 28 13:55:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Apr 2014 17:55:37 +0400 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: References: Message-ID: <20140428135537.GE34696@mdounin.ru> Hello! On Mon, Apr 28, 2014 at 07:03:28PM +0530, Joydeep Bakshi wrote: > Hello list, > > To get the wan IP in apache log I have already enabl mod_rapf in opensude > server. > > # a2enmod rpaf mod_rpaf > "rpaf" already present > > a2enmod mod_rpaf > "mod_rpaf" already present > > Here is a nginx vhost section for passing IP to apache log > > [......] > proxy_redirect off; # Do not redirect this proxy - It needs to be > pass-through > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Server-Address $server_addr; > proxy_pass_header Set-Cookie; > [......] > > After restarting both apache and nginx, the apache log for that specific > vhost still showing 127.0.0.1 as source IP at apache log. > > Am I missing something ? Most notably, you've missed configuration of mod_rpaf. It needs to be enabled in configuration, and you have to at least configure IP address it will accept headers from, as well as a header to look into. http://www.stderr.net/apache/rpaf/ -- Maxim Dounin http://nginx.org/ From ar at xlrs.de Mon Apr 28 13:03:22 2014 From: ar at xlrs.de (Axel) Date: Mon, 28 Apr 2014 15:03:22 +0200 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: <20140428135537.GE34696@mdounin.ru> References: <20140428135537.GE34696@mdounin.ru> Message-ID: <5890570a145f7112ae3e09a890cc1b2f@xlrs.de> Hello, are there any advantages of using mod_rpaf instead of using and logging x-forward-for headers? regards, Axel On 2014-04-28 15:55, Maxim Dounin wrote: > Hello! > > On Mon, Apr 28, 2014 at 07:03:28PM +0530, Joydeep Bakshi wrote: > >> Hello list, >> >> To get the wan IP in apache log I have already enabl mod_rapf in >> opensude >> server. >> >> # a2enmod rpaf mod_rpaf >> "rpaf" already present >> >> a2enmod mod_rpaf >> "mod_rpaf" already present >> >> Here is a nginx vhost section for passing IP to apache log >> >> [......] >> proxy_redirect off; # Do not redirect this proxy - It needs to be >> pass-through >> proxy_set_header Host $host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header X-Server-Address $server_addr; >> proxy_pass_header Set-Cookie; >> [......] >> >> After restarting both apache and nginx, the apache log for that >> specific >> vhost still showing 127.0.0.1 as source IP at apache log. >> >> Am I missing something ? > > Most notably, you've missed configuration of mod_rpaf. It needs > to be enabled in configuration, and you have to at least configure > IP address it will accept headers from, as well as a header to > look into. > > http://www.stderr.net/apache/rpaf/ From joydeep.bakshi at netzrezepte.de Mon Apr 28 14:06:32 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 28 Apr 2014 19:36:32 +0530 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! 
In-Reply-To: <20140428135537.GE34696@mdounin.ru> References: <20140428135537.GE34696@mdounin.ru> Message-ID: Hello Axel & Maxim, I have modified the apache log format as below LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" common and get the source IP at /var/log/apache/access.log I wonder if I there is any scope to add more info in the common log as it is a production server. And also need the same for ErrorLog too. Any suggestion ? Thanks On Mon, Apr 28, 2014 at 7:25 PM, Maxim Dounin wrote: > Hello! > > On Mon, Apr 28, 2014 at 07:03:28PM +0530, Joydeep Bakshi wrote: > > > Hello list, > > > > To get the wan IP in apache log I have already enabl mod_rapf in opensude > > server. > > > > # a2enmod rpaf mod_rpaf > > "rpaf" already present > > > > a2enmod mod_rpaf > > "mod_rpaf" already present > > > > Here is a nginx vhost section for passing IP to apache log > > > > [......] > > proxy_redirect off; # Do not redirect this proxy - It needs to be > > pass-through > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Server-Address $server_addr; > > proxy_pass_header Set-Cookie; > > [......] > > > > After restarting both apache and nginx, the apache log for that specific > > vhost still showing 127.0.0.1 as source IP at apache log. > > > > Am I missing something ? > > Most notably, you've missed configuration of mod_rpaf. It needs > to be enabled in configuration, and you have to at least configure > IP address it will accept headers from, as well as a header to > look into. > > http://www.stderr.net/apache/rpaf/ > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Mon Apr 28 14:13:49 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Mon, 28 Apr 2014 19:43:49 +0530 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: References: <20140428135537.GE34696@mdounin.ru> Message-ID: Even IP get logged when disable the rpaf !!! little confused. On Mon, Apr 28, 2014 at 7:36 PM, Joydeep Bakshi < joydeep.bakshi at netzrezepte.de> wrote: > Hello Axel & Maxim, > > I have modified the apache log format as below > > LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" > \"%{User-Agent}i\"" common > > and get the source IP at /var/log/apache/access.log > > I wonder if I there is any scope to add more info in the common log as it > is a production server. And also need the same for ErrorLog too. > > Any suggestion ? > > Thanks > > > > On Mon, Apr 28, 2014 at 7:25 PM, Maxim Dounin wrote: > >> Hello! >> >> On Mon, Apr 28, 2014 at 07:03:28PM +0530, Joydeep Bakshi wrote: >> >> > Hello list, >> > >> > To get the wan IP in apache log I have already enabl mod_rapf in >> opensude >> > server. >> > >> > # a2enmod rpaf mod_rpaf >> > "rpaf" already present >> > >> > a2enmod mod_rpaf >> > "mod_rpaf" already present >> > >> > Here is a nginx vhost section for passing IP to apache log >> > >> > [......] 
>> > proxy_redirect off; # Do not redirect this proxy - It needs to be >> > pass-through >> > proxy_set_header Host $host; >> > proxy_set_header X-Real-IP $remote_addr; >> > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> > proxy_set_header X-Server-Address $server_addr; >> > proxy_pass_header Set-Cookie; >> > [......] >> > >> > After restarting both apache and nginx, the apache log for that specific >> > vhost still showing 127.0.0.1 as source IP at apache log. >> > >> > Am I missing something ? >> >> Most notably, you've missed configuration of mod_rpaf. It needs >> to be enabled in configuration, and you have to at least configure >> IP address it will accept headers from, as well as a header to >> look into. >> >> http://www.stderr.net/apache/rpaf/ >> >> -- >> Maxim Dounin >> http://nginx.org/ >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ar at xlrs.de Mon Apr 28 13:42:10 2014 From: ar at xlrs.de (Axel) Date: Mon, 28 Apr 2014 15:42:10 +0200 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: References: <20140428135537.GE34696@mdounin.ru> Message-ID: <5f2fef973788adce788e4f0382fea9d1@xlrs.de> You only need one. If you use mod_rpaf you have need to configure it like Maxim told you. If you change your common logformat to log x-forwarded-for headers you don't need mod_rpaf regards, Axel On 2014-04-28 16:13, Joydeep Bakshi wrote: > Even IP get logged when disable the rpaf !!! > little confused. > > On Mon, Apr 28, 2014 at 7:36 PM, Joydeep Bakshi > wrote: > >> Hello Axel & Maxim, >> >> I have modified the apache log format as below >> >> LogFormat "%{X-Forwarded-For}i %l %u %t "%r" %>s %b "%{Referer}i" >> "%{User-Agent}i"" common >> >> and get the source IP at /var/log/apache/access.log ? >> >> I wonder if I there is any scope to add more info in the common log >> as it is a production server. And also need the same for ErrorLog >> too. >> >> Any suggestion ? >> >> Thanks >> >> On Mon, Apr 28, 2014 at 7:25 PM, Maxim Dounin >> wrote: >> >>> Hello! >>> >>> On Mon, Apr 28, 2014 at 07:03:28PM +0530, Joydeep Bakshi wrote: >>> >>>> Hello list, >>>> >>>> To get the wan IP in apache log I have already enabl mod_rapf >>> in opensude >>>> server. >>>> >>>> # a2enmod rpaf mod_rpaf >>>> "rpaf" already present >>>> >>>> ?a2enmod ?mod_rpaf >>>> "mod_rpaf" already present >>>> >>>> Here is a nginx vhost section for passing IP to apache log >>>> >>>> [......] >>>> ? proxy_redirect off; # Do not redirect this proxy - It needs >>> to be >>>> pass-through >>>> ? proxy_set_header Host $host; >>>> ? proxy_set_header X-Real-IP $remote_addr; >>>> ? proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >>>> ? proxy_set_header X-Server-Address $server_addr; >>>> ? proxy_pass_header Set-Cookie; >>>> [......] >>>> >>>> After restarting both apache and nginx, the apache log for that >>> specific >>>> vhost still showing 127.0.0.1 as source IP at apache log. >>>> >>>> Am I missing something ? >>> >>> Most notably, you've missed configuration of mod_rpaf. ?It needs >>> to be enabled in configuration, and you have to at least >>> configure >>> IP address it will accept headers from, as well as a header to >>> look into. 
>>> >>> http://www.stderr.net/apache/rpaf/ [1] >>> >>> -- >>> Maxim Dounin >>> http://nginx.org/ [2] >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx [3] > > > > Links: > ------ > [1] http://www.stderr.net/apache/rpaf/ > [2] http://nginx.org/ > [3] http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Mon Apr 28 14:46:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 28 Apr 2014 18:46:16 +0400 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: References: <20140428135537.GE34696@mdounin.ru> Message-ID: <20140428144616.GF34696@mdounin.ru> Hello! On Mon, Apr 28, 2014 at 07:43:49PM +0530, Joydeep Bakshi wrote: > Even IP get logged when disable the rpaf !!! > little confused. Please read mod_rpaf documentation for further reference, I've already provided a link. It's really not related to nginx and offtopic here. Thank you for cooperation. > > > > > On Mon, Apr 28, 2014 at 7:36 PM, Joydeep Bakshi < > joydeep.bakshi at netzrezepte.de> wrote: > > > Hello Axel & Maxim, > > > > I have modified the apache log format as below > > > > LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b \"%{Referer}i\" > > \"%{User-Agent}i\"" common > > > > and get the source IP at /var/log/apache/access.log > > > > I wonder if I there is any scope to add more info in the common log as it > > is a production server. And also need the same for ErrorLog too. > > > > Any suggestion ? > > > > Thanks > > > > > > > > On Mon, Apr 28, 2014 at 7:25 PM, Maxim Dounin wrote: > > > >> Hello! > >> > >> On Mon, Apr 28, 2014 at 07:03:28PM +0530, Joydeep Bakshi wrote: > >> > >> > Hello list, > >> > > >> > To get the wan IP in apache log I have already enabl mod_rapf in > >> opensude > >> > server. > >> > > >> > # a2enmod rpaf mod_rpaf > >> > "rpaf" already present > >> > > >> > a2enmod mod_rpaf > >> > "mod_rpaf" already present > >> > > >> > Here is a nginx vhost section for passing IP to apache log > >> > > >> > [......] > >> > proxy_redirect off; # Do not redirect this proxy - It needs to be > >> > pass-through > >> > proxy_set_header Host $host; > >> > proxy_set_header X-Real-IP $remote_addr; > >> > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > >> > proxy_set_header X-Server-Address $server_addr; > >> > proxy_pass_header Set-Cookie; > >> > [......] > >> > > >> > After restarting both apache and nginx, the apache log for that specific > >> > vhost still showing 127.0.0.1 as source IP at apache log. > >> > > >> > Am I missing something ? > >> > >> Most notably, you've missed configuration of mod_rpaf. It needs > >> to be enabled in configuration, and you have to at least configure > >> IP address it will accept headers from, as well as a header to > >> look into. 
> >> > >> http://www.stderr.net/apache/rpaf/ > >> > >> -- > >> Maxim Dounin > >> http://nginx.org/ > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > >> > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Apr 28 15:52:41 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 28 Apr 2014 11:52:41 -0400 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: References: Message-ID: <095efdd7f89482ae96983e2c0e8fc8d1.NginxMailingListEnglish@forum.nginx.org> Might be missing this, from an old Apache config: # Configuration for mod_rpaf RPAFenable On RPAFproxy_ips 192.168.2.123 # RPAFsethostname host.your.domain # End of mod_rpaf. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249698,249710#msg-249710 From siefke_listen at web.de Mon Apr 28 22:58:16 2014 From: siefke_listen at web.de (Silvio Siefke) Date: Tue, 29 Apr 2014 00:58:16 +0200 Subject: 502 Gateway PHP Message-ID: <20140429005816.5b9ecff5e3a748258dee9e5d@web.de> Hello, i try to run a database management system and no matters what i use, i become 502 Bad Gateway. the error log say siefke /var/www/siefke/log $ cat error.log 2014/04/29 00:52:05 [error] 20458#0: *1 recv() failed (104: Connection reset by peer) while reading response header from upstream, client: 10.42.0.20, server: silviosiefke_de, request: "POST /adminer.html HTTP/1.1", upstream: "fastcgi://unix:/var/tmp/php/silviosiefke.de.sock:", host: "silviosiefke_de", referrer: "http://silviosiefke_de/adminer.html" I found strange because all other of php work only the database managements want not work. Phpmyadmin,. sqlbuddy, adminer ever 502. The configuration in nginx of php path: location ~ \.(php|htm|html)$ { try_files $uri =404; fastcgi_pass unix:/var/tmp/php/siefke.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include /etc/nginx/configuration/fastcgi_params; } php config: [siefke] listen = /var/tmp/php/siefke.sock listen.owner = nginx listen.group = nginx listen.mode = 0660 user = siefke group = siefke pm = ondemand pm.max_children = 100 pm.process_idle_timeout = 5s pm.start_servers = 1 pm.min_spare_servers = 1 pm.max_spare_servers = 35 security.limit_extensions = .php .html php_flag[display_errors] = off php_admin_value[error_log] = /var/www/siefke/log/php.log php_admin_flag[log_errors] = on php_admin_value[memory_limit] = 32M php_admin_value[open_basedir] = /var/www/siefke:/usr/share/php php_admin_value[session.save_path]= /var/www/siefke/tmp php_admin_value[include_path] = /var/www/siefke/inc/php:/usr/share/php Contao run without problems and the small scripts of php too. Has someone an advice what running wrong? Thank you for help & Nice Day Silvio From stl at wiredrive.com Mon Apr 28 23:06:48 2014 From: stl at wiredrive.com (Scott Larson) Date: Mon, 28 Apr 2014 16:06:48 -0700 Subject: 502 Gateway PHP In-Reply-To: <20140429005816.5b9ecff5e3a748258dee9e5d@web.de> References: <20140429005816.5b9ecff5e3a748258dee9e5d@web.de> Message-ID: I'm not personally a fan of telling nginx to glob all .html files for PHP processing, but maybe that's just me and unrelated. If other PHP apps are working I'd dig into the logging for that. 
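On the point about globbing .html into the PHP handler: a more conservative variant of the location block quoted above passes only .php scripts to PHP-FPM. This is just an untested sketch reusing the socket and include paths from that configuration; note that adminer.html in the setup above really is a PHP script, so with a rule like this it would have to be renamed to adminer.php or given its own location:

    location ~ \.php$ {
        try_files $uri =404;
        include /etc/nginx/configuration/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/tmp/php/siefke.sock;
    }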
Generally when I run into situations like this it has nothing to do with nginx and instead is something within the PHP config. Exceeding memory is a frequent culprit, followed by the app not liking the values sent to it within SCRIPT_FILENAME or PATH_INFO. *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 8238 ext. 1106310 943 2078 faxwww.wiredrive.com www.twitter.com/wiredrive www.facebook.com/wiredrive * On Mon, Apr 28, 2014 at 3:58 PM, Silvio Siefke wrote: > Hello, > > i try to run a database management system and no matters what i use, i > become 502 Bad Gateway. > > the error log say > siefke /var/www/siefke/log $ cat error.log > 2014/04/29 00:52:05 [error] 20458#0: *1 recv() failed (104: Connection > reset by peer) while reading response header from upstream, client: > 10.42.0.20, server: silviosiefke_de, request: "POST /adminer.html > HTTP/1.1", upstream: "fastcgi://unix:/var/tmp/php/silviosiefke.de.sock:", > host: "silviosiefke_de", referrer: "http://silviosiefke_de/adminer.html" > > I found strange because all other of php work only the database managements > want not work. Phpmyadmin,. sqlbuddy, adminer ever 502. > > The configuration in nginx of php path: > location ~ \.(php|htm|html)$ { > try_files $uri =404; > fastcgi_pass unix:/var/tmp/php/siefke.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include /etc/nginx/configuration/fastcgi_params; > } > > php config: > [siefke] > listen = /var/tmp/php/siefke.sock > listen.owner = nginx > listen.group = nginx > listen.mode = 0660 > user = siefke > group = siefke > pm = ondemand > pm.max_children = 100 > pm.process_idle_timeout = 5s > pm.start_servers = 1 > pm.min_spare_servers = 1 > pm.max_spare_servers = 35 > security.limit_extensions = .php .html > php_flag[display_errors] = off > php_admin_value[error_log] = /var/www/siefke/log/php.log > php_admin_flag[log_errors] = on > php_admin_value[memory_limit] = 32M > php_admin_value[open_basedir] = /var/www/siefke:/usr/share/php > php_admin_value[session.save_path]= /var/www/siefke/tmp > php_admin_value[include_path] = /var/www/siefke/inc/php:/usr/share/php > > > Contao run without problems and the small scripts of php too. Has someone > an advice what running wrong? > > Thank you for help & Nice Day > Silvio > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Tue Apr 29 05:41:28 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 29 Apr 2014 11:11:28 +0530 Subject: The http output chain is empty. Message-ID: Hello! Could some one explain what could be the reason for below alert and when exactly it can occur ? "the http output chain is empty" I have been noticing this alert in error log of Nginx-1.5.12. Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Tue Apr 29 06:58:59 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Tue, 29 Apr 2014 12:28:59 +0530 Subject: mod_rpaf enabled; still apache log showing 127.0.0.1 as source !! In-Reply-To: <095efdd7f89482ae96983e2c0e8fc8d1.NginxMailingListEnglish@forum.nginx.org> References: <095efdd7f89482ae96983e2c0e8fc8d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello list, Thanks a lot. 
After following your suggestions and the link Maxim shared, I have compiled the module on my server and put the required configuration in httpd.conf. Now the WAN IP appears in Apache's access.log.
BTW: the error log still shows the local IP; is there any way to get the remote IP into that log as well?
Once again, many thanks to you all.

On Mon, Apr 28, 2014 at 9:22 PM, itpp2012 wrote:
> Might be missing this, from an old Apache config:
>
> # Configuration for mod_rpaf
>
> RPAFenable On
> RPAFproxy_ips 192.168.2.123
> # RPAFsethostname host.your.domain
>
> # End of mod_rpaf.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,249698,249710#msg-249710
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Apr 29 07:53:16 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 29 Apr 2014 03:53:16 -0400
Subject: The http output chain is empty.
In-Reply-To:
References:
Message-ID: <66c78867d534e1876428f94e7dfd67c8.NginxMailingListEnglish@forum.nginx.org>

Maybe related to http://trac.nginx.org/nginx/ticket/132

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249721,249725#msg-249725

From joydeep.bakshi at netzrezepte.de Tue Apr 29 10:52:45 2014
From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi)
Date: Tue, 29 Apr 2014 16:22:45 +0530
Subject: nginx logging with huge vhosts
Message-ID:

Hello,

What is an efficient way to log both access and errors in nginx when there are a huge number of vhosts? Is there a CustomLog equivalent that can collect the combined_vhost information? Or is it better to configure separate access and error logs per vhost when there are so many?

Please suggest. Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Tue Apr 29 11:31:38 2014
From: nginx-forum at nginx.us (crespin)
Date: Tue, 29 Apr 2014 07:31:38 -0400
Subject: Be aware of disconnection
Message-ID: <5ab8a65a5456b3acbf13528d00b093e3.NginxMailingListEnglish@forum.nginx.org>

Hello,
I'm writing a module that stores requests in an in-memory array. An internal thread sends the requests on to a cluster.
I need to be aware of disconnections for some specific post-processing, but I'm unable to find any API for this. How do I set up a handler for disconnection?

Thanks and regards

Yves

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249728,249728#msg-249728

From mdounin at mdounin.ru Tue Apr 29 11:50:04 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 29 Apr 2014 15:50:04 +0400
Subject: Be aware of disconnection
In-Reply-To: <5ab8a65a5456b3acbf13528d00b093e3.NginxMailingListEnglish@forum.nginx.org>
References: <5ab8a65a5456b3acbf13528d00b093e3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140429115004.GQ34696@mdounin.ru>

Hello!

On Tue, Apr 29, 2014 at 07:31:38AM -0400, crespin wrote:

> Hello,
> I'm writing a module that stores requests in an in-memory array. An internal
> thread sends the requests on to a cluster.
> I need to be aware of disconnections for some specific post-processing, but
> I'm unable to find any API for this. How do I set up a handler for disconnection?

Install a cleanup handler on a connection or request pool.
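In code, registering such a cleanup handler from a request handler might look roughly like the sketch below. The struct and function names are made up for illustration; only ngx_pool_cleanup_add() and the handler/data members of ngx_pool_cleanup_t are the actual API (a connection pool, c->pool, can be used the same way):

    #include <ngx_config.h>
    #include <ngx_core.h>
    #include <ngx_http.h>

    typedef struct {
        ngx_http_request_t  *request;   /* whatever the module needs at cleanup time */
    } my_track_cleanup_ctx_t;           /* hypothetical name */

    static void
    my_track_cleanup(void *data)        /* hypothetical name */
    {
        my_track_cleanup_ctx_t  *ctx = data;

        /* runs when the pool is destroyed, which also covers the case of
         * the client disconnecting: do the post-processing here */
        (void) ctx;
    }

    static ngx_int_t
    my_track_register_cleanup(ngx_http_request_t *r)   /* hypothetical name */
    {
        ngx_pool_cleanup_t      *cln;
        my_track_cleanup_ctx_t  *ctx;

        /* allocates the cleanup entry plus ctx bytes from the request pool */
        cln = ngx_pool_cleanup_add(r->pool, sizeof(my_track_cleanup_ctx_t));
        if (cln == NULL) {
            return NGX_ERROR;
        }

        cln->handler = my_track_cleanup;

        ctx = cln->data;                /* points at the bytes allocated above */
        ctx->request = r;

        return NGX_OK;
    }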
Take a look at limit_conn module to see an example: http://hg.nginx.org/nginx/file/3a48775f1535/src/http/modules/ngx_http_limit_conn_module.c#l258 -- Maxim Dounin http://nginx.org/ From richard at kearsley.me Tue Apr 29 12:28:37 2014 From: richard at kearsley.me (Richard Kearsley) Date: Tue, 29 Apr 2014 13:28:37 +0100 Subject: nginx SSL/SNI phase In-Reply-To: <55cde0a8acab19dcce1620956258f3c1.NginxMailingListEnglish@forum.nginx.org> References: <532694C4.4050909@kearsley.me> <55cde0a8acab19dcce1620956258f3c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <535F9AF5.4020509@kearsley.me> On 26/03/14 14:09, stremovsky wrote: > I think it can be a great feature for big production environments ! > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,248429,248722#msg-248722 > exactly.. I noticed a few updates to SNI in the latest releases, do any of them take us closer to this? Thanks Richard From makailol7 at gmail.com Tue Apr 29 12:39:36 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 29 Apr 2014 18:09:36 +0530 Subject: The http output chain is empty. In-Reply-To: <66c78867d534e1876428f94e7dfd67c8.NginxMailingListEnglish@forum.nginx.org> References: <66c78867d534e1876428f94e7dfd67c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, I am not sure how to debug this issue, could someone help me with this? This is my Nginx configure argument. # nginx -V nginx version: nginx/1.5.12 built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-http_spdy_module --add-module=/rpmbuild/BUILD/nginx-1.5.12/ngx_http_substitutions_filter_module --add-module=/rpmbuild/BUILD/nginx-1.5.12/ngx_devel_kit-master --add-module=/rpmbuild/BUILD/nginx-1.5.12/lua-nginx-module-master Thanks, Makailol On Tue, Apr 29, 2014 at 1:23 PM, itpp2012 wrote: > Maybe related to http://trac.nginx.org/nginx/ticket/132 > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,249721,249725#msg-249725 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 29 12:43:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Apr 2014 16:43:47 +0400 Subject: The http output chain is empty. 
In-Reply-To: References: <66c78867d534e1876428f94e7dfd67c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140429124347.GT34696@mdounin.ru> Hello! On Tue, Apr 29, 2014 at 06:09:36PM +0530, Makailol Charls wrote: > Hi, > > I am not sure how to debug this issue, could someone help me with this? > > This is my Nginx configure argument. > > # nginx -V > nginx version: nginx/1.5.12 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-http_ssl_module --with-http_realip_module > --with-http_addition_module --with-http_sub_module --with-http_dav_module > --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_random_index_module > --with-http_secure_link_module --with-http_stub_status_module > --with-http_auth_request_module --with-mail --with-mail_ssl_module > --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g -pipe -Wall > -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-http_spdy_module > --add-module=/rpmbuild/BUILD/nginx-1.5.12/ngx_http_substitutions_filter_module > --add-module=/rpmbuild/BUILD/nginx-1.5.12/ngx_devel_kit-master > --add-module=/rpmbuild/BUILD/nginx-1.5.12/lua-nginx-module-master First of all, I would recommend you to try reproducing the problem without 3rd party modules. Some more debugging hints can be found here: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Apr 29 13:13:05 2014 From: nginx-forum at nginx.us (crespin) Date: Tue, 29 Apr 2014 09:13:05 -0400 Subject: Be aware of disconnection In-Reply-To: <20140429115004.GQ34696@mdounin.ru> References: <20140429115004.GQ34696@mdounin.ru> Message-ID: <6bc29c460243744788a67e358000516c.NginxMailingListEnglish@forum.nginx.org> > Install a cleanup handler on a connection or request pool. Take > a look at limit_conn module to see an example: > > http://hg.nginx.org/nginx/file/3a48775f1535/src/http/modules/ngx_http_ > limit_conn_module.c#l258 Thanks ! It's easy to implement. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249728,249734#msg-249734 From nginx-forum at nginx.us Tue Apr 29 13:50:42 2014 From: nginx-forum at nginx.us (crespin) Date: Tue, 29 Apr 2014 09:50:42 -0400 Subject: Asynchronously send a reply Message-ID: <674232f7aed708956162e52ec687161b.NginxMailingListEnglish@forum.nginx.org> Hello, I wrote a module that records requests that are processed within a cluster. Is it possible to set a handle to be regularly reminded and when the response is available to transmit. When the data are not yet available, the handle assumes the data transmission. 
Thanks and regards, yves Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249735,249735#msg-249735 From chima.s at gmail.com Tue Apr 29 15:46:40 2014 From: chima.s at gmail.com (chima s) Date: Tue, 29 Apr 2014 21:16:40 +0530 Subject: nginx reverse proxy hangs Message-ID: I've set up nginx as a reverse proxy for a jboss service and also serves Static Pages. Its stop working after keeping server idle for some time. During the time, when i call static page it serves.Also tried calling the jboss directly it works. Only the dynamic contents through reverse proxy doesn't work. Once i restart the nginx service, it will start work. Enabled the debug log, but did not find any error, except 499 in access log as client closing the connection as they did not see anything. Both the nginx and jboss not loaded, hardly gets request. Is this a bug? I'm using nginx 1.4.7-1~precise on Ubuntu 12.04 LTS Thanks for any help! -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Apr 29 16:18:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Apr 2014 20:18:50 +0400 Subject: nginx reverse proxy hangs In-Reply-To: References: Message-ID: <20140429161850.GW34696@mdounin.ru> Hello! On Tue, Apr 29, 2014 at 09:16:40PM +0530, chima s wrote: > I've set up nginx as a reverse proxy for a jboss service and also serves > Static Pages. Its stop working after keeping server idle for some time. > > During the time, when i call static page it serves.Also tried calling the > jboss directly it works. > > Only the dynamic contents through reverse proxy doesn't work. > > Once i restart the nginx service, it will start work. > > Enabled the debug log, but did not find any error, except 499 in access log > as client closing the connection as they did not see anything. > > Both the nginx and jboss not loaded, hardly gets request. > > Is this a bug? > > I'm using nginx 1.4.7-1~precise on Ubuntu 12.04 LTS http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Apr 29 17:50:18 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 29 Apr 2014 21:50:18 +0400 Subject: nginx logging with huge vhosts In-Reply-To: References: Message-ID: <20140429175018.GX34696@mdounin.ru> Hello! On Tue, Apr 29, 2014 at 04:22:45PM +0530, Joydeep Bakshi wrote: > Hello, > > How to log both access & error efficiently with nginx having huge vhost ? > Is there any CustomLog available to collect the combined_vhost information ? > > Is it good configuring separate access & error logs when huge vhost ? Sorry, I've failed to understand your questions. On the other hand, most likely there are answers in the documentation, see here: http://nginx.org/r/error_log http://nginx.org/r/access_log http://nginx.org/r/log_format -- Maxim Dounin http://nginx.org/ From agentzh at gmail.com Tue Apr 29 19:25:29 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 29 Apr 2014 12:25:29 -0700 Subject: problem with echo_before when proxying a server which sends gzipped content In-Reply-To: <799202C5-E1F5-4A1A-899B-0E9052AB7455@intolabs.net> References: <799202C5-E1F5-4A1A-899B-0E9052AB7455@intolabs.net> Message-ID: Hello! On Thu, Apr 3, 2014 at 8:14 AM, Carsten Germer wrote: > yes, it works with suppressing gzip between nginx and source-server with "proxy_set_header Accept-Encoding "deflate";" > Thanks a bunch! > I was aiming for a solution that preserves the gzip-compression between source and cache, but I'm caching long time, anyway. 
> >> add_before_body / add_after_body > I did look at those before but I'd still have to find a way to get the $arg_callback to the URIs and output them with echo for the whole solution. As the author of both the ngx_echo and ngx_xss modules, I'd recommend ngx_xss module for your JSONP use case: https://github.com/openresty/xss-nginx-module#readme It should be a bit faster and also safer for this very case. Regards, -agentzh From nginx-forum at nginx.us Tue Apr 29 20:27:36 2014 From: nginx-forum at nginx.us (nrahl) Date: Tue, 29 Apr 2014 16:27:36 -0400 Subject: Wordpress Multi-Site Converting Apache to Nginx Message-ID: I'm trying to move a working Apache2 config to Nginx and failing miserably. The site has a main application, a custom CMS and a wordpress multi-site with a few blogs. The CMS rules are setup and working, but I can't get the wordpress rules to 1. Work and 2. Get executed ebcause the CMS rules are being too greedy. The structure is: / - root is the CMS else, not wordpress /*Anything*/ - Pass to CMS controller if no other rules or file matches /*Anything*/*Something*/ - Pass to CMS controller if no other rules or file matches /wordpress/ - the real folder with wordpress files, should *not* be accessed directly /about/ - a fake (rewrite) directory for one of the multi-sites /blog/ a fake (rewrite) directory for one of the multi-sites This is the Apache2 rule set for WordPress that works: RewriteRule ^/(wp-(content|admin|includes).*) /wordpress/$1 [L] # MultiSites RewriteRule ^/about/(wp-(content|admin|includes).*) /wordpress/$1 [L] RewriteRule ^/about/(.*\.php)$ /wordpress/$1 [L] RewriteRule ^/about/(.*) /wordpress/index.php [L] RewriteRule ^/blog/(wp-(content|admin|includes).*) /wordpress/$1 [L] RewriteRule ^/blog/(.*\.php)$ /wordpress/$1 [L] RewriteRule ^/blog/(.*) /wordpress/index.php [L] RewriteRule ^/blog/tag/(.*)/ /wordpress/index.php [L] We hard code the first slug of the URL to prevent the CMS's regex rules from trying to grab it. On the new server, with my new config, I can go to /wordpress/ and get the master blog, and can login to the master blog. Some of the links want to go to a URL like /wp-admin/network/ and it should silently redirect to /wordpress/wp-admin/network/. That's what the first apache rule does. I've tried to create an Nginx rule: location ^~ ^/(wp-(content|admin|includes).*) { try_files /wordpress/$1 =404; } But I get a 404. So its matching the pattern but not loading the redirected content. I am also having trouble getting the URL aliases to do anything. /blog/ should silently be /wordpress/blog/, so that Wordpress sees it as if it were /wordpress/blog/. At present, I can't get to /wordpress/blog/ directly or through the alias /blog/. I've been trying things like: location ^~ ^/blog/([a-zA-Z0-9\-\_]+)/ { try_files /wordpress/$1.html /wordpress/$1/; } .... without any sucess. I also have some pattern matching functions that insist on matching URLs they aren't supposed to. The ^~ is supposed to make the rule a higher priority, but it does not work. I have rules such as: location ~ ^/([a-zA-Z0-9\-\_]+)/$ { try_files /cache/$1.html $uri $uri/ /Director.php?rt=$1; } that should match any single slug, like /something/ and pass it to a non-wordpress site UNLESS it is already in another rule, like /blog/. It shouldn't match /blog/ because /blog/ has its own rule that should take priority. Right now, that is not the case and the pattern location block is being too greedy. 
These are all the location blocks I have on the server: index Director.php index.php; location ^~ ^/(wp-(content|admin|includes).*) { try_files /wordpress/$1 =404; } location ^~ ^/blog/([a-zA-Z0-9\-\_]+)/ { try_files /wordpress/$1.html /wordpress/$1/; } # Attempt to match Slugs for Director location ~ ^/([a-zA-Z0-9\-\_]+)/$ { try_files /cache/$1.html $uri $uri/ /Director.php?rt=$1; } location ~ ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ { try_files $uri $uri/ /Director.php?rt=$1&action=$2; } location / { try_files $uri $uri/ /index.html; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { # Zero-day exploit defense. # http://forum.nginx.org/read.php?2,88845,page=3 try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php5-fpm.sock; } # ERROR HANDLING error_page 404 /404.html; # redirect server error pages to the static page /50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # deny access to .htaccess files location ~ /\.ht { deny all; } What am I doing wrong? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249743,249743#msg-249743 From agentzh at gmail.com Tue Apr 29 21:47:12 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 29 Apr 2014 14:47:12 -0700 Subject: [ANN] OpenResty 1.5.12.1 released Message-ID: Hi guys! I am happy to announce the new 1.5.12.1 release of the OpenResty bundle: http://openresty.org/#Download Special thanks go to all our contributors for making this happen! Our current focus has still been on improving both performance and stability. And most of our OpenResty sub-projects' git repositories have been moved over to the GitHub account "openresty": https://github.com/openresty/ Thanks Kindy Lin for acquiring this GitHub account :) Below is the complete change log for this release, as compared to the last formal release, 1.5.11.1: * upgraded the Nginx core to 1.5.12. * see the changes here: * upgraded LuaJIT to v2.1-20140423 (see https://github.com/openresty/luajit2/releases ). * bugfix: prevent adding side traces for stack checks. (Mike pall) this could cause internal assertion failure in the JIT compiler while replaying snapshots in very obscure cases: "lj_snap.c:497: lj_snap_replay: Assertion `ir->o == IR_CONV && ir->op2 == ((IRT_NUM<<5)|IRT_INT)' failed." * bugfix: fixed FOLD of string concatenations. (Mike Pall) this issue was reported by leafo and could lead to invalid string results in special cases while compiling string concatenations. * bugfix: FFI: fixed cdata equality comparison against strings and other Lua types. (Mike Pall) * bugfix: fixed top slot calculation for snapshots with continuations. (Mike Pall) this was a bug in snapshot generation, but it only surfaced with trace stitching. it could cause Lua stack overwrites in special cases. * bugfix: PPC: don't use mcrxr on PPE. (Mike Pall) * bugfix: prevent GC estimate miscalculation due to buffer growth. (Mike Pall) * bugfix: fixed the regression introduced by the previous fix for "reuse of SCEV results in FORL". (Mike Pall) this could cause internal assertion failure in the JIT compiler: "lj_record.c:68: rec_check_ir: Assertion `op2 >= nk' failed." * bugfix: fixed alias analysis for "table.len" vs. "table.clear". (Mike Pall) this could cause "table.len" to return incorrect values (nonzero values) after "table.clear" was performed. 
* bugfix: fixed the compatibility with DragonFlyBSD. thanks lhmwzy for the patch. * feature: allow non-scalar cdata to be compared for equality by address. (Mike Pall) * upgraded LuaUpstreamNginxModule to 0.02. * bugfix: upstream names did not support taking a port number. thanks magicleo for the report. * upgraded Redis2NginxModule to 0.11. * change: now we always ignore client aborts for collaborations with other modules like SrcacheNginxModule. thanks akamatgi for the report. * upgraded LuaNginxModule to 0.9.7. * bugfix: when lua_code_cache was off, cosocket:setkeepalive() might lead to segmentation faults. thanks Kelvin Peng for the report. * refactor: improved the error handling and logging in the Lua code loader and closure factory. * change: added stronger assertions to the stream-typed cosocket implementation. * optimize: we no longer call "ngx_pfree()" in our own "pcre_free" hook. * optimize: we no longer clear the pointer "ctx->user_co_ctx" in "ngx_http_lua_reset_ctx". * upgraded EchoNginxModule to 0.53. * bugfix: use of empty arguments after the "-n" option of the echo directive (and its friends) might cause subsequent arguments to get discarded. thanks Lice Pan for the report and fix. * upgraded FormInputNginxModule to 0.08. * bugfix: segmentation fault might happen when "set_form_input_multi" was used while no proper "Content-Type" request header was given. * upgraded LuaRestyWebSocketLibrary to 0.03. * optimize: added a minor optimization in the recv_frame() method. thanks yurnerola for the catch. * upgraded LuaRestyCoreLibrary to 0.0.6. * optimize: ngx.re.sub/ngx.re.gsub: now we avoid constructing new Lua strings for the regex cache keys, which gives 5% speedup for trivial use cases. * optimize: ngx.re.match/ngx.re.find: avoided constructing a new Lua string for the regex cache key by switching over to a cascaded 2-level hash table, which gives 22% speedup for simple use cases. * upgraded LuaRestyLockLibrary to 0.03. * bugfix: prevented using cdata directly as table keys. * upgraded LuaRestyStringLibrary to 0.09. * bugfix: avoided using the "module" builtin function to define lua modules. thanks lhmwzy for the original patch. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1005012 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Just a side note: I recently added the "Profiling page" to the openresty.org site where you can find useful Flame Graph tools for analyzing and optimizing OpenResty web apps' online performance (as well as ordinary nginx and even other C/C++ user processes): http://openresty.org/#Profiling We've been heavily relying on these tools to get our Lua CDN and Lua WAF faster and faster in the past year at CloudFlare. Have fun! 
-agentzh From moseleymark at gmail.com Tue Apr 29 22:25:55 2014 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 29 Apr 2014 15:25:55 -0700 Subject: Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac Message-ID: I'm running into a lot of the same error as was reported in the forum at: http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004385.html > SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac I've got an nginx server doing front-end SSL, with the upstream also over SSL and also nginx (fronting Apache). They're all running 1.5.13 (all Precise 64-bit), so I can goof with various options like ssl_buffer_size. These are running SSL-enabled web sites for my customers. I'm curious if there is any workaround for this besides patching openssl, as mentioned a couple of weeks ago in http://trac.nginx.org/nginx/ticket/215 In the wake of heartbleed, I'm not super excited about rolling my own openssl/libssl packages (and straying from easy updates), but I also need to put a lid on these SSL errors. I've also not tested yet to verify that the openssl patch fixes my issue (wanted to check here first). Like the forum notes, they seem to happen just in larger files (I've not dug extensively, but every one that I've seen is usually at least a 500k file). I've also noticed that if I request *just* the file, it seems to succeed every time. It's only when it's downloading a number of other files that it seems to occur. On a lark, I tried turning off front-end keepalives but that didn't make any difference. I've been playing with the ssl_buffer_size on both the frontend (which is where the errors show up) and the upstream servers to see if there was a magic combination, but no combo makes things happy. Am I doomed to patch openssl? Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Apr 29 22:41:34 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 29 Apr 2014 23:41:34 +0100 Subject: Wordpress Multi-Site Converting Apache to Nginx In-Reply-To: References: Message-ID: <20140429224134.GQ16942@daoine.org> On Tue, Apr 29, 2014 at 04:27:36PM -0400, nrahl wrote: Hi there, > Some of the links want to go to a URL like /wp-admin/network/ and it should > silently redirect to /wordpress/wp-admin/network/. That's what the first > apache rule does. I've tried to create an Nginx rule: > > location ^~ ^/(wp-(content|admin|includes).*) { > try_files /wordpress/$1 =404; > } > > But I get a 404. So its matching the pattern but not loading the redirected > content. That's not what is happening. "location ^~" is a prefix match, not a regex match, so this location is probably not being used by anything. > I also have some pattern matching functions that insist on matching URLs > they aren't supposed to. The ^~ is supposed to make the rule a higher > priority, but it does not work. That's not what ^~ means. > These are all the location blocks I have on the server: > > index Director.php index.php; > > location ^~ ^/(wp-(content|admin|includes).*) { That probably won't match any request. > location ^~ ^/blog/([a-zA-Z0-9\-\_]+)/ { That probably won't match any request. > # Attempt to match Slugs for Director > location ~ ^/([a-zA-Z0-9\-\_]+)/$ { That should match requests of the form /XXX/. > location ~ ^/([a-zA-Z0-9\-\_]+)/([a-zA-Z0-9\-\_]+)/$ { That should match requests of the form /XXX/YYY/. 
> location / { That should match any request that does not otherwise match a location. > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > location ~ \.php$ { That should match a request of the form /XXX.php. > What am I doing wrong? Misunderstanding what "location" does? http://nginx.org/r/location What request do you make? Which one of the above locations does the request match? What output do you expect? What output do you get? (And what do the logs say?) f -- Francis Daly francis at daoine.org From luky-37 at hotmail.com Tue Apr 29 23:36:10 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 30 Apr 2014 01:36:10 +0200 Subject: Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac In-Reply-To: References: Message-ID: Hi Mark, > I'm running into a lot of the same error as was reported in the forum > at: http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004385.html > >> SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or > bad record mac > > I've got an nginx server doing front-end SSL, with the upstream also > over SSL and also nginx (fronting Apache). They're all running 1.5.13 > (all Precise 64-bit), so I can goof with various options like > ssl_buffer_size. These are running SSL-enabled web sites for my > customers. > > I'm curious if there is any workaround for this besides patching > openssl, as mentioned a couple of weeks ago > in http://trac.nginx.org/nginx/ticket/215 A patch was committed to openssl [1] and backported to the openssl-1.0.1 stable branch [2], meaning that the next openssl release (1.0.1h) will contain the fix. You can: - cherry-pick the fix and apply it on 1.0.1g - use the 1.0.1 stable git branch - asking your openssl package maintainer to backport the fix (its security ? relevant, see CVE-2010-5298 [3]) The fix is already in OpenBSD [4], Debian and Ubuntu will probably ship the patch soon, also see [5] and [6]. Regards, Lukas [1] http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=94d1f4b0f3d262edf1cf7023a01d5404945035d5 [2] http://git.openssl.org/gitweb/?p=openssl.git;a=commitdiff;h=725c5f1ad393a7bc344348d0ec7c268aaf2700a7 [3] http://web.nvd.nist.gov/view/vuln/detail?vulnId=CVE-2010-5298 [4] http://ftp.openbsd.org/pub/OpenBSD/patches/5.4/common/008_openssl.patch [5] https://www.debian.org/security/2014/dsa-2908 [6] http://people.canonical.com/~ubuntu-security/cve/2010/CVE-2010-5298.html From moseleymark at gmail.com Wed Apr 30 00:20:43 2014 From: moseleymark at gmail.com (Mark Moseley) Date: Tue, 29 Apr 2014 17:20:43 -0700 Subject: Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac In-Reply-To: References: Message-ID: On Tue, Apr 29, 2014 at 4:36 PM, Lukas Tribus wrote: > Hi Mark, > > > > I'm running into a lot of the same error as was reported in the forum > > at: > http://mailman.nginx.org/pipermail/nginx-devel/2013-October/004385.html > > > >> SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or > > bad record mac > > > > I've got an nginx server doing front-end SSL, with the upstream also > > over SSL and also nginx (fronting Apache). They're all running 1.5.13 > > (all Precise 64-bit), so I can goof with various options like > > ssl_buffer_size. These are running SSL-enabled web sites for my > > customers. 
> > > > I'm curious if there is any workaround for this besides patching > > openssl, as mentioned a couple of weeks ago > > in http://trac.nginx.org/nginx/ticket/215 > > > A patch was committed to openssl [1] and backported to the openssl-1.0.1 > stable branch [2], meaning that the next openssl release (1.0.1h) will > contain the fix. > > You can: > - cherry-pick the fix and apply it on 1.0.1g > - use the 1.0.1 stable git branch > - asking your openssl package maintainer to backport the fix (its security > relevant, see CVE-2010-5298 [3]) > > The fix is already in OpenBSD [4], Debian and Ubuntu will probably ship the > patch soon, also see [5] and [6]. > > > Oh, cool, that's good news that it's upstream then. Getting the patch to apply is a piece of cake. I was more worried about what would happen for the next libssl update. Hopefully Ubuntu will pick that update up. Thanks! -------------- next part -------------- An HTML attachment was scrubbed... URL: From nrahl at aquagear.com Wed Apr 30 03:33:57 2014 From: nrahl at aquagear.com (Nick Rahl) Date: Tue, 29 Apr 2014 23:33:57 -0400 Subject: Wordpress Multi-Site Converting Apache to Nginx Message-ID: <53606F25.9070905@aquagear.com> Side Note: It appears the forum is down.http://forum.nginx.org says, "The database connection failed. Please check your database configuration in include/db/config.php. If the configuration is okay, check if the database server is running." > That's not what ^~ means. The manual says, "If the longest matching prefix location has the "|^~|" modifier then regular expressions are not checked". Which means that a ^~ location will have a higher priority than a regular expression rule, right? > What request do you make? > Which one of the above locations does the request match? > What output do you expect? > What output do you get? First rule: location ^~ /wordpress/ { try_files $uri /wordpress/index.php =404; } What I intend: "If the URL starts with "/wordpress/", then do not check any regular expression rules. Instead, load the requested URI directly." What happens: when visiting "wordpress/" I get a blank page. The Log: 2014/04/30 02:39:06 [debug] 27354#0: post event 00007F68E8FD9010 2014/04/30 02:39:06 [debug] 27354#0: delete posted event 00007F68E8FD9010 2014/04/30 02:39:06 [debug] 27354#0: accept on 0.0.0.0:80, ready: 1 2014/04/30 02:39:06 [debug] 27354#0: posix_memalign: 0000000000E79F90:256 @16 2014/04/30 02:39:06 [debug] 27354#0: *5 accept: [REMOVED MY IP] fd:9 2014/04/30 02:39:06 [debug] 27354#0: *5 event timer add: 9: 20000:1398825566351 2014/04/30 02:39:06 [debug] 27354#0: *5 reusable connection: 1 2014/04/30 02:39:06 [debug] 27354#0: *5 epoll add event: fd:9 op:1 ev:80000001 2014/04/30 02:39:06 [debug] 27354#0: accept() not ready (11: Resource temporarily unavailable) That's it, no error. Second Problem Rule: I have updated this rule to be a regex rule: llocation ~ ^/(wp-(content|admin|includes).*) { try_files /wordpress/$1 /wordpress/$1/index.php =404; } What I intend: "If /wp-content followed by anything, or /wp-admin followed by anything, or /wp-includes followed by anything: Silently (no browser redirect) show the same URL as if a /wordpress/ directory were inserted in front. ie. "/wp-admin/bob" becomes "/wordpress/wp-admin/bob/", but does not do a redirect. If a directory, load index.php inside the given directoy." What happens: I visit /wp-admin/ and get wp-admin.php, dumped as plain text. It's not passing to PHP. 
But I have a PHP block: location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php5-fpm.sock; } So it should be passing to PHP? -------------- next part -------------- An HTML attachment was scrubbed... URL: From joydeep.bakshi at netzrezepte.de Wed Apr 30 04:56:09 2014 From: joydeep.bakshi at netzrezepte.de (Joydeep Bakshi) Date: Wed, 30 Apr 2014 10:26:09 +0530 Subject: nginx logging with huge vhosts In-Reply-To: <20140429175018.GX34696@mdounin.ru> References: <20140429175018.GX34696@mdounin.ru> Message-ID: Hello Maxim, Presently I have configured separate access & error log for each & every nginx vhost. I wonder if there is other alternative which can log all vhosts into a common access & error log and later split them according to vhost for easy debugging. Thanks On Tue, Apr 29, 2014 at 11:20 PM, Maxim Dounin wrote: > Hello! > > On Tue, Apr 29, 2014 at 04:22:45PM +0530, Joydeep Bakshi wrote: > > > Hello, > > > > How to log both access & error efficiently with nginx having huge vhost > ? > > Is there any CustomLog available to collect the combined_vhost > information ? > > > > Is it good configuring separate access & error logs when huge vhost ? > > Sorry, I've failed to understand your questions. > On the other hand, most likely there are answers in the > documentation, see here: > > http://nginx.org/r/error_log > http://nginx.org/r/access_log > http://nginx.org/r/log_format > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Wed Apr 30 07:04:11 2014 From: nginx-forum at nginx.us (kay) Date: Wed, 30 Apr 2014 03:04:11 -0400 Subject: nginx rewrites $request_method on error Message-ID: <184105e4333691b715b382cb27860f9f.NginxMailingListEnglish@forum.nginx.org> user nginx; worker_processes 1; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; default_type application/octet-stream; # LET'S LOG $request_method # log_format main '$request_method $remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; #HERE IS WHAT MAKES A BUG# error_page 405 /error.html; access_log /var/log/nginx/access.log main; sendfile on; #tcp_nopush on; keepalive_timeout 65; #gzip on; server { listen 80; if ($request_method != GET) { return 405; } location / { root /etc/nginx/www; } } } What we get: 1) curl -X POST localhost log: GET 127.0.0.1 - - [30/Apr/2014:10:55:03 +0400] "POST / HTTP/1.1" 405 0 "-" "curl/7.26.0" "-" 2) curl -X TRACE localhost log: GET 127.0.0.1 - - [30/Apr/2014:10:55:10 +0400] "TRACE / HTTP/1.1" 405 0 "-" "curl/7.26.0" "-" 3) curl -X PUT localhost log: GET 127.0.0.1 - - [30/Apr/2014:10:55:18 +0400] "PUT / HTTP/1.1" 405 0 "-" "curl/7.26.0" "-" Let's disable "if": 1) curl -X POST localhost GET 127.0.0.1 - - [30/Apr/2014:10:58:48 +0400] "POST / HTTP/1.1" 405 0 "-" "curl/7.26.0" "-" 2) curl -X TRACE localhost GET 127.0.0.1 - - [30/Apr/2014:10:59:12 +0400] "TRACE / HTTP/1.1" 405 0 "-" "curl/7.26.0" "-" 3) curl -X PUT localhost GET 127.0.0.1 - - [30/Apr/2014:10:59:38 +0400] "PUT / HTTP/1.1" 405 0 "-" "curl/7.26.0" "-" Let's disable "error_page 405 /error.html;" 1) curl -X POST localhost POST 127.0.0.1 - - [30/Apr/2014:11:00:43 +0400] "POST / HTTP/1.1" 405 172 "-" "curl/7.26.0" "-" 2) curl -X TRACE localhost TRACE 127.0.0.1 - - [30/Apr/2014:11:01:02 +0400] "TRACE / HTTP/1.1" 405 172 "-" "curl/7.26.0" "-" 3) curl -X PUT localhost PUT 127.0.0.1 - - [30/Apr/2014:11:01:16 +0400] "PUT / HTTP/1.1" 405 172 "-" "curl/7.26.0" "-" So it seems that error_page 405 /error.html rewrites $request_method Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,249754#msg-249754 From francis at daoine.org Wed Apr 30 07:15:37 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 30 Apr 2014 08:15:37 +0100 Subject: Wordpress Multi-Site Converting Apache to Nginx In-Reply-To: <53606F25.9070905@aquagear.com> References: <53606F25.9070905@aquagear.com> Message-ID: <20140430071537.GR16942@daoine.org> On Tue, Apr 29, 2014 at 11:33:57PM -0400, Nick Rahl wrote: Hi there, > >That's not what ^~ means. > > The manual says, "If the longest matching prefix location has the > "|^~|" modifier then regular expressions are not checked". Which > means that a ^~ location will have a higher priority than a regular > expression rule, right? Correct. > location ^~ /wordpress/ { > try_files $uri /wordpress/index.php =404; > } > > What I intend: "If the URL starts with "/wordpress/", then do not > check any regular expression rules. Instead, load the requested URI > directly." Ok, so long as your idea of "load the requested uri" matches nginx's. According to http://nginx.org/r/try_files, that should probably try to send you the content of /usr/local/nginx/html/wordpress/index.php or 404. > What happens: when visiting "wordpress/" I get a blank page. What does curl -v http://whatever/wordpress/ show? 
Is your blank page a http 200 with no content, or a http 200 with some content that the browser shows as blank, or some other http response? > I have updated this rule to be a regex rule: > > llocation ~ ^/(wp-(content|admin|includes).*) { > try_files /wordpress/$1 /wordpress/$1/index.php =404; > } > > > What I intend: "If /wp-content followed by anything, or /wp-admin > followed by anything, or /wp-includes followed by anything: Silently > (no browser redirect) show the same URL as if a /wordpress/ > directory were inserted in front. ie. "/wp-admin/bob" becomes > "/wordpress/wp-admin/bob/", but does not do a redirect. If a > directory, load index.php inside the given directoy." try_files probably doesn't do what you think it does. The final argument does an internal rewrite. The other arguments are served as files in the current context, unless they end in "/" in which case the "index" value is involved. > What happens: I visit /wp-admin/ and get wp-admin.php, dumped as plain text. curl -v http://whatever/wp-admin/ The logs will show which location is used. Can you see which file-on-the-filesystem wp-admin.php is returned? (I don't see anything in your provided config which would associate wp-admin.php with a request for /wp-admin/. I would have expected the file /usr/local/nginx/html/wordpress/wp-admin/index.php to be returned.) > It's not passing to PHP. The initial request will have a "location" chosen. A subrequest will have a "location" chosen. "just serve this file" will not have a "location" chosen. > But I have a PHP block: > > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_index index.php; For info: those two lines probably don't do anything useful here. > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass unix:/var/run/php5-fpm.sock; > } > > So it should be passing to PHP? Not unless there's a request or subrequest that best-matches this location. 
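For what it's worth, one fairly direct way to translate the original Apache rules is to use server-level rewrite directives plus a "^~" prefix location that shields the /wordpress/ tree from the CMS regex locations. The following is only a rough, untested sketch: it assumes WordPress really lives in $document_root/wordpress/, that root is set at server{} level, and that PHP-FPM listens on the same unix socket as in the config posted earlier (the nested \.php$ block is needed because "^~" stops the server-level regex locations, including the existing \.php$ one, from being consulted):

    # server{} level: mirror the Apache [L] rules
    rewrite ^/(wp-(content|admin|includes).*)$              /wordpress/$1        last;
    rewrite ^/(about|blog)/(wp-(content|admin|includes).*)$ /wordpress/$2        last;
    rewrite ^/(about|blog)/(.*\.php)$                       /wordpress/$2        last;
    rewrite ^/(about|blog)/(.*)$                            /wordpress/index.php last;

    # keep the CMS slug regexes away from anything under /wordpress/
    location ^~ /wordpress/ {
        index index.php;
        try_files $uri $uri/ /wordpress/index.php?$args;

        location ~ \.php$ {
            try_files $uri =404;
            include fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            fastcgi_pass unix:/var/run/php5-fpm.sock;
        }
    }

Server-level rewrites run before location selection, so a request for /wp-admin/ becomes /wordpress/wp-admin/, falls into the "^~ /wordpress/" block, and its index.php ends up in the nested \.php$ location instead of being served as plain text. WordPress itself still sees the original /blog/... path, because fastcgi_params passes $request_uri as REQUEST_URI.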
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Apr 30 07:21:05 2014 From: nginx-forum at nginx.us (kay) Date: Wed, 30 Apr 2014 03:21:05 -0400 Subject: nginx rewrites $request_method on error In-Reply-To: <184105e4333691b715b382cb27860f9f.NginxMailingListEnglish@forum.nginx.org> References: <184105e4333691b715b382cb27860f9f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <44972b413aa31ff328e8da3bc5e121d5.NginxMailingListEnglish@forum.nginx.org> nginx from official repository: nginx -V nginx version: nginx/1.6.0 built by gcc 4.7.2 (Debian 4.7.2-5) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249754,249756#msg-249756 From luky-37 at hotmail.com Wed Apr 30 07:55:41 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 30 Apr 2014 09:55:41 +0200 Subject: Issue from forum: SSL: error:1408F119:SSL routines:SSL3_GET_RECORD:decryption failed or bad record mac In-Reply-To: References: , , Message-ID: Hi, >> The fix is already in OpenBSD [4], Debian and Ubuntu will probably ship the? >> patch soon, also see [5] and [6].? >? > Oh, cool, that's good news that it's upstream then. Getting the patch? > to apply is a piece of cake. I was more worried about what would happen? > for the next libssl update. Hopefully Ubuntu will pick that update up.? > Thanks!? FYI, debian already ships this since April, 17th: https://lists.debian.org/debian-security-announce/2014/msg00083.html Ubuntu not yet, as it seems. Regards, Lukas From contact at jpluscplusm.com Wed Apr 30 08:25:07 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 30 Apr 2014 09:25:07 +0100 Subject: nginx logging with huge vhosts In-Reply-To: References: <20140429175018.GX34696@mdounin.ru> Message-ID: On 30 Apr 2014 05:56, "Joydeep Bakshi" wrote: > > Hello Maxim, > > Presently I have configured separate access & error log for each & every nginx vhost. I wonder if there is other alternative which can log all vhosts into a common access & error log and later split them according to vhost for easy debugging. IIRC there are a variety of projects which do this, with differing degrees of completeness, robustness and complexity. Google should help - I don't have any tool names offhand because it's not the early 2000s any more and things have moved on :-) Logstash, Heka and other centralised logging systems are your friend. 
None of these, however, are part of nginx. If you need to log to a deterministic, per-vhost filename, you might like to look at using a variable in your access_log declaration. Combined with only specifying it once at the http{} level of your config, this can reduce config complexity at a (slight) runtime cost. Maxim has already provided the links to the documentation describing how to do this. HTH, J PS I just remembered: AWStats. That's one tool that can do log splitting or reporting - I forget which. I wouldn't use it these days though. Better solutions exist. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Apr 30 10:14:45 2014 From: nginx-forum at nginx.us (mex) Date: Wed, 30 Apr 2014 06:14:45 -0400 Subject: nginx #1 on the Top 1000 - list (w3techs) Message-ID: <2fb423e8d7a0589e7abd48458f768e1a.NginxMailingListEnglish@forum.nginx.org> According to w3techs, nginx is now #1 on the top 1000 list of websites, and according to some perf tests we did on our side, 1.6.0 seems to be 10% faster than 1.4. WELL DONE, nginx team, and thanks for all the support! http://w3techs.com/technologies/cross/web_server/ranking The lyrics are somewhat unrelated, but I found the dancing part amusing :D https://www.youtube.com/watch?v=7xO-yEaiFoQ regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,249761,249761#msg-249761 From mdounin at mdounin.ru Wed Apr 30 14:09:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 30 Apr 2014 18:09:23 +0400 Subject: nginx rewrites $request_method on error In-Reply-To: <184105e4333691b715b382cb27860f9f.NginxMailingListEnglish@forum.nginx.org> References: <184105e4333691b715b382cb27860f9f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140430140923.GF34696@mdounin.ru> Hello! On Wed, Apr 30, 2014 at 03:04:11AM -0400, kay wrote: [...] > So it seems that error_page 405 /error.html rewrites $request_method Yes, redirecting a request to an error page implies changing the request method to GET (unless it's HEAD). This is required to properly return the error page. Much like with the URI change, this can be avoided by using a redirect to a named location instead. -- Maxim Dounin http://nginx.org/
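For illustration, a minimal, untested sketch of the named-location approach, adapted from the configuration posted earlier in this thread (the @error405 name and the paths are just placeholders):

    server {
        listen 80;

        # a named location instead of "/error.html"
        error_page 405 @error405;

        if ($request_method != GET) {
            return 405;
        }

        location / {
            root /etc/nginx/www;
        }

        location @error405 {
            # no internal redirect to a normal URI happens here, so
            # $request_method keeps its original value (POST, PUT, TRACE, ...)
            # and the "$request_method" in the log_format above is logged unchanged
            root /etc/nginx/www;
            try_files /error.html =405;
        }
    }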