From roberto at unbit.it Wed Jan 1 06:18:16 2014 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 1 Jan 2014 07:18:16 +0100 Subject: Hiring a dev: nginx+interchange In-Reply-To: References: Message-ID: <9ebbe61429e25362cd558914f3f8d604.squirrel@manage.unbit.it> > Hello, I use a perl framework called interchange (icdevgroup.org) and > I've been using a perl module called Interchange::Link to interface > interchange to apache: > > https://github.com/interchange/interchange/blob/master/dist/src/mod_perl2/Interchange/Link.pm > > I'd like to switch from apache to nginx and I need to hire someone to > help me interface interchange to nginx. I don't need the interface to > include all of the features from Interchange::Link. > > - Grant > > _ Hi Grant, embedding blocking code in nginx is not allowed (or better: it is terribly wrong as in any other non-blocking engine) You should invest on writing a PSGI adapter so you can use a full application server (like Starman or uWSGI) behind nginx. -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Wed Jan 1 15:54:13 2014 From: nginx-forum at nginx.us (linuxr00lz2013) Date: Wed, 01 Jan 2014 10:54:13 -0500 Subject: How do I disable DNS Caching and DNS Reverse Lookup in Nginx ? In-Reply-To: <20131230231545.GZ95113@mdounin.ru> References: <20131230231545.GZ95113@mdounin.ru> Message-ID: <6b5ef756829d64f63c050cd264857f52.NginxMailingListEnglish@forum.nginx.org> Hello Happy New year and thank you for the reply! I dont think thats the cause, because I tried clearing the cache and it was still stlow! Is there a special directive that I have to use to get it to stop caching? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245904,245945#msg-245945 From m.herrmann at ibumedia.de Wed Jan 1 16:43:19 2014 From: m.herrmann at ibumedia.de (Moritz Herrmann) Date: Wed, 01 Jan 2014 17:43:19 +0100 Subject: Rewriteing awstats to base path Message-ID: <52C445A7.2090307@ibumedia.de> Hi, I like to beautify my urls used for awstats. Currently I have to use the following path for awstats > /cgi-bin/awstats.pl?config=www.example.com that works as expected. For easier handling for our users I like to shorten the url to /www.example.com I tried a simple rewrite like > rewrite ^/www.example.com$ /cgi-bin/awstats.pl?config=www.example.com; but I'll get the following error > 2013/12/31 23:43:20 [error] 23736#0: *1 open() "/srv/awstats/dist/wwwroot/awstats.pl" failed (2: No such file or directory), client: ::ff:123.123.123.123, server: awstats.example.com, request: "GET /awstats.pl?config=www.example.com&framename=mainleft HTTP/1.1", host: "awstats.example.com", referrer: "http://awstats.example.com/www.example.com" here is the full server block > server { > listen [::]:80; > server_name awstats.example.com awstats7.example.com; > > root /srv/awstats/dist/wwwroot; > > auth_basic "Restricted"; > auth_basic_user_file /srv/awstats/.htpasswd; > > location ~ ^/cgi-bin/awstats\.pl { > include fastcgi_params; > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_param SCRIPT_FILENAME /srv/awstats/dist/tools/nginx/awstats-fcgi.php; > fastcgi_param X_SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_param X_SCRIPT_NAME $fastcgi_script_name; > } > > rewrite ^/awstats/(.*)$ /cgi-bin/$1; > rewrite ^/www.example.com$ /cgi-bin/awstats.pl?config=www.example.com; > > location = /robots.txt { return 200 "User-agent: *\nDisallow: /\n"; } > } Desired url http://awstats.example.com/www.example.com Probably I didn't fully understand the rewrite rules. Hopefully you can help! Best regards moe From mdounin at mdounin.ru Thu Jan 2 02:07:01 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 2 Jan 2014 06:07:01 +0400 Subject: How do I disable DNS Caching and DNS Reverse Lookup in Nginx ? In-Reply-To: <6b5ef756829d64f63c050cd264857f52.NginxMailingListEnglish@forum.nginx.org> References: <20131230231545.GZ95113@mdounin.ru> <6b5ef756829d64f63c050cd264857f52.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140102020701.GB95113@mdounin.ru> Hello! On Wed, Jan 01, 2014 at 10:54:13AM -0500, linuxr00lz2013 wrote: > Hello Happy New year and thank you for the reply! > > I dont think thats the cause, because I tried clearing the cache and it was > still stlow! Is there a special directive that I have to use to get it to > stop caching? Unfortunately, there is no magic directive "do it all right". There is no DNS caching in nginx which survives configuration reload, and there are no reverse DNS lookups in http module at all. Unfortunately, you don't show us real configuration and real logs, so basically nobody here can help with debugging, but general tips are: 1) Make sure you are testing it right. This basically means you'll have to forget about browsers as they are too complex to be usable as testing tools and use telnet or curl for basic tests. And make sure to watch logs while doing tests. 2) Make sure you've configured it right. Make sure to understand what you write in your configuration, make sure to test what you wrote ("nginx -t" is your friend, as well as error log), and avoid stupid mistakes like infinite loops. See above for recommended testing tools. 3) Avoid descriptive terms like "really", "painfully", "awfully" - measure instead. If a request takes 60 milliseconds - it may be either really fast or really slow, depeding on use case. Moreover, exact numbers are usually help a lot with debugging. If something takes 60 seconds - it usually means that there is 60 second timeout somewhere (one of configure upstream servers can't be reached?). Happy New Year and happy debugging! -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jan 2 04:44:24 2014 From: nginx-forum at nginx.us (humank) Date: Wed, 01 Jan 2014 23:44:24 -0500 Subject: How do i get the request body ? Message-ID: Hello guys, I'm developing a nginx module, the intent is to get the request body, then write some response depends on what request body is. I've called the method ngx_http_read_client_request_body (r, ngx_http_myModule_handler); Since this code, i want to get the real request body in ngx_http_myModule_handler() Here are my codes ... void ngx_http_myModule_handler(ngx_http_request_t *r) { ngx_http_finalize_request(r, NGX_DONE); if(!(r->request_body->bufs == NULL)){ ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "request is not empty."); } } the questions is , how can i get the r->request_body->bufs to char * ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245952,245952#msg-245952 From nginx-forum at nginx.us Thu Jan 2 14:29:01 2014 From: nginx-forum at nginx.us (goversation) Date: Thu, 02 Jan 2014 09:29:01 -0500 Subject: nginx rewrite configuration Message-ID: <778b42cccec552ecc399e481296ac27c.NginxMailingListEnglish@forum.nginx.org> hi! i'm newbie so having a hard time! I have a question about rewrite configure in nginx! RULE is http://URL/[option]/http://URL . I mean... for example, if request is http://aaa.net/25X25/http://bbb.net/ccc.jpg , rewrite is /25X25/bbb.net.jpg should i use regular expression? please help me! thank you :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245954,245954#msg-245954 From s.wiese at trabia.net Thu Jan 2 16:59:40 2014 From: s.wiese at trabia.net (Sven Wiese) Date: Thu, 02 Jan 2014 18:59:40 +0200 Subject: Nginx not starting with named pipe (fifo) for access_log Message-ID: <52C59AFC.3050404@trabia.net> Heya, there seems to be a issue with Nginx and named pipes (fifo). Tested nginx versions: - 1.1.19 (Ubuntu 12.04.3 LTS amd64) - 1.4.4 (Ubuntu 12.04.3 LTS amd64 with PPA https://launchpad.net/~nginx/+archive/stable ) - 1.4.4 (CentOS 6.5 amd64 with repo http://nginx.org/packages/centos/$releasever/$basearch/ ) Issue description: As soon as a named pipe is defined as access_log, nginx refuses to start. It just stales during the start and that's it. The only you can do is kill the process. The named pipe has been created with: mkfifo -m 0666 /var/log/test.log I have tested 2 versions of Nginx (1.1.19, 1.4.4) using Ubuntus repository. Different locations and different permissions of the named pipe have been tried, didn't help. Other programs work just fine with the named pipe, only Nginx seems to refuse it. Configuration: --snip-- http { [...] access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; access_log /var/log/test.log; [...] } --snap-- strace output: --snip-- [...] open("/var/log/nginx/access.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 5 fcntl(5, F_SETFD, FD_CLOEXEC) = 0 open("/var/log/nginx/error.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 6 fcntl(6, F_SETFD, FD_CLOEXEC) = 0 open("/var/log/test.log", O_WRONLY|O_CREAT|O_APPEND, 0644 [ CTRL+C ] --snap-- Did anyone else experience such behavior? I tried searching for it but couldn't find anything, only people seeming to successfully use named pipes (eg. in conjunction with syslog-ng). Cheers, Sven -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3872 bytes Desc: S/MIME Cryptographic Signature URL: From appa at perusio.net Thu Jan 2 19:18:47 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 2 Jan 2014 20:18:47 +0100 Subject: How to delete cache based on expires headers? In-Reply-To: <1388377677.36102.YahooMailNeo@web140402.mail.bf1.yahoo.com> References: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> <1388377677.36102.YahooMailNeo@web140402.mail.bf1.yahoo.com> Message-ID: Yes. Nginx will obey the Cache-Control/Expire headers. It won't delete, but it will refresh the files so that the served content is fresh. So it is as if the files were deleted. AFAIK deletion happens more often when the file is not accessed for given time specified through the inactive parameter of the proxy_cache_path/fastcgi_cache_path directives. ----appa On Mon, Dec 30, 2013 at 5:27 AM, Indo Php wrote: > Hi > > Is that means that nginx will put the files based on the upstream expire > headers? After that nginx will delete the cache files? > > > > > On Tuesday, December 24, 2013 10:28 PM, Ant?nio P. P. Almeida < > appa at perusio.net> wrote: > Why you want to do this? nginx can manage expiration/cache-control > headers all by itself. > > As soon as the defined max-age is set it returns a upstream status of > EXPIRED until it fetches a fresh > page from upstream. > > Deleting won't buy you anything in terms of content freshness. > > > > > > ----appa > > > > On Tue, Dec 24, 2013 at 3:57 AM, Indo Php wrote: > > Hello.. > > Can somebody help me on this? > > Thank you before > > > On Thursday, December 19, 2013 11:21 AM, Indo Php > wrote: > Hi > > I'm using proxy_cache to mirror my files with the configuration below > > proxy_cache_path /var/cache/nginx/image levels=1:2 keys_zone=one:10m > inactive=7d max_size=100g; > > Our backend server has the expires header set to 600secs > > Is that posibble for us to also delete the cache files located at /var/cache/nginx/image > depends on the backend expire header? > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 2 20:54:48 2014 From: nginx-forum at nginx.us (theotow) Date: Thu, 02 Jan 2014 15:54:48 -0500 Subject: dynamic rate limiting per ip Message-ID: Hello Folks, i have some setup with multiple server and i offer downloads for the users, in the case my servers bandwidth is overloaded i want the people to be able to start the download but with limited rate so the don't have to wait in some kind of queue till the get there downloadlink. As soon as some slots frees the person highest in the queue download rate should increase to the max. Any Ideas if this is possible with the limit_rate of the http core module and lua? If it would be possible to make 2 zone dicts where the ips of the the slow and fast connections are in. And if someone ratelimit is dropped his ip gets removed from the slow dict and added to the fast dict. https://github.com/chaoslawful/lua-nginx-module#ngxshareddict Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245957,245957#msg-245957 From nginx-forum at nginx.us Thu Jan 2 22:03:36 2014 From: nginx-forum at nginx.us (nginx_developer) Date: Thu, 02 Jan 2014 17:03:36 -0500 Subject: OCSP validation of client certificates Message-ID: <806156b9262dfcabe5614b4363523683.NginxMailingListEnglish@forum.nginx.org> Hi Forum, I see that nGinx supports configuration to perform OCSP validation of server side certificates and staple the validation response to the client. My question is whether nGinx supports OCSP validation of client presented certificates. I seem to hit a dead end with documentation for that question. Would be helpful if someone could answer this. Thanks in advance for your time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245958,245958#msg-245958 From electronixtar at gmail.com Fri Jan 3 00:49:20 2014 From: electronixtar at gmail.com (est) Date: Fri, 3 Jan 2014 08:49:20 +0800 Subject: How would nginx record client IP address under TCP Multipath? Message-ID: Hello, Since iOS7 supports TCP Multipath now, I think more and more devices will start support it. But TCP Multipath allows many client IPs connected to the same server, suppose Nginx in this case, how would access_log record all of the IPs? Just curious question :) -------------- next part -------------- An HTML attachment was scrubbed... URL: From luky-37 at hotmail.com Fri Jan 3 01:43:10 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 3 Jan 2014 02:43:10 +0100 Subject: How would nginx record client IP address under TCP Multipath? In-Reply-To: References: Message-ID: Hi, > Since iOS7 supports TCP Multipath now, I think more and more devices > will start support it. Not if the servers don't support it. Apple pushed for a specific reason: To avoid having a broken TCP session when the IP address of the handheld changes, which would interrupt Apple's Siri. But TCP multipath is still not supported by linux mainline and I don't see efforts on linux-netdev to include it anytime soon. I understand there is a maintained and uptodate patchset available, but that doesn't mean it will be included in the kernel soon. > But TCP Multipath allows many client IPs connected to the same server, > suppose Nginx in this case, how would access_log record all of the IPs? The application will always see the first IP, which connected to the server, as per: http://lwn.net/Articles/545862/ Regards, Lukas From mdounin at mdounin.ru Fri Jan 3 03:17:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Jan 2014 07:17:27 +0400 Subject: OCSP validation of client certificates In-Reply-To: <806156b9262dfcabe5614b4363523683.NginxMailingListEnglish@forum.nginx.org> References: <806156b9262dfcabe5614b4363523683.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140103031727.GD95113@mdounin.ru> Hello! On Thu, Jan 02, 2014 at 05:03:36PM -0500, nginx_developer wrote: > Hi Forum, > I see that nGinx supports configuration to perform OCSP validation of > server side certificates and staple the validation response to the client. > My question is whether nGinx supports OCSP validation of client presented > certificates. > > I seem to hit a dead end with documentation for that question. Would be > helpful if someone could answer this. No. Only explicitly loaded CRLs are supported, see http://nginx.org/r/ssl_crl. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Fri Jan 3 03:40:34 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Jan 2014 07:40:34 +0400 Subject: How do i get the request body ? In-Reply-To: References: Message-ID: <20140103034033.GE95113@mdounin.ru> Hello! On Wed, Jan 01, 2014 at 11:44:24PM -0500, humank wrote: > Hello guys, > > I'm developing a nginx module, the intent is to get the request > body, then write some response depends on what request body is. > I've called the method ngx_http_read_client_request_body (r, > ngx_http_myModule_handler); > > Since this code, i want to get the real request body in > ngx_http_myModule_handler() > Here are my codes ... > > void ngx_http_myModule_handler(ngx_http_request_t *r) > { > ngx_http_finalize_request(r, NGX_DONE); > > if(!(r->request_body->bufs == NULL)){ > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "request is not > empty."); > > } > } > > the questions is , how can i get the r->request_body->bufs to char * ? A request body is available as a series of buffers in r->request_body->bufs. To understand more about buffers, try reading Evan Miller's guide as available from here: http://www.evanmiller.org/nginx-modules-guide.html Some example code which uses r->request_body->bufs to access request body contents as available in memory can be found in src/http/ngx_http_variables.c, in the ngx_http_variable_request_body() function. Note though, that depending on a configuration and a request, the request body may not be available in memory at all (that is, it will be in temporary file, and there will be a file buffer in r->request_body->bufs). -- Maxim Dounin http://nginx.org/ From noloader at gmail.com Fri Jan 3 05:18:27 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Fri, 3 Jan 2014 00:18:27 -0500 Subject: -rpath linker option? Message-ID: I'm having trouble with dll hell on Debian and Ubuntu with OpenSSL. Debian and Ubuntu insist on runtime linking with the copy in /usr/lib. Fedora and Red Hat are OK because they don't use OpenSSL by default, so they are not present in /usr/lib. I've tried specifying a rpath in ld options: --with-ld-opt="-rpath=$OPENSSL_LIB_DIR -ldl" That results in: checking for C compiler ... found + using GNU C compiler checking for --with-ld-opt="-rpath=/usr/local/ssl/lib -ldl" ... not found ./auto/configure: error: the invalid value in --with-ld-opt="-rpath=/usr/local/ssl/lib -ldl" The path is valid: $ ls /usr/local/ssl/lib engines libcrypto.so libssl.a libssl.so.1.0.0 libcrypto.a libcrypto.so.1.0.0 libssl.so pkgconfig LD_LIBRARY_PATH and LD_PRELOAD tricks don't work because they are dropped when running as root. Any ideas how to proceed? From electronixtar at gmail.com Fri Jan 3 08:10:08 2014 From: electronixtar at gmail.com (est) Date: Fri, 3 Jan 2014 16:10:08 +0800 Subject: How would nginx record client IP address under TCP Multipath? In-Reply-To: References: Message-ID: That's very helpful info. Thanks! So getsockname() and getpeername() returns the initial subflow, what's the API to get other subflows? Edit: found my answer: https://datatracker.ietf.org/doc/rfc6897/?include_text=1 by using setsockopt() and getsockopt() The functions getpeername() and getsockname() SHOULD also always return the addresses of the first subflow if the socket is used by an MPTCP-aware application, in order to be consistent with MPTCP-unaware applications, and, e.g., also with the Stream Control Transmission Protocol (SCTP). Instead of getpeername() or getsockname(), MPTCP-aware applications can use new API calls, described in Section 5.3, in order to retrieve the full list of address pairs for the subflows in use. On Fri, Jan 3, 2014 at 9:43 AM, Lukas Tribus wrote: > Hi, > > > > Since iOS7 supports TCP Multipath now, I think more and more devices > > will start support it. > > Not if the servers don't support it. > > Apple pushed for a specific reason: > To avoid having a broken TCP session when the IP address of the handheld > changes, which would interrupt Apple's Siri. > > But TCP multipath is still not supported by linux mainline and I don't > see efforts on linux-netdev to include it anytime soon. I understand there > is a maintained and uptodate patchset available, but that doesn't mean > it will be included in the kernel soon. > > > > > But TCP Multipath allows many client IPs connected to the same server, > > suppose Nginx in this case, how would access_log record all of the IPs? > > The application will always see the first IP, which connected to the > server, > as per: > http://lwn.net/Articles/545862/ > > > > > Regards, > > Lukas > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 3 09:35:06 2014 From: nginx-forum at nginx.us (flash008) Date: Fri, 03 Jan 2014 04:35:06 -0500 Subject: SSL handshake fail between nginx and my tomcat with mutual authentication Message-ID: Hi All, I am using Nginx 1.4.4 as reverse proxy for my tomcat server. My problem is: SSL handshake failed between Nginx and tomcat with mutual SSL authentication. I have verified that Client to Nginx with mutual SSL is working. But if my upstream backend is also using https:mutual port, the path will fail with error: [error] 1816#3436: *23 SSL_do_handshake() failed (SSL: error:14094412:SSL routines:SSL3_READ_BYTES:sslv3 alert bad certificate:SSL alert number 42) while SSL handshaking to upstream, client: xx.xx.xx.xx, server: xx.xxx.xxx.xxx, request: "GET / HTTP/1.1", upstream: "https://xx.xx.xx.xx:8082/", host: "xx.xx.xx.xx:8002" My upstream server https://xx.xx.xx.xx:8082 is using mutual SSL and working perfectly without Nginx. the Nginx host https://xx.xx.xx.xx:8002 is using mutual SSL and also working perfectly without the upstream mutual ssl or with only http port. The problem is: when both Nginx and upstream require mutual SSL, and I would like to pass the client certificate to Nginx then to my upstream server, the SSL handshake error occurs. I have tried to add client cert in headers, but no luck. Here is part of my nginx config #### server { listen xx.xx.xx.xx:8002; server_name xx.xx.xx.xx; ssl on; ssl_certificate C:/nginx-1.4.4/cert/MyServer.crt; ssl_certificate_key C:/nginx-1.4.4/cert/MyServer.key; ssl_client_certificate C:/nginx-1.4.4/cert/MyCA.pem; ssl_trusted_certificate C:/nginx-1.4.4/cert/MyCA.pem; ssl_prefer_server_ciphers on; ssl_verify_client on; ssl_verify_depth 3; ssl_protocols SSLv2 SSLv3 TLSv1; access_log C:/nginx-1.4.4/logs/access_8002.log; error_log C:/nginx-1.4.4/logs/error_8002.log debug; root html; index index.html index.htm; location / { proxy_pass https://10.128.103.47:8082/; proxy_redirect default; proxy_set_header Host $host:$server_port; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Client-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Ssl on; proxy_set_header X-Client-Verify $ssl_client_verify; proxy_set_header X-SSL-Client-Cert $ssl_client_cert; proxy_set_header X-SSL-Client-Serial $ssl_client_serial; proxy_set_header X-SSL-Client-Verify $ssl_client_verify; proxy_set_header X-SSL-Client-S-DN $ssl_client_s_dn; } } Is this usage supported by Nginx? I would be very grateful if someone can point me some clues or suggestions. Thanks and Best Regards, Flash008 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245971,245971#msg-245971 From mdounin at mdounin.ru Fri Jan 3 13:37:54 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 3 Jan 2014 17:37:54 +0400 Subject: -rpath linker option? In-Reply-To: References: Message-ID: <20140103133754.GJ95113@mdounin.ru> Hello! On Fri, Jan 03, 2014 at 12:18:27AM -0500, Jeffrey Walton wrote: > I'm having trouble with dll hell on Debian and Ubuntu with OpenSSL. > Debian and Ubuntu insist on runtime linking with the copy in /usr/lib. > Fedora and Red Hat are OK because they don't use OpenSSL by default, > so they are not present in /usr/lib. > > I've tried specifying a rpath in ld options: > > --with-ld-opt="-rpath=$OPENSSL_LIB_DIR -ldl" > > That results in: > > checking for C compiler ... found > + using GNU C compiler > checking for --with-ld-opt="-rpath=/usr/local/ssl/lib -ldl" ... not found > ./auto/configure: error: the invalid value in > --with-ld-opt="-rpath=/usr/local/ssl/lib -ldl" Try looking into objs/autoconf.err. Most likely, your cc want it to be spelled like "-Wl,-rpath=...". -- Maxim Dounin http://nginx.org/ From richard at kearsley.me Fri Jan 3 14:46:09 2014 From: richard at kearsley.me (Richard Kearsley) Date: Fri, 03 Jan 2014 14:46:09 +0000 Subject: nginx ssl handshake vs apache Message-ID: <52C6CD31.3030908@kearsley.me> Hi I was watching this video by fastly ceo http://youtu.be/zrSvoQz1GOs?t=24m44s he talks about the nginx ssl handshake versus apache and comes to the conclusion that apache was more efficient at mass handshakes due to nginx blocking while it calls back to openssl I was hoping to get other people's opinion on this and find out if what he says is accurate or not Many thanks Richard From renenglish at gmail.com Fri Jan 3 15:03:26 2014 From: renenglish at gmail.com (=?GB2312?B?yM7Twsir?=) Date: Fri, 3 Jan 2014 23:03:26 +0800 Subject: Does it possible to submit duplicated request with the proxy_next_upstream on Message-ID: <9B89B2DD-EB2A-4126-AB7D-9E86972866A0@gmail.com> Hi all: I am wondering if I set: proxy_next_upstream error timeout; Fox example , if the requested service is a counter , I issue the request use the interface http://example.com/incr . The request is failed on my first host A, then it is passed to the second host B , is the counter likely be added twice ? thanks . From nginx-forum at nginx.us Sat Jan 4 01:22:28 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 03 Jan 2014 20:22:28 -0500 Subject: mainline does not include gzip module? Message-ID: i just replaced a stable install with the mainline version (1.5.8) and noticed that the outputted files are not being gzipped. i ran nginx -V and do not see any arguments that enable gzip. is there a reason why the stable version included gzip and this mainline does not? do i need to manually build nginx to include the gzip module? if so, does that mean i need to rebuild nginx every time there is a new update? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245997,245997#msg-245997 From vbart at nginx.com Sat Jan 4 01:34:02 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 04 Jan 2014 05:34:02 +0400 Subject: mainline does not include gzip module? In-Reply-To: References: Message-ID: <2131154.zFoeNsukc8@vbart-laptop> On Friday 03 January 2014 20:22:28 ura wrote: > i just replaced a stable install with the mainline version (1.5.8) and > noticed that the outputted files are not being gzipped. > i ran nginx -V and do not see any arguments that enable gzip. > is there a reason why the stable version included gzip and this mainline > does not? The gzip module is compiled by default, unless explicitly disabled using the --without-http_gzip_module argument. > do i need to manually build nginx to include the gzip module? No, you don't. It is included in both packages stable and mainline. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sat Jan 4 01:51:31 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 03 Jan 2014 20:51:31 -0500 Subject: mainline does not include gzip module? In-Reply-To: <2131154.zFoeNsukc8@vbart-laptop> References: <2131154.zFoeNsukc8@vbart-laptop> Message-ID: thanks for responding. :) so... has a change been made in the way i would activate the gzip process between the stable and mainline versions? in nginx.conf? this is the list of options i was successfully using in stable (built through trial and error): gzip on; gzip_http_version 1.0; gzip_comp_level 6; gzip_proxied any; gzip_min_length 100; gzip_buffers 16 8k; gzip_types text/plain text/css application/x-javascript application/json text/xml application/xml application/xml+rss text/javascript; gzip_disable "msie6"; gzip_vary on; any other thoughts on why gzip would appear to not be functioning for me here? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245997,245999#msg-245999 From nginx-forum at nginx.us Sat Jan 4 01:53:32 2014 From: nginx-forum at nginx.us (ura) Date: Fri, 03 Jan 2014 20:53:32 -0500 Subject: mainline does not include gzip module? In-Reply-To: References: <2131154.zFoeNsukc8@vbart-laptop> Message-ID: ah, i found the answer.. i needed to change the javascript mimetype to 'application/javascript' Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245997,246000#msg-246000 From nginx-forum at nginx.us Sat Jan 4 03:42:52 2014 From: nginx-forum at nginx.us (justin) Date: Fri, 03 Jan 2014 22:42:52 -0500 Subject: Very slow dns lookup using proxy_pass Message-ID: <5fd685a4bea5a8565474e049ed32969d.NginxMailingListEnglish@forum.nginx.org> I am seeing very slow DNS lookup times ( > 2 seconds ) using proxy_pass, even though dig response times on the server are quick. Here is the nginx configuration block: location ~ ^/v1/(?.*) { resolver 8.8.4.4 4.4.4.4 valid=300s; resolver_timeout 10s; proxy_pass https://$remote_user.mydomain.com/api/; proxy_hide_header Vary; proxy_set_header X-Real-IP $remote_addr; proxy_connect_timeout 10s; proxy_read_timeout 60s; proxy_ssl_session_reuse on; } I am using Google Public DNS. Here is a result from: dig demo.mydomain.com ; <<>> DiG 9.8.2rc1-RedHat-9.8.2-0.17.rc1.el6_4.6 <<>> demo.mydomain.com ;; global options: +cmd ;; Got answer: ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 37997 ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0 ;; QUESTION SECTION: ;demo.mydomain.com. IN A ;; ANSWER SECTION: demo.mydomain.com. 299 IN A X.X.X.X ;; Query time: 187 msec ;; SERVER: 8.8.4.4#53(8.8.4.4) ;; WHEN: Fri Jan 3 19:40:32 2014 ;; MSG SIZE rcvd: 50 Any ideas why this is so slow, and solutions? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246001,246001#msg-246001 From contact at jpluscplusm.com Sat Jan 4 03:52:59 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Sat, 4 Jan 2014 03:52:59 +0000 Subject: Very slow dns lookup using proxy_pass In-Reply-To: <5fd685a4bea5a8565474e049ed32969d.NginxMailingListEnglish@forum.nginx.org> References: <5fd685a4bea5a8565474e049ed32969d.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 4 January 2014 03:42, justin wrote: > I am seeing very slow DNS lookup times ( > 2 seconds ) using proxy_pass, > even though dig response times on the server are quick [snip] > Any ideas why this is so slow, and solutions? Please demonstrate a slow request, and show the data that leads you to believe that DNS lookups from nginx are the problem. Jonathan From nginx-forum at nginx.us Sat Jan 4 04:11:02 2014 From: nginx-forum at nginx.us (justin) Date: Fri, 03 Jan 2014 23:11:02 -0500 Subject: Very slow dns lookup using proxy_pass In-Reply-To: References: Message-ID: <3c9913711c91dce5ec8fada3f5a037de.NginxMailingListEnglish@forum.nginx.org> Hi Jonathan, Using time is the only way I know how to demonstrate this: FIRST TIME TOOK: 5.8 seconds ? ~ time curl -i -u demo: https://api.mydomain.com/v1/ HTTP/1.1 200 OK Server: nginx Date: Sat, 04 Jan 2014 04:07:50 GMT Content-Type: application/json Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Strict-Transport-Security: max-age=31556926 Cache-Control: no-cache, no-store Access-Control-Max-Age: 300 Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS {"version":"v1"} curl -i -u demo: https://api.mydomain.com/v1/ 0.54s user 0.01s system 9% cpu 5.857 total EXECUTED AGAIN, IMMEDIATELY AFTER. TOOK: 197ms ? ~ time curl -i -u demo: https://api.mydomain.com/v1/ HTTP/1.1 200 OK Server: nginx Date: Sat, 04 Jan 2014 04:07:54 GMT Content-Type: application/json Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding Strict-Transport-Security: max-age=31556926 Cache-Control: no-cache, no-store Access-Control-Max-Age: 300 Access-Control-Allow-Credentials: true Access-Control-Allow-Origin: * Access-Control-Allow-Methods: GET, POST, PUT, DELETE, OPTIONS {"version":"v1"} curl -i -u demo: https://api.mydomain.com/v1/ 0.05s user 0.01s system 27% cpu 0.197 total Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246001,246003#msg-246003 From nginx-forum at nginx.us Sat Jan 4 06:18:12 2014 From: nginx-forum at nginx.us (ComfortVPS) Date: Sat, 04 Jan 2014 01:18:12 -0500 Subject: CentOS Nginx Installer(include PHP-FTP, MYSQL) add website no need restart Nginx Message-ID: <02e742f31424674afd3f2dc62b2d567f.NginxMailingListEnglish@forum.nginx.org> Some of our clients are not good at SSH command, they just want an easy way to run their websites. Most of them are less then 1GB memory VPS and don't want install control panel. For this purpose, we wrote a CentOS Nginx Installer script for "Nginx+PHP+MySql+phpMyAdmin" What's you need to do is: Copy and post/run below single command line via SSH root login. wait 5-15 minutes(depending on the software download speed from your server), Everything is Done! ############ Automatically install command line ############ wget -O /tmp/npmp.sh https://raw.github.com/ComfortVPS/Nginx-PHP-MySql-phpMyAdmin/master/install-nginx-php-mysql.sh; sh /tmp/npmp.sh; ############ Installation Requirement ############ CentOS 5.x/6.x 32bit or 64bit ( We recommend you Reload OS before run the installer ) Guarantee Memory >= 128MB Free Disk space >=2GB ############ Fetures ############ You can add multiple websites directly via SFTP, no need to login SSH, no need to restart Nginx You can easily custom config each website domain's Nginx config file if you need Installed the newest stable version of Nginx, PHP-FPM, MySql by YUM phpMyAdmin4.x installed and ready for use. Just login and manage mysql database as you want Work perfectly for low memory VPS with ram >= 128MB Many tutorials for how to use Get your password after everything installed Your SSH console screen will show something like below after successfully installed, Please record your password. The unique password was random generated by openssl, no need to change. ############ Information you will get after installation (below is an example) ############ ====== Nginx + PHP-FPM + MYSQL Successfully installed ====== MySql root password is cft.KL7fvW2g ====== SFTP Username is myweb ====== SFTP Password is cft.KL7fvW2g ====== Website document root is /www/yourdomain ====== Add websites tutorials: http://goo.gl/sdDF9 ====== Now you can visit http://your-ip-address/ ====== Eg. http://50.3.62.173/ ====== phpMyAdmin: http://50.3.62.173/phpMyAdmin4U/ ====== More tutorials: http://goo.gl/tNFb0 ############ How to add multiple websites via SFTP ? ############ Step 1, Create a domain name directory under /www via SFTP, eg, yourdomain.com, subdomain.abc.com, domain-name.net Step 2, Everything is Done, Config Nginx virtual host is finish, upload your php/html files to that directory then test it. ############ More tutorials ############ http://goo.gl/tNFb0 We will continue to write more tutorials for how to use it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246005,246005#msg-246005 From nginx at 2xlp.com Sat Jan 4 20:36:33 2014 From: nginx at 2xlp.com (Jonathan Vanasco) Date: Sat, 4 Jan 2014 15:36:33 -0500 Subject: issue with `default_type` & `type` on 1.5.7 In-Reply-To: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> References: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> Message-ID: <663B722A-E6F3-40CD-B37D-4CD53FCD4956@2xlp.com> I recently encountered an issue with a 1.5.7 branch on OSX. i did not check 1.5.8 The following code will set ALL css/js files as the default_type include /usr/local/nginx/conf/mime.types; default_type application/octet-stream; The following code works as intended default_type application/octet-stream; include /usr/local/nginx/conf/mime.types; I haven't had time to test on other versions. This could be the intended behavior, but the docs don't suggest that. usually a default_type only applies when the real type can't be found. From noloader at gmail.com Sun Jan 5 07:59:52 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 5 Jan 2014 02:59:52 -0500 Subject: agentzh's encrypted session module Message-ID: I've been studying agentzh's encrypted session module from https://github.com/agentzh/encrypted-session-nginx-module/tree/master/src. There are essentially two methods: one to encrypt values, and a second to decrypt modules. The functions to do so are ngx_http_set_encode_encrypted_session and ngx_http_set_decode_encrypted_session. The functions call ngx_http_encrypted_session_aes_mac_encrypt and ngx_http_encrypted_session_aes_mac_decrypt respectively. The problem I am having is: I cannot tell how this is plumbed into nginx framework such that a value is encrypted going one way, and decrypted going another. From the module, I clearly see the command: Can anyone shed some light on the magic I am missing? Thanks in advance. From noloader at gmail.com Sun Jan 5 08:14:48 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Sun, 5 Jan 2014 03:14:48 -0500 Subject: OT: Crypto Hardening Guide (nginx has an entry) Message-ID: For administrators interested in SSL/TLS protocols and ciphers settings, the following may be useful. It also looks like the [possibly influenced] US stuff was removed. https://bettercrypto.org/static/applied-crypto-hardening.pdf From nick at thenile.com.au Mon Jan 6 03:56:23 2014 From: nick at thenile.com.au (Nick Jenkin) Date: Mon, 6 Jan 2014 14:56:23 +1100 Subject: Centos 6.5 and ECDH ciphers in nginx.org Centos repo Message-ID: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> Hi In Centos 6.5 (and RHEL 6.5) the ECDH ciphers were enabled. There appears to be an issue with the nginx.org 1.5.8 Centos binaries still not having support for ECDHE despite having updated openssl 1.01e with elliptic curves. If I compile from source, ECDH works fine. Is there something wrong with the centos binaries? Ciphers on Centos 6.5: [nick at dev9145 conf.d]$ openssl ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-DSS-AES256-GCM-SHA384:DHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA256:DHE-RSA-AES256-SHA:DHE-DSS-AES256-SHA:DHE-RSA-CAMELLIA256-SHA:DHE-DSS-CAMELLIA256-SHA:ECDH-RSA-AES256-GCM-SHA384:ECDH-ECDSA-AES256-GCM-SHA384:ECDH-RSA-AES256-SHA384:ECDH-ECDSA-AES256-SHA384:ECDH-RSA-AES256-SHA:ECDH-ECDSA-AES256-SHA:AES256-GCM-SHA384:AES256-SHA256:AES256-SHA:CAMELLIA256-SHA:PSK-AES256-CBC-SHA:ECDHE-RSA-DES-CBC3-SHA:ECDHE-ECDSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:EDH-DSS-DES-CBC3-SHA:ECDH-RSA-DES-CBC3-SHA:ECDH-ECDSA-DES-CBC3-SHA:DES-CBC3-SHA:PSK-3DES-EDE-CBC-SHA:KRB5-DES-CBC3-SHA:KRB5-DES-CBC3-MD5:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:DHE-DSS-AES128-GCM-SHA256:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES128-SHA256:DHE-DSS-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA:DHE-RSA-SEED-SHA:DHE-DSS-SEED-SHA:DHE-RSA-CAMELLIA128-SHA:DHE-DSS-CAMELLIA128-SHA:ECDH-RSA-AES128-GCM-SHA256:ECDH-ECDSA-AES128-GCM-SHA256:ECDH-RSA-AES128-SHA256:ECDH-ECDSA-AES128-SHA256:ECDH-RSA-AES128-SHA:ECDH-ECDSA-AES128-SHA:AES128-GCM-SHA256:AES128-SHA256:AES128-SHA:SEED-SHA:CAMELLIA128-SHA:IDEA-CBC-SHA:PSK-AES128-CBC-SHA:KRB5-IDEA-CBC-SHA:KRB5-IDEA-CBC-MD5:ECDHE-RSA-RC4-SHA:ECDHE-ECDSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDH-ECDSA-RC4-SHA:RC4-SHA:RC4-MD5:PSK-RC4-SHA:KRB5-RC4-SHA:KRB5-RC4-MD5:EDH-RSA-DES-CBC-SHA:EDH-DSS-DES-CBC-SHA:DES-CBC-SHA:KRB5-DES-CBC-SHA:KRB5-DES-CBC-MD5:EXP-EDH-RSA-DES-CBC-SHA:EXP-EDH-DSS-DES-CBC-SHA:EXP-DES-CBC-SHA:EXP-RC2-CBC-MD5:EXP-KRB5-RC2-CBC-SHA:EXP-KRB5-DES-CBC-SHA:EXP-KRB5-RC2-CBC-MD5:EXP-KRB5-DES-CBC-MD5:EXP-RC4-MD5:EXP-KRB5-RC4-SHA:EXP-KRB5-RC4-MD5 ECDHE test: openssl s_client -tls1_2 -cipher ECDH -connect 127.0.0.1:443 CONNECTED(00000003) 139798957754184:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1256:SSL alert number 40 139798957754184:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596: --- no peer certificate available --- No client certificate CA names sent --- SSL handshake has read 7 bytes and written 0 bytes --- New, (NONE), Cipher is (NONE) Secure Renegotiation IS NOT supported Compression: NONE Expansion: NONE ? Thanks -Nick From iptablez at yahoo.com Mon Jan 6 04:17:46 2014 From: iptablez at yahoo.com (Indo Php) Date: Sun, 5 Jan 2014 20:17:46 -0800 (PST) Subject: How to delete cache based on expires headers? In-Reply-To: References: <1387426860.9588.YahooMailNeo@web142305.mail.bf1.yahoo.com> <1387853867.39367.YahooMailNeo@web140403.mail.bf1.yahoo.com> <1388377677.36102.YahooMailNeo@web140402.mail.bf1.yahoo.com> Message-ID: <1388981866.74009.YahooMailNeo@web142305.mail.bf1.yahoo.com> Thanks that makes clear! On Friday, January 3, 2014 2:19 AM, Ant?nio P. P. Almeida wrote: Yes.? Nginx will obey the Cache-Control/Expire headers. It won't delete, but it will refresh the files so that the served content is fresh. So it is as if the files were deleted. AFAIK deletion happens more often when the file is not accessed for given time specified through the inactive parameter of the proxy_cache_path/fastcgi_cache_path directives. ? ----appa On Mon, Dec 30, 2013 at 5:27 AM, Indo Php wrote: Hi > > >Is that means that nginx will put the files based on the upstream expire headers? After that nginx will delete the cache files? > > > > > > > >On Tuesday, December 24, 2013 10:28 PM, Ant?nio P. P. Almeida wrote: > >Why you want to do this? nginx can manage expiration/cache-control headers all by itself. > > >As soon as the defined max-age is set it returns a upstream status of EXPIRED until it fetches a fresh >page from upstream. > > >Deleting won't buy you anything in terms of content freshness. > > > > > > > > > > >----appa > > > > >On Tue, Dec 24, 2013 at 3:57 AM, Indo Php wrote: > >Hello.. >> >> >>Can somebody help me on this? >> >> >>Thank you before >> >> >> >>On Thursday, December 19, 2013 11:21 AM, Indo Php wrote: >> >>Hi >> >> >>I'm using proxy_cache to mirror my files with the configuration below >> >> >>proxy_cache_path? /var/cache/nginx/image levels=1:2 keys_zone=one:10m inactive=7d ? ? max_size=100g; >> >> >>Our backend server has the expires header set to 600secs >> >> >>Is that posibble for us to also delete the cache files located at?/var/cache/nginx/image depends on the backend expire header? >> >> >>_______________________________________________ >>nginx mailing list >>nginx at nginx.org >>http://mailman.nginx.org/mailman/listinfo/nginx >> >> >>_______________________________________________ >>nginx mailing list >>nginx at nginx.org >>http://mailman.nginx.org/mailman/listinfo/nginx >> > > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jan 6 04:55:12 2014 From: nginx-forum at nginx.us (hoidulich) Date: Sun, 05 Jan 2014 23:55:12 -0500 Subject: Problem trying to rewrite a URL Message-ID: Hi all, I have a problem trying to rewrite a URL. It should be pretty straightforward but it has been taking me hours searching. Google indexed urls containing ";", which gets escaped to %3b: http://hoidulich.com/index.php?action=tagged%3bid=254715%3btag=vietholiday (the original url that works is http://hoidulich.com/index.php?action=tagged;id=254715;tag=vietholiday) How can I fix it? Thank you very much. Cao Tri. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246052,246052#msg-246052 From noloader at gmail.com Mon Jan 6 10:07:09 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 6 Jan 2014 05:07:09 -0500 Subject: Centos 6.5 and ECDH ciphers in nginx.org Centos repo In-Reply-To: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> References: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> Message-ID: On Sun, Jan 5, 2014 at 10:56 PM, Nick Jenkin wrote: > Hi > > In Centos 6.5 (and RHEL 6.5) the ECDH ciphers were enabled. There appears to be an issue with the nginx.org 1.5.8 Centos binaries still not having support for ECDHE despite having updated openssl 1.01e with elliptic curves. > > If I compile from source, ECDH works fine. Is there something wrong with the centos binaries? > http://unix.stackexchange.com/questions/84283/how-can-i-get-tlsv1-2-support-in-apache-on-rhel6-centos-sl6 Though the question is about Apache, it specifically calls out nginx as needing a recompile on the platform after updating from OpenSSL 1.0.0 to OpenSSL 1.0.1 due to static linking. Jeff From nick at thenile.com.au Mon Jan 6 10:10:43 2014 From: nick at thenile.com.au (Nick Jenkin) Date: Mon, 6 Jan 2014 21:10:43 +1100 Subject: Centos 6.5 and ECDH ciphers in nginx.org Centos repo In-Reply-To: References: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> Message-ID: RHEL used 1.0.0 in 6.4, however in 6.5 it was updated to OpenSSL 1.0.1e-fips 11 Feb 2013 See: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/6.5_Release_Notes/ Like I said, if I compile nginx myself it ECDH works fine. It?s the nginx.org binaries that do not work. So it would appear the nginx.org binaries are statically compiled against the older version, so I guess the question is when will the nginx.org builds be built on 6.5? -Nick On 6 Jan 2014, at 9:07 pm, Jeffrey Walton wrote: > On Sun, Jan 5, 2014 at 10:56 PM, Nick Jenkin wrote: >> Hi >> >> In Centos 6.5 (and RHEL 6.5) the ECDH ciphers were enabled. There appears to be an issue with the nginx.org 1.5.8 Centos binaries still not having support for ECDHE despite having updated openssl 1.01e with elliptic curves. >> >> If I compile from source, ECDH works fine. Is there something wrong with the centos binaries? >> > http://unix.stackexchange.com/questions/84283/how-can-i-get-tlsv1-2-support-in-apache-on-rhel6-centos-sl6 > > Though the question is about Apache, it specifically calls out nginx > as needing a recompile on the platform after updating from OpenSSL > 1.0.0 to OpenSSL 1.0.1 due to static linking. > > Jeff > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From noloader at gmail.com Mon Jan 6 10:21:41 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 6 Jan 2014 05:21:41 -0500 Subject: Centos 6.5 and ECDH ciphers in nginx.org Centos repo In-Reply-To: References: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> Message-ID: On Mon, Jan 6, 2014 at 5:10 AM, Nick Jenkin wrote: > RHEL used 1.0.0 in 6.4, however in 6.5 it was updated to OpenSSL 1.0.1e-fips 11 Feb 2013 > See: https://access.redhat.com/site/documentation/en-US/Red_Hat_Enterprise_Linux/6/html-single/6.5_Release_Notes/ > > Like I said, if I compile nginx myself it ECDH works fine. It?s the nginx.org binaries that do not work. So it would appear the nginx.org binaries are statically compiled against the older version... That's easy enought to check. Run ldd on it an look for an OpenSSL dependency. If SSL/TLS is eanbled and the dependency is missing, then nginx was statically linked against OpenSSL. Below, nginx was built with a dependency on the shared object. $ ldd objs/nginx linux-vdso.so.1 => (0x00007fff85f96000) libdl.so.2 => /lib/x86_64-linux-gnu/libdl.so.2 (0x00007f9f0345b000) libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007f9f0323f000) libcrypt.so.1 => /lib/x86_64-linux-gnu/libcrypt.so.1 (0x00007f9f03007000) libpcre.so.3 => /lib/x86_64-linux-gnu/libpcre.so.3 (0x00007f9f02dca000) libssl.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (0x00007f9f02b6a000) libcrypto.so.1.0.0 => /usr/lib/x86_64-linux-gnu/libcrypto.so.1.0.0 (0x00007f9f02785000) ... > so I guess the question is when will the nginx.org builds be built on 6.5? Sorry, I can't help. I believe that's a question for the Red Hat or CentOS folks. Jeff > On 6 Jan 2014, at 9:07 pm, Jeffrey Walton wrote: > >> On Sun, Jan 5, 2014 at 10:56 PM, Nick Jenkin wrote: >>> Hi >>> >>> In Centos 6.5 (and RHEL 6.5) the ECDH ciphers were enabled. There appears to be an issue with the nginx.org 1.5.8 Centos binaries still not having support for ECDHE despite having updated openssl 1.01e with elliptic curves. >>> >>> If I compile from source, ECDH works fine. Is there something wrong with the centos binaries? >>> >> http://unix.stackexchange.com/questions/84283/how-can-i-get-tlsv1-2-support-in-apache-on-rhel6-centos-sl6 >> >> Though the question is about Apache, it specifically calls out nginx >> as needing a recompile on the platform after updating from OpenSSL >> 1.0.0 to OpenSSL 1.0.1 due to static linking. >> From mdounin at mdounin.ru Mon Jan 6 12:54:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 6 Jan 2014 16:54:57 +0400 Subject: Centos 6.5 and ECDH ciphers in nginx.org Centos repo In-Reply-To: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> References: <4828002A-2339-4CD2-A0F1-D44F7A51FBF9@thenile.com.au> Message-ID: <20140106125457.GR95113@mdounin.ru> Hello! On Mon, Jan 06, 2014 at 02:56:23PM +1100, Nick Jenkin wrote: > Hi > > In Centos 6.5 (and RHEL 6.5) the ECDH ciphers were enabled. > There appears to be an issue with the nginx.org 1.5.8 Centos > binaries still not having support for ECDHE despite having > updated openssl 1.01e with elliptic curves. > > If I compile from source, ECDH works fine. Is there something > wrong with the centos binaries? > > Ciphers on Centos 6.5: [...] This is expected. Builds are done for CentOS 6, not just CentOS 6.5, so they are done with OpenSSL as available in previous versions to ensure compatibility with previous versions of CentOS 6. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jan 6 16:07:36 2014 From: nginx-forum at nginx.us (theotow) Date: Mon, 06 Jan 2014 11:07:36 -0500 Subject: dynamic rate limiting per ip In-Reply-To: References: Message-ID: <3618e65a4e5808716b4ab89fa4908147.NginxMailingListEnglish@forum.nginx.org> nobody an idea? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245957,246064#msg-246064 From nginx-forum at nginx.us Mon Jan 6 17:35:46 2014 From: nginx-forum at nginx.us (linuxr00lz2013) Date: Mon, 06 Jan 2014 12:35:46 -0500 Subject: How do I disable DNS Caching and DNS Reverse Lookup in Nginx ? In-Reply-To: <20140102020701.GB95113@mdounin.ru> References: <20140102020701.GB95113@mdounin.ru> Message-ID: <959b925f65dbdba2780e44913888a117.NginxMailingListEnglish@forum.nginx.org> Hello thank you for your reply! 1) I have shown you the real configuration and logs. All I changed was the FQDN's because I dont know if I am allowed by my company to post them online. 2) Which tests do you recommend I run using telnet and curl? I am not too familiar with using curl so any guidance will be greatly appreciated! Thanks! Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Jan 01, 2014 at 10:54:13AM -0500, linuxr00lz2013 wrote: > > > Hello Happy New year and thank you for the reply! > > > > I dont think thats the cause, because I tried clearing the cache and > it was > > still stlow! Is there a special directive that I have to use to get > it to > > stop caching? > > Unfortunately, there is no magic directive "do it all right". > There is no DNS caching in nginx which survives configuration > reload, and there are no reverse DNS lookups in http module at > all. > > Unfortunately, you don't show us real configuration and real logs, > so basically nobody here can help with debugging, but general tips > are: > > 1) Make sure you are testing it right. This basically means > you'll have to forget about browsers as they are too complex to be > usable as testing tools and use telnet or curl for basic tests. > And make sure to watch logs while doing tests. > > 2) Make sure you've configured it right. Make sure to understand > what you write in your configuration, make sure to test what you > wrote ("nginx -t" is your friend, as well as error log), and avoid > stupid mistakes like infinite loops. See above for recommended > testing tools. > > 3) Avoid descriptive terms like "really", "painfully", "awfully" - > measure instead. If a request takes 60 milliseconds - it may be > either really fast or really slow, depeding on use case. > Moreover, exact numbers are usually help a lot with debugging. If > something takes 60 seconds - it usually means that there is 60 > second timeout somewhere (one of configure upstream servers can't > be reached?). > > Happy New Year and happy debugging! > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245904,246065#msg-246065 From agentzh at gmail.com Mon Jan 6 19:33:23 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 6 Jan 2014 11:33:23 -0800 Subject: dynamic rate limiting per ip In-Reply-To: References: Message-ID: Hello! On Thu, Jan 2, 2014 at 12:54 PM, theotow wrote: > > Any Ideas if this is possible with the limit_rate of the http core module > and lua? > You can use ngx_lua alone to do this. > If it would be possible to make 2 zone dicts where the ips of the the slow > and fast connections are in. And if someone ratelimit is dropped his ip gets > removed from the slow dict and added to the fast dict. > > https://github.com/chaoslawful/lua-nginx-module#ngxshareddict > Yes, you can surely do that. You can use ngx.sleep() to hold back the exceeding clients without blocking other requests served by the same nginx worker. Regards, -agentzh From agentzh at gmail.com Mon Jan 6 19:44:43 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 6 Jan 2014 11:44:43 -0800 Subject: agentzh's encrypted session module In-Reply-To: References: Message-ID: Hello! On Sat, Jan 4, 2014 at 11:59 PM, Jeffrey Walton wrote: > I've been studying agentzh's encrypted session module from > https://github.com/agentzh/encrypted-session-nginx-module/tree/master/src. > Thank you for checking it out! :) > > The problem I am having is: I cannot tell how this is plumbed into > nginx framework such that a value is encrypted going one way, and > decrypted going another. From the module, I clearly see the command: > The callbacks are injected into the standard ngx_rewrite module's command list by means of the ndk_set_var submodule in the ngx_devel_kit (NDK) module: https://github.com/simpl/ngx_devel_kit The entry point of NDK called ngx_encrypted_session is ndk_set_var_value. You can trace from there :) The actual request-time caller of these configuration directives is the standard ngx_rewrite module at the "rewrite" running phase. Best regards, -agentzh From nginx-forum at nginx.us Mon Jan 6 20:34:39 2014 From: nginx-forum at nginx.us (justink101) Date: Mon, 06 Jan 2014 15:34:39 -0500 Subject: Very slow dns lookup using proxy_pass In-Reply-To: <5fd685a4bea5a8565474e049ed32969d.NginxMailingListEnglish@forum.nginx.org> References: <5fd685a4bea5a8565474e049ed32969d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6bf4973420f767174bd74e1a38cf8742.NginxMailingListEnglish@forum.nginx.org> Anybody have any further insight on this? Consistently slow DNS lookups from nginx, even though dig shows fast query times. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246001,246070#msg-246070 From noloader at gmail.com Mon Jan 6 20:40:05 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Mon, 6 Jan 2014 15:40:05 -0500 Subject: OT: OpenSSL 1.0.1f Message-ID: OpenSSL 1.0.1f was released today. It might be a good time to rebuild all the versions of nginx using static versions of OpenSSL. There are three CVE remediations included in the release: CVE-2013-4353, CVE-2013-6449, CVE-2013-6450. http://www.openssl.org/news/openssl-1.0.1-notes.html. It does not look like 1.0.1f changed the default behavior of ENGINE_rdrand (coderman's been following it). 1.0.1f added hostname and email verification routines so programs no longer have to do it themselves. There's also an Apple SecureTransport bug workaround. Apple's SecrureTransport does not properly negotiate ECDHE-ECDSA cipher suites. It affects Mac OS X and could affect iOS. It might be prudent to add SSL_OP_SAFARI_ECDHE_ECDSA_BUG by default. http://www.mail-archive.com/openssl-dev at openssl.org/msg32629.html. From rob.stradling at comodo.com Mon Jan 6 21:02:36 2014 From: rob.stradling at comodo.com (Rob Stradling) Date: Mon, 06 Jan 2014 21:02:36 +0000 Subject: OT: OpenSSL 1.0.1f In-Reply-To: References: Message-ID: <52CB19EC.8040201@comodo.com> On 06/01/14 20:40, Jeffrey Walton wrote: > There's also an Apple SecureTransport bug workaround. Apple's > SecrureTransport does not properly negotiate ECDHE-ECDSA cipher > suites. It affects Mac OS X and could affect iOS. It might be prudent > to add SSL_OP_SAFARI_ECDHE_ECDSA_BUG by default. > http://www.mail-archive.com/openssl-dev at openssl.org/msg32629.html. Nginx doesn't yet support multiple server certs per site (e.g. 1 RSA cert and 1 ECC cert), so SSL_OP_SAFARI_ECDHE_ECDSA_BUG isn't yet useful. (I was working on a patch for multiple server certs a few months ago; I hope to find time to complete this very soon). -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From contact at jpluscplusm.com Mon Jan 6 21:06:46 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 6 Jan 2014 21:06:46 +0000 Subject: Very slow dns lookup using proxy_pass In-Reply-To: <6bf4973420f767174bd74e1a38cf8742.NginxMailingListEnglish@forum.nginx.org> References: <5fd685a4bea5a8565474e049ed32969d.NginxMailingListEnglish@forum.nginx.org> <6bf4973420f767174bd74e1a38cf8742.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 6 January 2014 20:34, justink101 wrote: > Consistently slow DNS lookups from > nginx I *really* don't think you've demonstrated anything that points to that conclusion. Do some tcpdump'ing. Show the data. Show your working. ;-) J From luky-37 at hotmail.com Mon Jan 6 22:04:58 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 6 Jan 2014 23:04:58 +0100 Subject: OT: OpenSSL 1.0.1f In-Reply-To: References: Message-ID: Hi, > It does not look like 1.0.1f changed the default behavior of > ENGINE_rdrand (coderman's been following it). Yes it did, rdrand is no longer enabled by default. Here [1] is the backport in the OpenSSL_1_0_1-stable head [2]. At least Debian [3] and Ubuntu backported this as well. Regards, Lukas [1] http://git.openssl.org/gitweb/?p=openssl.git;a=commit;h=1c2c5e402a757a63d690bd2390bd6b8b491ef184 [2] http://git.openssl.org/gitweb/?p=openssl.git;a=shortlog;h=refs/heads/OpenSSL_1_0_1-stable [3] http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=732710 From mdounin at mdounin.ru Tue Jan 7 02:31:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 7 Jan 2014 06:31:36 +0400 Subject: How do I disable DNS Caching and DNS Reverse Lookup in Nginx ? In-Reply-To: <959b925f65dbdba2780e44913888a117.NginxMailingListEnglish@forum.nginx.org> References: <20140102020701.GB95113@mdounin.ru> <959b925f65dbdba2780e44913888a117.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140107023136.GW95113@mdounin.ru> Hello! On Mon, Jan 06, 2014 at 12:35:46PM -0500, linuxr00lz2013 wrote: > Hello thank you for your reply! > > 1) I have shown you the real configuration and logs. All I changed was the > FQDN's because I dont know if I am allowed by my company to post them > online. The problem is that it makes configs and logs unusable for the purpose of tracing typos and dump misconfigurations like proxy loops. General recommendation for those who don't want to show names and ips in public is to reporoduce a problem in some test environment instead, and provide real configs and logs from this environment. > 2) Which tests do you recommend I run using telnet and curl? I am not too > familiar with using curl so any guidance will be greatly appreciated! Most trivial test is to do something like: $ time curl -o /dev/null http://example.com to see if it show the problem (i.e., if it's slow, and how slow it is). -- Maxim Dounin http://nginx.org/ From rob.stradling at comodo.com Tue Jan 7 09:59:03 2014 From: rob.stradling at comodo.com (Rob Stradling) Date: Tue, 07 Jan 2014 09:59:03 +0000 Subject: OT: OpenSSL 1.0.1f In-Reply-To: <52CB19EC.8040201@comodo.com> References: <52CB19EC.8040201@comodo.com> Message-ID: <52CBCFE7.3030906@comodo.com> On 06/01/14 21:02, Rob Stradling wrote: > On 06/01/14 20:40, Jeffrey Walton wrote: > >> There's also an Apple SecureTransport bug workaround. Apple's >> SecrureTransport does not properly negotiate ECDHE-ECDSA cipher >> suites. It affects Mac OS X and could affect iOS. It might be prudent >> to add SSL_OP_SAFARI_ECDHE_ECDSA_BUG by default. >> http://www.mail-archive.com/openssl-dev at openssl.org/msg32629.html. > > Nginx doesn't yet support multiple server certs per site (e.g. 1 RSA > cert and 1 ECC cert), so SSL_OP_SAFARI_ECDHE_ECDSA_BUG isn't yet useful. Actually I suppose that's not strictly true. Setting SSL_OP_SAFARI_ECDHE_ECDSA_BUG would be useful today on any Nginx server with an ECC cert and both ECDHE-ECDSA cipher(s) and ECDH-ECDSA cipher(s) enabled. (I don't suppose there are many such servers!) > (I was working on a patch for multiple server certs a few months ago; I > hope to find time to complete this very soon). -- Rob Stradling Senior Research & Development Scientist COMODO - Creating Trust Online From unixant at gmail.com Tue Jan 7 11:34:42 2014 From: unixant at gmail.com (SmallAnt) Date: Tue, 7 Jan 2014 19:34:42 +0800 Subject: how to submit my open source nginx module to nginx thrid party modules Message-ID: i want to submit my open souce nginx module to http://wiki.nginx.org/3rdPartyModules, how can i do ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jan 7 14:04:42 2014 From: nginx-forum at nginx.us (Ensiferous) Date: Tue, 07 Jan 2014 09:04:42 -0500 Subject: how to submit my open source nginx module to nginx thrid party modules In-Reply-To: References: Message-ID: Create an account and just edit the wiki page. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246089,246095#msg-246095 From nginx-forum at nginx.us Tue Jan 7 15:06:56 2014 From: nginx-forum at nginx.us (lodakai) Date: Tue, 07 Jan 2014 10:06:56 -0500 Subject: Request body in a module Message-ID: <6e3cdd7044c75bf17bd9837128024425.NginxMailingListEnglish@forum.nginx.org> Hi, have been trying to read the request body in my module, using the information I have gotten from this forum and some blogs and basically came up with the following functions as can be seen in this post http://forum.nginx.org/read.php?11,245953. The problem is that the request_body.bufs seem to contain no data. If I check the request body with the following conf: log_format my_tracking $request_body; location /dd { access_log logs/postdata.log my_tracking; proxy_pass http://127.0.0.1:8081/temp_service; } location /demo { access_log logs/postdata.log my_tracking; demo; } I get the request_boody logged in the first /dd post request but in the second /demo I do not get anything logged if I post data Use the following command the trying it out: curl -v --request POST -d "a=b" http://localhost:8080/dd Anyone that have an idea what can be wrong? Best Regards Christer Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246098,246098#msg-246098 From coderman at gmail.com Tue Jan 7 17:35:56 2014 From: coderman at gmail.com (coderman) Date: Tue, 7 Jan 2014 09:35:56 -0800 Subject: OT: OpenSSL 1.0.1f In-Reply-To: References: Message-ID: On Mon, Jan 6, 2014 at 2:04 PM, Lukas Tribus wrote: > Hi, > > >> It does not look like 1.0.1f changed the default behavior of >> ENGINE_rdrand (coderman's been following it). > > Yes it did, rdrand is no longer enabled by default. Here [1] is > the backport in the OpenSSL_1_0_1-stable head [2]. > > At least Debian [3] and Ubuntu backported this as well. OpenSSL makes ZERO mention of this fix anywhere in the 1.0.1f release itself, only the git history itself provides clue. Tor released an update to intentionally work around this issue with notice to relay and hidden service operators who may have been affected; Debian and Ubuntu disabled via backport, and explicitly called this out in their security errata (thank you all!). however, debian and ubuntu neglected to mention packages that may have been affected by generating long lived keys during a vulnerable configuration (boo!). in any case, end result: use 1.0.1f and be happy best regards, From coderman at gmail.com Tue Jan 7 17:41:19 2014 From: coderman at gmail.com (coderman) Date: Tue, 7 Jan 2014 09:41:19 -0800 Subject: OT: OpenSSL 1.0.1f In-Reply-To: References: Message-ID: On Tue, Jan 7, 2014 at 9:35 AM, coderman wrote: >... > in any case, end result: use 1.0.1f and be happy and if concerned that your OS distribution or upstream OpenSSL lacks this fix, confirm yourself via openssl-1.0.1f/crypto/engine/eng_rdrand.c in patched src if you see !ENGINE_set_flags(e, ENGINE_FLAGS_NO_REGISTER_ALL) in the near bottom of file static int bind_helper(ENGINE *e){} definition, then you are safe from accidental use. c.f. good ver: openssl-1.0.1f/crypto/engine/eng_rdrand.c static int bind_helper(ENGINE *e) { if (!ENGINE_set_id(e, engine_e_rdrand_id) || !ENGINE_set_name(e, engine_e_rdrand_name) || !ENGINE_set_flags(e, ENGINE_FLAGS_NO_REGISTER_ALL) || !ENGINE_set_init_function(e, rdrand_init) || !ENGINE_set_RAND(e, &rdrand_meth) ) return 0; return 1; } From nginx-forum at nginx.us Tue Jan 7 19:43:19 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 07 Jan 2014 14:43:19 -0500 Subject: OT: OpenSSL 1.0.1f In-Reply-To: References: Message-ID: <512120c3cad216d4e8532c01fa7d95de.NginxMailingListEnglish@forum.nginx.org> 1.0.1f against 1.5.9 mainline (today); ecp_nistputil.obj : warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library ecp_nistp521.obj : warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library ecp_nistp256.obj : warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library ecp_nistp224.obj : warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library fips_ers.obj : warning LNK4221: This object file does not define any previously undefined public symbols, so it will not be used by any link operation that consumes this library .\ssl\s23_clnt.c(286) : warning C4244: 'initializing' : conversion from 'time_t' to 'unsigned long', possible loss of data Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246071,246108#msg-246108 From nginx-forum at nginx.us Tue Jan 7 23:02:38 2014 From: nginx-forum at nginx.us (justink101) Date: Tue, 07 Jan 2014 18:02:38 -0500 Subject: proxy_pass check if 404, and return 404 Message-ID: <8d777d0738bdfd772d375cc2868004f4.NginxMailingListEnglish@forum.nginx.org> I am using proxy_pass to a dynamic subdomain: proxy_pass https://$remote_user.mydomain.io Is it possible to have the proxy_pass check if the response of https://$remote_user.mydomain.io is 404, if so, simply do: return 404; I.E. don't proxy, immediately return. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246112,246112#msg-246112 From contact at jpluscplusm.com Tue Jan 7 23:57:15 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 7 Jan 2014 23:57:15 +0000 Subject: proxy_pass check if 404, and return 404 In-Reply-To: <8d777d0738bdfd772d375cc2868004f4.NginxMailingListEnglish@forum.nginx.org> References: <8d777d0738bdfd772d375cc2868004f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: The *only* context I can see this being even slightly useful is where you are concerned about the unacceptably large size of the upstream server's 404 page. If this isn't your problem, I don't understand what you're trying to solve. From contact at jpluscplusm.com Wed Jan 8 00:10:16 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 8 Jan 2014 00:10:16 +0000 Subject: proxy_pass check if 404, and return 404 In-Reply-To: References: <8d777d0738bdfd772d375cc2868004f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 7 January 2014 23:57, Jonathan Matthews wrote: > The *only* context I can see this being even slightly useful is where > you are concerned about the unacceptably large size of the upstream > server's 404 page. > > If this isn't your problem, I don't understand what you're trying to solve. Either way, I should have said, look at the error_page directive. HTH, J From vbart at nginx.com Wed Jan 8 04:21:15 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 08 Jan 2014 08:21:15 +0400 Subject: proxy_pass check if 404, and return 404 In-Reply-To: References: <8d777d0738bdfd772d375cc2868004f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1862675.szrVqdqnOr@vbart-laptop> On Wednesday 08 January 2014 00:10:16 Jonathan Matthews wrote: > On 7 January 2014 23:57, Jonathan Matthews wrote: > > The *only* context I can see this being even slightly useful is where > > you are concerned about the unacceptably large size of the upstream > > server's 404 page. > > > > If this isn't your problem, I don't understand what you're trying to > > solve. > > Either way, I should have said, look at the error_page directive. ..and "proxy_intercept_errors". http://nginx.org/r/proxy_intercept_errors wbr, Valentin V. Bartenev From appa at perusio.net Wed Jan 8 08:11:43 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Wed, 8 Jan 2014 09:11:43 +0100 Subject: Problem trying to rewrite a URL In-Reply-To: References: Message-ID: AFAIK nginx has no directive for unescaping URLs. You could do it with Lua or go down a messy path of map directives listing all arguments with permutations. Rather what you need to do is fix your application so that it accepts escaped URLs. They're standard and there's no reason why it shouldn't IMO. Le 6 janv. 2014 05:55, "hoidulich" a ?crit : > Hi all, > > I have a problem trying to rewrite a URL. > > It should be pretty straightforward but it has been taking me hours > searching. > > Google indexed urls containing ";", which gets escaped to %3b: > > http://hoidulich.com/index.php?action=tagged%3bid=254715%3btag=vietholiday > (the original url that works is > http://hoidulich.com/index.php?action=tagged;id=254715;tag=vietholiday) > > How can I fix it? > Thank you very much. > Cao Tri. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246052,246052#msg-246052 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chalx.chalx at gmail.com Wed Jan 8 09:07:50 2014 From: chalx.chalx at gmail.com (chAlx) Date: Wed, 08 Jan 2014 13:07:50 +0400 Subject: Empty error log Message-ID: <52CD1566.5050506@gmail.com> I have installed Nginx for a small web server with static content. It works fine but the site error log file is always empty. All errors such as 404 are written to the access log only. File grants are the same for access and error logs. I've tried to change error_log param from "error" to "warn" and that didn't help. But changing it to "debug" was successful: log was filled with debug info (but no error info). My config files: /etc/nginx/nginx.conf: user www; worker_processes 2; pid /var/run/nginx.pid; events { worker_connections 768; } http { sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; server_tokens off; include /etc/nginx/mime.types; default_type application/octet-stream; access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; gzip on; gzip_disable "msie6"; gzip_min_length 200; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/rdf+xml; include/etc/nginx/conf.d/*.conf; # None include/etc/nginx/sites-enabled/*; # One } /etc/nginx/sites-available/mysite.conf: server { listen 80; root /var/www/mysite; index index.html index.htm; server_name mysite.org; access_log /var/log/nginx/mysite_access.log combined; error_log /var/log/nginx/mysite_error.log error; log_subrequest on; location / { try_files $uri $uri/ =404; autoindex off; } } From vbart at nginx.com Wed Jan 8 09:19:15 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 08 Jan 2014 13:19:15 +0400 Subject: Empty error log In-Reply-To: <52CD1566.5050506@gmail.com> References: <52CD1566.5050506@gmail.com> Message-ID: <6095750.regsHMoXXa@vbart-laptop> On Wednesday 08 January 2014 13:07:50 chAlx wrote: > I have installed Nginx for a small web server with static content. It works > fine but the site error log file is always empty. All errors such as 404 > are written to the access log only. File grants are the same for access and > error logs. > > I've tried to change error_log param from "error" to "warn" and that didn't > help. But changing it to "debug" was successful: log was filled with debug > info (but no error info). > > My config files: [..] > /etc/nginx/sites-available/mysite.conf: > > server { > listen 80; > root /var/www/mysite; > index index.html index.htm; > server_name mysite.org; > access_log /var/log/nginx/mysite_access.log combined; > error_log /var/log/nginx/mysite_error.log error; > log_subrequest on; > location / { > try_files $uri $uri/ =404; You should remove "try_files" directive if you want to see file access errors in the error log. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Jan 8 09:22:40 2014 From: nginx-forum at nginx.us (rrrrcccc) Date: Wed, 08 Jan 2014 04:22:40 -0500 Subject: How to combine try_files with multiple proxy destinations Message-ID: <552e6b2529a268dc7b6ac5138e88128f.NginxMailingListEnglish@forum.nginx.org> I have the folllowing requirement: 1. if /usr/share/nginx/html/maintenance.html exists, then always show this file to browser. 2. if this is the static file which located in the sub-directories of /usr/share/nginx/html/, then show this static file to browser. 3. if the URI begins with /testapp1/, then proxy to http://127.0.0.1:8080; else proxy to http://127.0.0.1:8081 And I tried the following two configurations but both failed: 1. Config1 server { ... root /usr/share/nginx/html/; location / { try_files /maintenance.html $uri @proxy; } location @proxy { location /testapp/ { proxy_pass http://127.0.0.1:8080; } location / { proxy_pass http://127.0.0.1:8081 } } } 2. Config2 server { ... root /usr/share/nginx/html/; try_files /maintenance.html $uri @proxy; location /testapp/ { proxy_pass http://127.0.0.1:8080; } location / { proxy_pass http://127.0.0.1:8081 } } For Config1, it throws the following error: nginx: [emerg] location "/testapp/" cannot be inside the named location "@proxy" in /etc/nginx/nginx.conf:41 For Config2, it does not work as expected: it seems that when try_files is together with location directives, try_file has the lowerest priority and so I can never get /maintenance.html shown. So can anybody know how to config to meet my requirement ? Thanks in advance! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246125,246125#msg-246125 From artemrts at ukr.net Wed Jan 8 09:33:11 2014 From: artemrts at ukr.net (wishmaster) Date: Wed, 08 Jan 2014 11:33:11 +0200 Subject: Empty error log In-Reply-To: <52CD1566.5050506@gmail.com> References: <52CD1566.5050506@gmail.com> Message-ID: <1389173228.461303469.bne02fn8@frv34.ukr.net> --- Original message --- From: "chAlx" Date: 8 January 2014, 11:08:35 > I have installed Nginx for a small web server with static content. It works fine but the site error log file is always empty. All errors such as 404 are written to the access log only. File grants are the same for access and error logs. > > I've tried to change error_log param from "error" to "warn" and that didn't help. But changing it to "debug" was successful: log was filled with debug info (but no error info). > > My config files: > > > /etc/nginx/nginx.conf: > > user www; > worker_processes 2; > pid /var/run/nginx.pid; > > events { > worker_connections 768; > } > > http { > sendfile on; > tcp_nopush on; > tcp_nodelay on; > keepalive_timeout 65; > types_hash_max_size 2048; > server_tokens off; > include /etc/nginx/mime.types; > default_type application/octet-stream; > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > gzip on; > gzip_disable "msie6"; > gzip_min_length 200; > gzip_types text/plain text/css application/json > application/x-javascript text/xml application/xml application/xml+rss > text/javascript application/rdf+xml; > > include/etc/nginx/conf.d/*.conf; # None > include/etc/nginx/sites-enabled/*; # One > } > > > /etc/nginx/sites-available/mysite.conf: > > server { > listen 80; > root /var/www/mysite; > index index.html index.htm; > server_name mysite.org; > access_log /var/log/nginx/mysite_access.log combined; > error_log /var/log/nginx/mysite_error.log error; > log_subrequest on; > location / { > try_files $uri $uri/ =404; > autoindex off; > } Use try_files if you need rewriting rules. E.g. http://myshop.com/goods/foo/bar need to be rewritten in /index.php?router=goods/foo/bar and passed to php-server. From nginx-forum at nginx.us Wed Jan 8 10:08:22 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 08 Jan 2014 05:08:22 -0500 Subject: OT: OpenSSL 1.0.1f In-Reply-To: <512120c3cad216d4e8532c01fa7d95de.NginxMailingListEnglish@forum.nginx.org> References: <512120c3cad216d4e8532c01fa7d95de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7d25f3c0d938f0110e6f367f80f881f4.NginxMailingListEnglish@forum.nginx.org> itpp2012 Wrote: ------------------------------------------------------- > 1.0.1f against 1.5.9 mainline (today); > > .\ssl\s23_clnt.c(286) : warning C4244: 'initializing' : conversion > from 'time_t' to 'unsigned long', possible loss of data Also found by http://rt.openssl.org/Ticket/Display.html?id=3220 and fixed in .\ssl\s23_clnt.c(line 286): [...] int ssl_fill_hello_random(SSL *s, int server, unsigned char *result, int len) [...] if (send_time) { // unsigned long Time = time(NULL); unsigned long Time = (unsigned long)time(NULL); unsigned char *p = result; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246071,246128#msg-246128 From chencw1982 at gmail.com Wed Jan 8 14:49:33 2014 From: chencw1982 at gmail.com (Chuanwen Chen) Date: Wed, 8 Jan 2014 22:49:33 +0800 Subject: [ANNOUNCE] Tengine-2.0.0 is released Message-ID: Hi folks, We are glad to announce that Tengine-2.0.0 (development version) has been released. You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.0.0.tar.gz The highlights of this release are support for SPDY v3 (flow control), and enhanced DSO module. Tengine is now based on Nginx-1.4.4. The full changelog is as follows: *) Feature: now DSO module does not need the original source code or compiler options when compiling a new module. (monadbobo) *) Feature: added support for SPDY v3, and SPDY/HTTP servers can listen on the same port. (lilbedwin?chobits) *) Feature: added support for setting retries for upstream servers (proxy, memcached, fastcgi, scgi, uwsgi). (supertcy) *) Feature: now tfs module can report access status to rcs while keepalive. (zhcn381) *) Feature: now the directive "if" supports ">", "<", ">=", "<=" operators for numeric comparison. (flygoast) *) Feature: now upstream health check module uses keep-alive connections. (lilbedwin) *) Feature: now trim module can handle SSI and ESI comments properly. (taoyuanyuan) *) Feature: now directive "expires_by_types" supports wildcard such as "text/*". (zhcn381) *) Feature: added variables starting with "$base64_decode_" to encode variables in base64. (yzprofile) *) Feature: added variables starting with "$md5_encode_" to encode variables in md5. (yzprofile) *) Feature: added a variable "$time_http" to get the current HTTP time. (flygoast) *) Feature: added a variable "$full_request" to get the original request URL with scheme and host. (yzprofile) *) Feature: added variables starting with "$escape_uri_" to escape variables into formal URL syntax. (yzprofile) *) Feature: added a variable "$raw_uri" to get the original URI without arguments. (flygoast) *) Feature: added support for logging subrequests in nanoseconds. (jinglong) *) Feature: added a new API function to encode URL into base64. (lilbedwin) *) Change: merged changes between nginx-1.2.9 and nginx-1.4.4. (cfsego) *) Change: now stub_status module does not log subrequests. (jinglong) *) Bugfix: fixed a bug in footer module when reading a response with a "Content-Encoding" header. (yaoweibin) *) Bugfix: fixed a bug when "client_body_postpone_size" is set to 0. (yaoweibin) *) Bugfix: fixed a compilation warning of Lua module. (diwayou) For those who don't know Tengine, it is a free and open source distribution of Nginx with some advanced features. See our website for more details: http://tengine.taobao.org Have fun! Regards, -------------- next part -------------- An HTML attachment was scrubbed... URL: From j at jonathanleighton.com Wed Jan 8 14:57:18 2014 From: j at jonathanleighton.com (Jon Leighton) Date: Wed, 08 Jan 2014 14:57:18 +0000 Subject: proxy_cache incorrectly returning 304 Not Modified Message-ID: <52CD674E.5080007@jonathanleighton.com> Hi there, I work on a site which has nginx in front of a Rails application, and we use proxy_cache. For the home page, our application returns a "max-age=600, public" Cache-Control header, and we have nginx configured to cache the response using proxy_cache. This generally works fine, but last night nginx started to respond with 304 Not Modified to requests that *didn't* include any caching headers (If-Modified-Since or ETag). Our Pingdom alerts showed this issue, and here is the request/response captured: GET / HTTP/1.0 User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/) Host: loco2.com 304 Not Modified Cache-Control: max-age=600, public Date: Tue, 07 Jan 2014 22:59:32 GMT ETag: "900e1f11422519337c9ed25fad299ce0" Server: nginx/1.4.4 Status: 304 Not Modified Strict-Transport-Security: max-age=31536000 X-Cache-Status: HIT X-Content-Type-Options: nosniff X-Frame-Options: SAMEORIGIN X-Request-Id: c7ee78ab-df49-4467-bced-753b2cc622ab X-UA-Compatible: chrome=1 X-XSS-Protection: 1; mode=block Connection: Close Does this look like a bug? Or could it be a configuration issue? I can't think of any reason why this should be the correct thing for the proxy cache to do. Many thanks, Jon -- http://jonathanleighton.com/ From denis.papathanasiou at gmail.com Thu Jan 9 01:15:47 2014 From: denis.papathanasiou at gmail.com (Denis Papathanasiou) Date: Wed, 8 Jan 2014 20:15:47 -0500 Subject: Time out errors using uwsgi with ngnix on debian 7 (wheezy) Message-ID: I've installed nginx via apt, using the nginx stable pkg as described here: http://nginx.org/en/linux_packages.html#stable It works perfectly for serving static files using the default configuration. Next, I installed uwsgi from source, as described here: https://pypi.python.org/pypi/uWSGI/1.2.3 When I do the steps in the python quickstart guide -- http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html -- and open my browser to localhost:9090, everything works as expected. When I change the nginx config file to use uwsgi_pass to localhost:9090 as described here -- http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html#putting-behind-a-full-webserver-- however, I get time out errors: > upstream timed out (110: Connection timed out) while reading response header from upstream It is as though nginx is *not* passing those requests to the uwsgi process (which is still running). Here is the content of server{ } inside the nginx config file: location / { include uwsgi_params; uwsgi_pass localhost:9090; } Any ideas on what the problem might be? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanotek at bsdbox.co Thu Jan 9 04:57:43 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 15:57:43 +1100 Subject: "Primary script unknown" wp-login.php Message-ID: <52CE2C47.8080905@bsdbox.co> As subject says: I cannot access wp-admin due to above [error]. Otherwise, site functions as it should. See error log: 2014/01/09 04:31:23 [error] 35759#0: *5254 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ipaddress, server: hostname, request: "GET /wordpress/wp-login.php HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "hostname", referrer: "http://hostname/" See access.log [09/Jan/2014:04:31:23 +0000] "GET /wordpress/wp-login.php HTTP/1.1" 404 27 "hostname" "useragent" "-" See nginx.conf user www; worker_processes 1; error_log logs/error.log info; pid /var/run/nginx.pid; events { worker_connections 768; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; tcp_nopush off; keepalive_timeout 65; gzip off; server { listen 80; listen 443 ssl; server_name hostname; root /usr/local/www; ssl_certificate /path/to/crt-chain.pem; ssl_certificate_key /path/to/privatekey.pem; ssl_dhparam /pth/to/dhparam4096.pem; server_name hostname www.hostnam; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; ssl_prefer_server_ciphers on; access_log logs/access.log main; charset utf-8; location / { root /usr/local/www/wordpress; try_files $uri $uri/ /index.php?q=$uri&$args; index index.php index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } location ~ \.(js|css|png|jpg|jpeg|gif|ico|html)$ { expires max; } location ~ \.php$ { root html; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/local/www/wordpress$fastcgi_script_name; include fastcgi_params; } location ~ /\.ht { deny all; } } } See fastcgi_params: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; fastcgi_param REDIRECT_STATUS 200; Please advise my mistake and how to fix. Thank you. From aidan at aodhandigital.com Thu Jan 9 05:16:52 2014 From: aidan at aodhandigital.com (Aidan Scheller) Date: Wed, 8 Jan 2014 23:16:52 -0600 Subject: OT: OpenSSL 1.0.1f In-Reply-To: <7d25f3c0d938f0110e6f367f80f881f4.NginxMailingListEnglish@forum.nginx.org> References: <512120c3cad216d4e8532c01fa7d95de.NginxMailingListEnglish@forum.nginx.org> <7d25f3c0d938f0110e6f367f80f881f4.NginxMailingListEnglish@forum.nginx.org> Message-ID: Does using the --with-openssl-opt="enable-ec_nistp_64_gcc_128" configure parameter without the *--with-openssl *cause a static version of OpenSSL to be created for Nginx? I'm unsure as the configuration summary then lists that the system library is being used. Thanks, Aidan On Wed, Jan 8, 2014 at 4:08 AM, itpp2012 wrote: > itpp2012 Wrote: > ------------------------------------------------------- > > 1.0.1f against 1.5.9 mainline (today); > > > > .\ssl\s23_clnt.c(286) : warning C4244: 'initializing' : conversion > > from 'time_t' to 'unsigned long', possible loss of data > > Also found by http://rt.openssl.org/Ticket/Display.html?id=3220 > and fixed in .\ssl\s23_clnt.c(line 286): > > [...] > int ssl_fill_hello_random(SSL *s, int server, unsigned char *result, int > len) > [...] > if (send_time) > { > // unsigned long Time = time(NULL); > unsigned long Time = (unsigned long)time(NULL); > unsigned char *p = result; > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246071,246128#msg-246128 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 9 05:52:16 2014 From: nginx-forum at nginx.us (humank) Date: Thu, 09 Jan 2014 00:52:16 -0500 Subject: How do i get the request body ? In-Reply-To: <20140103034033.GE95113@mdounin.ru> References: <20140103034033.GE95113@mdounin.ru> Message-ID: <5e58bfc08ec754417e22eb054208bfe3.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Wed, Jan 01, 2014 at 11:44:24PM -0500, humank wrote: > > > Hello guys, > > > > I'm developing a nginx module, the intent is to get the > request > > body, then write some response depends on what request body is. > > I've called the method ngx_http_read_client_request_body (r, > > ngx_http_myModule_handler); > > > > Since this code, i want to get the real request body in > > ngx_http_myModule_handler() > > Here are my codes ... > > > > void ngx_http_myModule_handler(ngx_http_request_t *r) > > { > > ngx_http_finalize_request(r, NGX_DONE); > > > > if(!(r->request_body->bufs == NULL)){ > > ngx_log_error(NGX_LOG_ERR, r->connection->log, 0, "request > is not > > empty."); > > > > } > > } > > > > the questions is , how can i get the r->request_body->bufs to > char * ? > > A request body is available as a series of > buffers in r->request_body->bufs. To understand more about > buffers, try reading Evan Miller's guide as available from here: > > http://www.evanmiller.org/nginx-modules-guide.html > > Some example code which uses r->request_body->bufs to access > request body contents as available in memory can be found in > src/http/ngx_http_variables.c, in the > ngx_http_variable_request_body() function. > > Note though, that depending on a configuration and a request, the > request body may not be available in memory at all (that is, it > will be in temporary file, and there will be a file buffer in > r->request_body->bufs). > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi Maxim, Thanks for your reply, i have get the body from the sample code src/http/ngx_http_variables.c. Next time i will try to grep all the source code first while facing the unknown problems :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245952,246137#msg-246137 From nginx-forum at nginx.us Thu Jan 9 08:51:26 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 09 Jan 2014 03:51:26 -0500 Subject: OT: OpenSSL 1.0.1f In-Reply-To: References: Message-ID: Aidan Scheller Wrote: ------------------------------------------------------- > Does using the --with-openssl-opt="enable-ec_nistp_64_gcc_128" > configure parameter without the *--with-openssl *cause a static > version of > OpenSSL to be created for Nginx? I'm unsure as the configuration > summary > then lists that the system library is being used. Who is reporting which lib is used? openssl is compiled independently as part of the nginx build process where you can choose a static or dynamic build. What options does openssl makefile have after configure? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246071,246138#msg-246138 From Pekka.Panula at sofor.fi Thu Jan 9 09:29:11 2014 From: Pekka.Panula at sofor.fi (Pekka.Panula at sofor.fi) Date: Thu, 9 Jan 2014 11:29:11 +0200 Subject: SSL ciphers, disable or not to disable RC4? Message-ID: Hi My current values in my nginx configuration for ssl_protocols/ciphers what i use is this: ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; What are todays recommendations for ssl_ciphers option for supporting all current OSes and browsers, even Windows XP users with IE? Can i disable RC4? My nginx is compiled with OpenSSL v1.0.1. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nanotek at bsdbox.co Thu Jan 9 09:42:04 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 20:42:04 +1100 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: References: Message-ID: <52CE6EEC.10609@bsdbox.co> On 9/01/2014 8:29 PM, Pekka.Panula at sofor.fi wrote: > Hi > > My current values in my nginx configuration for ssl_protocols/ciphers > what i use is this: > > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers RC4:HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > What are todays recommendations for ssl_ciphers option for supporting > all current OSes and browsers, even Windows XP users with IE? > Can i disable RC4? > > My nginx is compiled with OpenSSL v1.0.1. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > The current consensus suggests that mitigating RC4 vulnerabilities is more important than BEAST attack concerns, which are all but mitigated client-side. If you want to deploy protocols to cater for a wide range of browsers (including XP IE) implement the following (that will fall-back to RC4 as a last resort): ssl_ciphers EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS +RC4 RC4 Otherwise, exclude RC4 with the following: ssl_ciphers EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4 -- syn.bsdbox.co From noloader at gmail.com Thu Jan 9 09:52:35 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 9 Jan 2014 04:52:35 -0500 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: References: Message-ID: On Thu, Jan 9, 2014 at 4:29 AM, wrote: > Hi > > My current values in my nginx configuration for ssl_protocols/ciphers what i > use is this: > > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers RC4:HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > What are todays recommendations for ssl_ciphers option for supporting all > current OSes and browsers, even Windows XP users with IE? > Can i disable RC4? > The paper of interest is from AlFardan, Bernstein, et al: "On the Security of RC4 in TLS and WPA" (http://cr.yp.to/streamciphers/rc4biases-20130708.pdf?). From the paper: ... While the RC4 algorithm is known to have a variety of cryptographic weaknesses (see [26] for an excellent survey), it has not been previously explored how these weaknesses can be exploited in the context of TLS. Here we show that new and recently discovered biases in the RC4 keystream do create serious vulnerabilities in TLS when using RC4 as its encryption algorithm. I don't believe there's a need for SSLv3 anymore either. TLSv1.0 is pretty much ubiquitous, and its at nearly 100% for modern browser, clients and servers. https://en.wikipedia.org/wiki/Transport_Layer_Security#Applications_and_adoption. You also migth want to include "!eNULL:!ADH:!ECADH:!MEDIUM:!LOW:!EXP'. eNULL is great for performance, but it has a few problems for privacy. From luky-37 at hotmail.com Thu Jan 9 09:53:03 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 9 Jan 2014 10:53:03 +0100 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: References: Message-ID: Hi, > My current values in my nginx configuration for ssl_protocols/ciphers > what i use is this: > > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers RC4:HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > What are todays recommendations for ssl_ciphers option for supporting > all current OSes and browsers, even Windows XP users with IE? > Can i disable RC4? Personally, I'm following Mozillas deployment recommendations: https://wiki.mozilla.org/Security/Server_Side_TLS Regards, Lukas From black.fledermaus at arcor.de Thu Jan 9 10:03:11 2014 From: black.fledermaus at arcor.de (basti) Date: Thu, 09 Jan 2014 11:03:11 +0100 Subject: Nginx as reverse Proxy, remove X-Frame-Options header Message-ID: <52CE73DF.80200@arcor.de> Hello, I have a closed-source Webapp that run on an IIS-Webserver and send a "X-Frame-Options: SAMEORIGIN" header. I also have to implement this Webapp in my own, Frame based Application. So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header is still send. How can I remove his header? I have try "proxy_hide_header X-Frame-Options;" without success. Regards, Basti From noloader at gmail.com Thu Jan 9 10:04:29 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 9 Jan 2014 05:04:29 -0500 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: References: Message-ID: On Thu, Jan 9, 2014 at 4:53 AM, Lukas Tribus wrote: >> My current values in my nginx configuration for ssl_protocols/ciphers >> what i use is this: >> >> ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; >> ssl_ciphers RC4:HIGH:!aNULL:!MD5; >> ssl_prefer_server_ciphers on; >> >> What are todays recommendations for ssl_ciphers option for supporting >> all current OSes and browsers, even Windows XP users with IE? >> Can i disable RC4? > > Personally, I'm following Mozillas deployment recommendations: > https://wiki.mozilla.org/Security/Server_Side_TLS Mozilla claims RC4 is "High Grade" encryption even though it has serious vulnerabilities when used in TLS (https://bugzilla.mozilla.org/show_bug.cgi?id=947149). They remove 3-key TDEA with 112-bits of security (which is currently approved by ECRYPT, ISO/IEC, NIST, and NESSIE). Related, their browser claim plain text HTTP is good (no user warnings), and HTTPS with a self signed is bad (big red flags for opportunistic encryption). When did plain text become better than cipher text? And they also rewarded Trustwave's bad behavior way back when (https://bugzilla.mozilla.org/show_bug.cgi?id=724929). I'm not sure I would follow Mozilla's lead. Jeff From noloader at gmail.com Thu Jan 9 10:04:31 2014 From: noloader at gmail.com (Jeffrey Walton) Date: Thu, 9 Jan 2014 05:04:31 -0500 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: References: Message-ID: On Thu, Jan 9, 2014 at 4:53 AM, Lukas Tribus wrote: >> My current values in my nginx configuration for ssl_protocols/ciphers >> what i use is this: >> >> ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; >> ssl_ciphers RC4:HIGH:!aNULL:!MD5; >> ssl_prefer_server_ciphers on; >> >> What are todays recommendations for ssl_ciphers option for supporting >> all current OSes and browsers, even Windows XP users with IE? >> Can i disable RC4? > > Personally, I'm following Mozillas deployment recommendations: > https://wiki.mozilla.org/Security/Server_Side_TLS Mozilla claims RC4 is "High Grade" encryption even though it has serious vulnerabilities when used in TLS (https://bugzilla.mozilla.org/show_bug.cgi?id=947149). They remove 3-key TDEA with 112-bits of security (which is currently approved by ECRYPT, ISO/IEC, NIST, and NESSIE). Related, their browser claim plain text HTTP is good (no user warnings), and HTTPS with a self signed is bad (big red flags for opportunistic encryption). When did plain text become better than cipher text? And they also rewarded Trustwave's bad behavior way back when (https://bugzilla.mozilla.org/show_bug.cgi?id=724929). I'm not sure I would follow Mozilla's lead. Jeff From contact at jpluscplusm.com Thu Jan 9 10:21:43 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 9 Jan 2014 10:21:43 +0000 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CE73DF.80200@arcor.de> References: <52CE73DF.80200@arcor.de> Message-ID: On 9 January 2014 10:03, basti wrote: > Hello, > > I have a closed-source Webapp that run on an IIS-Webserver and send a > "X-Frame-Options: SAMEORIGIN" header. > I also have to implement this Webapp in my own, Frame based Application. > > So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header > is still send. > How can I remove his header? > I have try "proxy_hide_header X-Frame-Options;" without success. You'll find the answer in the documentation: http://wiki.nginx.org/NginxHttpProxyModule#proxy_set_header J From nanotek at bsdbox.co Thu Jan 9 10:23:56 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 21:23:56 +1100 Subject: PHP below server root not served In-Reply-To: References: Message-ID: <52CE78BC.7050500@bsdbox.co> I am having trouble configuring nginx to serve up PHP from outside of the server document root. For example, this site's root is /usr/local/www/site1/wordpress and phpMyAdmin is located in /usr/local/www/phpMyAdmin. I cannot access servername.com/phpmyadmin. nginx logs the following error: ==================================================================== 2014/01/09 09:56:20 [error] 39387#0: *6160 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: clientIP, server: serverhostname, request: "GET /phpmyadmin/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "serverhostname" ==================================================================== The WordPress site, however, is served without error. Therefore, nginx is refusing to serve PHP from outside of the server document root. Please see the following configuration: nginx.conf ==================================================================== user www www; worker_processes 1; pid /var/run/nginx.pid; error_log logs/error.log info; events { worker_connections 768; use kqueue; } http { include mime.types; default_type application/octet-stream; access_log logs/access.log main; sendfile on; keepalive_timeout 65; gzip on; server { listen 80; listen 443 ssl; server_name servername.com; ssl_certificate crt-chain.pem; ssl_certificate_key key.pem; ssl_dhparam dhparam4096.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !a NULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; ssl_prefer_server_ciphers on; root /usr/local/www/site1/wordpress; index index.php; location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_param SCRIPT_FILENAME /usr/local/www/site1/wordpress$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } #----------------------PROBLEM AREA----------------------# location /phpmyadmin/ { alias /usr/local/www/phpMyAdmin/; index index.php index.html; } location ~ ^/phpmyadmin/(.*\.php)$ { fastcgi_param PHP_ADMIN_VALUE open_basedir=/usr/local/www/phpMyAdmin; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/local/www/phpMyAdmin$fastcgi_script_name; include fastcgi_params; #----------------------PROBLEM AREA----------------------# } } } ==================================================================== I am obviously lacking some required configuration. Perhaps in nginx.conf, php-fpm.conf, or php.ini. Could someone please advise me of my error and how to correct it? Thank you. Strangely, in Apache, I simply required an alias entry, such as: Alias /phpmyadmin "/usr/local/www/phpMyAdmin" Options None AllowOverride None Require all granted in my httpd.conf file even when the server root was above this location and with the exact same PHP settings (php.ini) as I now have with nginx. Server intel: # uname -a FreeBSD hostname 9.2-RELEASE FreeBSD 9.2-RELEASE #0 r255898: Thu Sep 26 22:50:31 UTC 2013 root at bake.isc.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64 # nginx -V nginx version: nginx/1.5.7 TLS SNI support enabled configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx-error.log --user=www --group=www --with-ipv6 --with-google_perftools_module --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx-access.log --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_image_filter_module --with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_stub_status_module --with-http_sub_module --with-http_xslt_module --with-pcre --with-http_spdy_module --with-mail --with-mail_ssl_module --with-http_ssl_module -- syn.bsdbox.co From r1ch+nginx at teamliquid.net Thu Jan 9 10:27:58 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Thu, 9 Jan 2014 11:27:58 +0100 Subject: PHP below server root not served In-Reply-To: <52CE78BC.7050500@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> Message-ID: > fastcgi_pass unix:/tmp/php-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /usr/local/www/phpMyAdmin$fastcgi_script_name; > include fastcgi_params; > What's in your fastcgi_params? Is it overriding your SCRIPT_FILENAME perhaps? From nanotek at bsdbox.co Thu Jan 9 10:32:12 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 21:32:12 +1100 Subject: PHP below server root not served In-Reply-To: References: <52CE78BC.7050500@bsdbox.co> Message-ID: <52CE7AAC.7030205@bsdbox.co> On 9/01/2014 9:27 PM, Richard Stanway wrote: >> fastcgi_pass unix:/tmp/php-fpm.sock; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME >> /usr/local/www/phpMyAdmin$fastcgi_script_name; >> include fastcgi_params; >> > > What's in your fastcgi_params? Is it overriding your SCRIPT_FILENAME perhaps? > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Thanks for replying. Here is my fastcgi_params file: fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; It is the default nginx file, unaltered. -- syn.bsdbox.co From francis at daoine.org Thu Jan 9 10:55:31 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Jan 2014 10:55:31 +0000 Subject: PHP below server root not served In-Reply-To: <52CE78BC.7050500@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> Message-ID: <20140109105531.GB19804@craic.sysops.org> On Thu, Jan 09, 2014 at 09:23:56PM +1100, nano wrote: Hi there, > The WordPress site, however, is served without error. Therefore, nginx > is refusing to serve PHP from outside of the server document root. nginx doesn't serve php. nginx tells the fastcgi server what you configure it to tell. (That's an important difference from what Apache does.) > Please see the following configuration: One request is handled in one location. For this request, the one location that you want to be used is not the one that nginx actually uses. > location / { > location ~ \.php$ { > location /phpmyadmin/ { > location ~ ^/phpmyadmin/(.*\.php)$ { > I am obviously lacking some required configuration. Perhaps in > nginx.conf, php-fpm.conf, or php.ini. Could someone please advise me of > my error and how to correct it? Thank you. http://nginx.org/r/location A request for /phpmyadmin/index.php will be handled in the second location above, not the fourth. Re-arrange the config file. (I'd suggest using "location ^~ /phpmyadmin/", and inside that using "location ~ \.php$"; but just re-ordering the regex blocks that you have should cause the location that you want to be chosen.) f -- Francis Daly francis at daoine.org From francis at daoine.org Thu Jan 9 11:01:24 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Jan 2014 11:01:24 +0000 Subject: "Primary script unknown" wp-login.php In-Reply-To: <52CE2C47.8080905@bsdbox.co> References: <52CE2C47.8080905@bsdbox.co> Message-ID: <20140109110124.GC19804@craic.sysops.org> On Thu, Jan 09, 2014 at 03:57:43PM +1100, nano wrote: Hi there, > As subject says: I cannot access wp-admin due to above [error]. > Otherwise, site functions as it should. > location ~ \.php$ { > fastcgi_param SCRIPT_FILENAME > /usr/local/www/wordpress$fastcgi_script_name; > } The request /wordpress/wp-login.php should be handled in your "location ~ \.php$" block, where you tell nginx to tell the fasctcgi server to process the file /usr/local/www/wordpress/wordpress/wp-login.php. Is that the file that you want the fastcgi server to process? f -- Francis Daly francis at daoine.org From nanotek at bsdbox.co Thu Jan 9 11:44:35 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 22:44:35 +1100 Subject: PHP below server root not served In-Reply-To: <20140109105531.GB19804@craic.sysops.org> References: <52CE78BC.7050500@bsdbox.co> <20140109105531.GB19804@craic.sysops.org> Message-ID: <52CE8BA3.6070207@bsdbox.co> On 9/01/2014 9:55 PM, Francis Daly wrote: > On Thu, Jan 09, 2014 at 09:23:56PM +1100, nano wrote: > > Hi there, > > One request is handled in one location. > > For this request, the one location that you want to be used is not the > one that nginx actually uses. > >> location / { >> location ~ \.php$ { >> location /phpmyadmin/ { >> location ~ ^/phpmyadmin/(.*\.php)$ { > > > http://nginx.org/r/location > > A request for /phpmyadmin/index.php will be handled in the second location > above, not the fourth. > > Re-arrange the config file. > > (I'd suggest using "location ^~ /phpmyadmin/", and inside that using > "location ~ \.php$"; but just re-ordering the regex blocks that you have > should cause the location that you want to be chosen.) > > Hi. Thank you for your response. I had previously read the documentation you reference but I am afraid I am none the wiser, likely due to my own failure to comprehend. Similarly, I am finding it difficult to implement your suggestion. Would you please provide an example of this arrangement I should have? I attempted multiple variations of what I believed your instructions suggested (nesting \.php$ location inside the /phpmyadmin location); such as: location ^~ /phpmyadmin { alias /usr/local/www/phpMyAdmin/; fastcgi_param DOCUEMNT_ROOT /usr/local/www/phpMyAdmin; fastcgi_param PATH_INFO $fastcgi_script_name; location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } } I implemented many varieties of this location nesting. All resulted in the same inability to access the URI: sitename.com/phpmyadmin. But also made the WordPress site (servername.com) unavailable. Instead, it presented a dialog offering to download the 'application/octet-stream'. Please provide the configuration you suggest. Thank you. -- syn.bsdbox.co From mdounin at mdounin.ru Thu Jan 9 11:44:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 15:44:57 +0400 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CE73DF.80200@arcor.de> References: <52CE73DF.80200@arcor.de> Message-ID: <20140109114457.GA1835@mdounin.ru> Hello! On Thu, Jan 09, 2014 at 11:03:11AM +0100, basti wrote: > Hello, > > I have a closed-source Webapp that run on an IIS-Webserver and send a > "X-Frame-Options: SAMEORIGIN" header. > I also have to implement this Webapp in my own, Frame based Application. > > So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header > is still send. > How can I remove his header? > I have try "proxy_hide_header X-Frame-Options;" without success. The proxy_hide_header directive is expected to work (and works here, just tested). If it doesn't work for you, you may want to provide more details, see http://wiki.nginx.org/Debugging for some hints. -- Maxim Dounin http://nginx.org/ From nanotek at bsdbox.co Thu Jan 9 11:46:32 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 22:46:32 +1100 Subject: "Primary script unknown" wp-login.php In-Reply-To: <20140109110124.GC19804@craic.sysops.org> References: <52CE2C47.8080905@bsdbox.co> <20140109110124.GC19804@craic.sysops.org> Message-ID: <52CE8C18.9040709@bsdbox.co> On 9/01/2014 10:01 PM, Francis Daly wrote: > On Thu, Jan 09, 2014 at 03:57:43PM +1100, nano wrote: > > Hi there, > >> As subject says: I cannot access wp-admin due to above [error]. >> Otherwise, site functions as it should. > >> location ~ \.php$ { >> fastcgi_param SCRIPT_FILENAME >> /usr/local/www/wordpress$fastcgi_script_name; >> } > > The request /wordpress/wp-login.php should be handled in your "location > ~ \.php$" block, where you tell nginx to tell the fasctcgi server to > process the file /usr/local/www/wordpress/wordpress/wp-login.php. > > Is that the file that you want the fastcgi server to process? > > f > I resolved this problem by making the /wordpress directory the server root. However, I now have the problem of /usr/local/www/phpMyAdmin being inaccessible, due to the same error. -- syn.bsdbox.co From mdounin at mdounin.ru Thu Jan 9 11:57:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 15:57:32 +0400 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: References: <52CE73DF.80200@arcor.de> Message-ID: <20140109115732.GB1835@mdounin.ru> Hello! On Thu, Jan 09, 2014 at 10:21:43AM +0000, Jonathan Matthews wrote: > On 9 January 2014 10:03, basti wrote: > > Hello, > > > > I have a closed-source Webapp that run on an IIS-Webserver and send a > > "X-Frame-Options: SAMEORIGIN" header. > > I also have to implement this Webapp in my own, Frame based Application. > > > > So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header > > is still send. > > How can I remove his header? > > I have try "proxy_hide_header X-Frame-Options;" without success. > > You'll find the answer in the documentation: > http://wiki.nginx.org/NginxHttpProxyModule#proxy_set_header The X-Frame-Options header is returned by a server-side application, hence the proxy_hide_header is correct solution, while proxy_set_header isn't. And, being pedantic, wiki != documentation. Here are links to the documentation: http://nginx.org/r/proxy_set_header http://nginx.org/r/proxy_hide_header -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Thu Jan 9 12:12:09 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 9 Jan 2014 12:12:09 +0000 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <20140109115732.GB1835@mdounin.ru> References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> Message-ID: On 9 January 2014 11:57, Maxim Dounin wrote: > Hello! > > On Thu, Jan 09, 2014 at 10:21:43AM +0000, Jonathan Matthews wrote: > >> On 9 January 2014 10:03, basti wrote: >> > Hello, >> > >> > I have a closed-source Webapp that run on an IIS-Webserver and send a >> > "X-Frame-Options: SAMEORIGIN" header. >> > I also have to implement this Webapp in my own, Frame based Application. >> > >> > So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header >> > is still send. >> > How can I remove his header? >> > I have try "proxy_hide_header X-Frame-Options;" without success. >> >> You'll find the answer in the documentation: >> http://wiki.nginx.org/NginxHttpProxyModule#proxy_set_header > > The X-Frame-Options header is returned by a server-side > application, hence the proxy_hide_header is correct solution, > while proxy_set_header isn't. My bad. I was pretty sure I'd had success with 'set foo ""' where 'hide' hadn't worked in the past. > And, being pedantic, wiki != documentation. Here are > links to the documentation: > > http://nginx.org/r/proxy_set_header > http://nginx.org/r/proxy_hide_header Ack that. I'll personally keep linking to the wiki until the documentation * is significantly better internally hyper-linked; * has documentation targeted soley towards the open source nginx, without having to skip to the end of each directive to check for "This functionality is available as part of our commercial subscription only"; * has useful pages such as IfIsEvil integrated into it. I may be wrong about that third one still needing doing, but I couldn't find IfIsEvil anywhere but the wiki. The presence of a top-level pointer on each wiki page to http://nginx.org/en/docs/ is pretty useless, too - it needs to point to the appropriate place in the docs to get people to start using them. Didn't you guys pick up several millions a while ago, which was announced as being somewhat earmarked for improving documentation? :-) From nanotek at bsdbox.co Thu Jan 9 12:24:03 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 23:24:03 +1100 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> Message-ID: <52CE94E3.5090608@bsdbox.co> On 9/01/2014 11:12 PM, Jonathan Matthews wrote: > On 9 January 2014 11:57, Maxim Dounin wrote: >> Hello! >> >> On Thu, Jan 09, 2014 at 10:21:43AM +0000, Jonathan Matthews wrote: >> >>> On 9 January 2014 10:03, basti wrote: >>>> Hello, >>>> >>>> I have a closed-source Webapp that run on an IIS-Webserver and send a >>>> "X-Frame-Options: SAMEORIGIN" header. >>>> I also have to implement this Webapp in my own, Frame based Application. >>>> >>>> So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header >>>> is still send. >>>> How can I remove his header? >>>> I have try "proxy_hide_header X-Frame-Options;" without success. >>> >>> You'll find the answer in the documentation: >>> http://wiki.nginx.org/NginxHttpProxyModule#proxy_set_header >> >> The X-Frame-Options header is returned by a server-side >> application, hence the proxy_hide_header is correct solution, >> while proxy_set_header isn't. > > My bad. I was pretty sure I'd had success with 'set foo ""' where > 'hide' hadn't worked in the past. > >> And, being pedantic, wiki != documentation. Here are >> links to the documentation: >> >> http://nginx.org/r/proxy_set_header >> http://nginx.org/r/proxy_hide_header > > Ack that. I'll personally keep linking to the wiki until the documentation > > * is significantly better internally hyper-linked; > * has documentation targeted soley towards the open source nginx, > without having to skip to the end of each directive to check for "This > functionality is available as part of our commercial subscription > only"; > * has useful pages such as IfIsEvil integrated into it. > > I may be wrong about that third one still needing doing, but I > couldn't find IfIsEvil anywhere but the wiki. The presence of a > top-level pointer on each wiki page to http://nginx.org/en/docs/ is > pretty useless, too - it needs to point to the appropriate place in > the docs to get people to start using them. > > Didn't you guys pick up several millions a while ago, which was > announced as being somewhat earmarked for improving documentation? :-) > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > I share your opinion regarding nginx documentation. It is woeful. Particularly when compared to other exemplary open source projects, such as Postfix and FreeBSD. My inability to easily transfer my webservers to nginx from Apache, due to (my own shortcomings compounded by) terribly inadequate documentation, nearly made the transition impossible. Insult was only added to injury when, after transferring some sites to the recommended nginx, I found very little performance enhancement. Admittedly, I am most probably not properly utilizing the application and will only see improvements when my own abilities allow it. Nevertheless, the documentation needs work. It is prudent to accommodate less technically aware users. -- syn.bsdbox.co From nanotek at bsdbox.co Thu Jan 9 12:41:32 2014 From: nanotek at bsdbox.co (nano) Date: Thu, 09 Jan 2014 23:41:32 +1100 Subject: PHP below server root not served In-Reply-To: <52CE78BC.7050500@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> Message-ID: <52CE98FC.8000405@bsdbox.co> On 9/01/2014 9:23 PM, nano wrote: > I am having trouble configuring nginx to serve up PHP from outside of > the server document root. For example, this site's root is > /usr/local/www/site1/wordpress and phpMyAdmin is located in > /usr/local/www/phpMyAdmin. I cannot access servername.com/phpmyadmin. > nginx logs the following error: > > ==================================================================== > 2014/01/09 09:56:20 [error] 39387#0: *6160 FastCGI sent in stderr: > "Primary script unknown" while reading response header from upstream, > client: clientIP, server: serverhostname, request: "GET /phpmyadmin/ > HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: > "serverhostname" > ==================================================================== > > The WordPress site, however, is served without error. Therefore, nginx > is refusing to serve PHP from outside of the server document root. > Please see the following configuration: > > > nginx.conf > ==================================================================== > user www www; > worker_processes 1; > pid /var/run/nginx.pid; > error_log logs/error.log info; > > events { > worker_connections 768; > use kqueue; > } > > http { > include mime.types; > default_type application/octet-stream; > > access_log logs/access.log main; > sendfile on; > keepalive_timeout 65; > gzip on; > > server { > listen 80; > listen 443 ssl; > server_name servername.com; > ssl_certificate crt-chain.pem; > ssl_certificate_key key.pem; > ssl_dhparam dhparam4096.pem; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM > EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 > EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !a > NULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; > ssl_prefer_server_ciphers on; > > root /usr/local/www/site1/wordpress; > index index.php; > > location / { > try_files $uri $uri/ /index.php?$args; > } > > location ~ \.php$ { > fastcgi_pass unix:/var/run/php-fpm.sock; > fastcgi_param SCRIPT_FILENAME > /usr/local/www/site1/wordpress$fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_script_name; > include fastcgi_params; > } > > #----------------------PROBLEM AREA----------------------# > > location /phpmyadmin/ { > alias /usr/local/www/phpMyAdmin/; > index index.php index.html; > } > > location ~ ^/phpmyadmin/(.*\.php)$ { > fastcgi_param PHP_ADMIN_VALUE > open_basedir=/usr/local/www/phpMyAdmin; > fastcgi_pass unix:/tmp/php-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > /usr/local/www/phpMyAdmin$fastcgi_script_name; > include fastcgi_params; > > #----------------------PROBLEM AREA----------------------# > > } > } > } > ==================================================================== > > > I am obviously lacking some required configuration. Perhaps in > nginx.conf, php-fpm.conf, or php.ini. Could someone please advise me of > my error and how to correct it? Thank you. > > Strangely, in Apache, I simply required an alias entry, such as: > > Alias /phpmyadmin "/usr/local/www/phpMyAdmin" > > Options None > AllowOverride None > Require all granted > > > in my httpd.conf file even when the server root was above this location > and with the exact same PHP settings (php.ini) as I now have with nginx. > > Server intel: > # uname -a > FreeBSD hostname 9.2-RELEASE FreeBSD 9.2-RELEASE #0 r255898: Thu Sep 26 > 22:50:31 UTC 2013 > root at bake.isc.freebsd.org:/usr/obj/usr/src/sys/GENERIC amd64 > > # nginx -V > nginx version: nginx/1.5.7 > TLS SNI support enabled > configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I > /usr/local/include' --with-ld-opt='-L /usr/local/lib' > --conf-path=/usr/local/etc/nginx/nginx.conf > --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid > --error-log-path=/var/log/nginx-error.log --user=www --group=www > --with-ipv6 --with-google_perftools_module > --http-client-body-temp-path=/var/tmp/nginx/client_body_temp > --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp > --http-proxy-temp-path=/var/tmp/nginx/proxy_temp > --http-scgi-temp-path=/var/tmp/nginx/scgi_temp > --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp > --http-log-path=/var/log/nginx-access.log --with-http_addition_module > --with-http_auth_request_module --with-http_dav_module > --with-http_flv_module --with-http_geoip_module > --with-http_gzip_static_module --with-http_gunzip_module > --with-http_image_filter_module --with-http_mp4_module > --with-http_perl_module --with-http_random_index_module > --with-http_realip_module --with-http_secure_link_module > --with-http_stub_status_module --with-http_sub_module > --with-http_xslt_module --with-pcre --with-http_spdy_module --with-mail > --with-mail_ssl_module --with-http_ssl_module > > I seem to have fixed this problem. Amended nginx.conf file: ==================================================================== user www www; worker_processes 1; pid /var/run/nginx.pid; error_log logs/error.log info; events { worker_connections 768; use kqueue; } http { include mime.types; default_type application/octet-stream; log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log logs/access.log main; sendfile on; keepalive_timeout 65; gzip on; server { listen 80; listen 443 ssl; server_name srvname.com www.srvname.com; ssl_certificate crt-chain.pem; ssl_certificate_key key.pem; ssl_dhparam dhparam4096.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; ssl_prefer_server_ciphers on; root /usr/local/www/site1/wordpress; index index.php; location /phpmyadmin { alias /usr/local/www/phpMyAdmin/; index index.php index.html; } location ~ ^/phpmyadmin/(.*\.php)$ { root /usr/local/www/phpMyAdmin/; fastcgi_pass unix:/var/run/php-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME /usr/local/www/phpMyAdmin/$1; fastcgi_param DOCUMENT_ROOT /usr/local/www/phpMyAdmin; } location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_param SCRIPT_FILENAME /usr/local/www/site1/wordpress$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } } } ==================================================================== Admittedly, I don't know *why* what I changed fixed the problem, but it did. I relocated the phpMyAdmin entries to above the "location /" block from beneath the "location /" block. From: http { server { root; location / { try_files; ... } location ~ \.php$ { fastcgi_pass ... } location /phpmyadmin { alias; ... } location ~ ^/phpmyadmin/(.*\.php)$ { root; ... } location = /50x.html; { root; ... } } } } to: http { server { root; location /phpmyadmin { alias; ... } location ~ ^/phpmyadmin/(.*\.php)$ { root; ... } location / { try_files; ... } location ~ \.php$ { fastcgi_pass ... } location = /50x.html; { root; ... } } } } The syntax is identical, just those two location blocks are in a different place. I would like to know why this works, but am just happy that it does. I look forward to better understanding this great program. Thank you, all, for your participation. -- syn.bsdbox.co From contact at jpluscplusm.com Thu Jan 9 12:47:39 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 9 Jan 2014 12:47:39 +0000 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CE94E3.5090608@bsdbox.co> References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> <52CE94E3.5090608@bsdbox.co> Message-ID: On 9 January 2014 12:24, nano wrote: > I share your opinion regarding nginx documentation. It is woeful. Sorry chap - I didn't say that and I don't think that. There may well be some specific target audiences not well served by the aggregate of the current (psuedo-)documentation sources, but that doesn't mean they themselves are /that/ bad. My problems with them are specific and fixable, and not just "make it better!" J From mdounin at mdounin.ru Thu Jan 9 12:48:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 16:48:56 +0400 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> Message-ID: <20140109124856.GE1835@mdounin.ru> Hello! On Thu, Jan 09, 2014 at 12:12:09PM +0000, Jonathan Matthews wrote: > On 9 January 2014 11:57, Maxim Dounin wrote: > > Hello! > > > > On Thu, Jan 09, 2014 at 10:21:43AM +0000, Jonathan Matthews wrote: > > > >> On 9 January 2014 10:03, basti wrote: > >> > Hello, > >> > > >> > I have a closed-source Webapp that run on an IIS-Webserver and send a > >> > "X-Frame-Options: SAMEORIGIN" header. > >> > I also have to implement this Webapp in my own, Frame based Application. > >> > > >> > So I try to use nginx as a reverse Proxy, but the X-Frame-Options Header > >> > is still send. > >> > How can I remove his header? > >> > I have try "proxy_hide_header X-Frame-Options;" without success. > >> > >> You'll find the answer in the documentation: > >> http://wiki.nginx.org/NginxHttpProxyModule#proxy_set_header > > > > The X-Frame-Options header is returned by a server-side > > application, hence the proxy_hide_header is correct solution, > > while proxy_set_header isn't. > > My bad. I was pretty sure I'd had success with 'set foo ""' where > 'hide' hadn't worked in the past. > > > And, being pedantic, wiki != documentation. Here are > > links to the documentation: > > > > http://nginx.org/r/proxy_set_header > > http://nginx.org/r/proxy_hide_header > > Ack that. I'll personally keep linking to the wiki until the documentation > > * is significantly better internally hyper-linked; > * has documentation targeted soley towards the open source nginx, > without having to skip to the end of each directive to check for "This > functionality is available as part of our commercial subscription > only"; > * has useful pages such as IfIsEvil integrated into it. > > I may be wrong about that third one still needing doing, but I > couldn't find IfIsEvil anywhere but the wiki. The presence of a > top-level pointer on each wiki page to http://nginx.org/en/docs/ is > pretty useless, too - it needs to point to the appropriate place in > the docs to get people to start using them. > > Didn't you guys pick up several millions a while ago, which was > announced as being somewhat earmarked for improving documentation? :-) And that's why we actually have the documentation in English. :) Additionally, compared to what we have previously it is already significantly imporoved. As I already explained, the problem with wiki pages which duplication documentation is a bit rot. There are lots of improvements in the documentation which isn't in wiki, most obviously - there are no new directives. And this bit rot confuse people more and more. The generic plan is to avoid the duplication altogether, preserving wiki for useful additional content. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Jan 9 12:57:32 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 9 Jan 2014 13:57:32 +0100 Subject: PHP below server root not served In-Reply-To: <52CE98FC.8000405@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> Message-ID: Try to understand what you are doing first. One request is handled in one location. >> >> For this request, the one location that you want to be used is not the >> one that nginx actually uses. >> >> >>> ?1. ? >>> location / { >>> >>> ?2. ? >>> location ~ \.php$ { >>> >>> ?3. ? >>> location /phpmyadmin/ { >>> >>> ?4. ? >>> location ~ ^/phpmyadmin/(.*\.php)$ { >>> >> >> >> http://nginx.org/r/location >> >> A request for /phpmyadmin/index.php will be handled in the second location >> above, not the fourth. >> > ?The docs say 'To find location matching a given request, nginx first checks locations defined using the prefix strings (prefix locations). Among them, the location with the longest matching prefix is selected and remembered. Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used. If no match with a regular expression is found then the configuration of the prefix location remembered earlier is used.' Thus, in your configuration, for a '/phpmyadmin/***.php' request, it does the following: * Start searching prefix location * 1. location / 3. location /phpmyadmin/ * End of prefix location search, longest prefix = 3. * * Start searching regex expressions 2. location ~\.php$ // First regex found, stop of search * End of prefix location search, regex found is being used * // Otherwise, if no matching regex were to be found, a fallback to the longest prefix location found before would have applied Your problem is that the location 4. is *never* used, because the regex being used is the first which matches. Your generic '\.php$' will catch'em all! ? ?Francis provided 2 ways of fixing your problem: I. Re-arranging your config file so 'location /phpmyadmin/(.*\.php)$' is found *before* \.php$? On Thu, Jan 9, 2014 at 1:41 PM, nano wrote: > Admittedly, I don't know *why* what I changed fixed the problem, but it > did. I relocated the phpMyAdmin entries to above the "location /" block > from beneath the "location /" block. > > ?* snip * > > The syntax is identical, just those two location blocks are in a different > place. I would like to know why this works, but am just happy that it does. > I look forward to better understanding this great program. ? You did just that... II. Use a smarter (and more scalable, in light of future adds to the nginx config) way, which is nesting the rules of 'location /phpmyadmin/(.*\.php)$' in a 'location ~\.php$' block embedded in a 'location ^~ /phpmyadmin/' block. Note the modification made to the prefix block for phpmyadmin. The docs say 'If the longest matching prefix location has the ?^~? modifier then regular expressions are not checked.' This way, longest prefix will be 'location /phpmyadmin/' *but the generic 'location \.php$' won't be used* since there will be no regex search. Hope I helped, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 9 13:18:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 17:18:09 +0400 Subject: issue with `default_type` & `type` on 1.5.7 In-Reply-To: <663B722A-E6F3-40CD-B37D-4CD53FCD4956@2xlp.com> References: <36cb46b6f59688950730d92fbdb0a260.NginxMailingListEnglish@forum.nginx.org> <663B722A-E6F3-40CD-B37D-4CD53FCD4956@2xlp.com> Message-ID: <20140109131808.GH1835@mdounin.ru> Hello! On Sat, Jan 04, 2014 at 03:36:33PM -0500, Jonathan Vanasco wrote: > I recently encountered an issue with a 1.5.7 branch on OSX. i did not check 1.5.8 > > The following code will set ALL css/js files as the default_type > > include /usr/local/nginx/conf/mime.types; > default_type application/octet-stream; > > > The following code works as intended > > default_type application/octet-stream; > include /usr/local/nginx/conf/mime.types; > > I haven't had time to test on other versions. > > This could be the intended behavior, but the docs don't suggest > that. usually a default_type only applies when the real type > can't be found. Most likely there is a directive with some non-strict syntax (like "index") before they configuration you've quoted, and there is no semicolon after it. E.g., something like this: index index.html include /usr/local/nginx/conf/mime.types; default_type application/octet-stream; While the configuration looks like correct one with mime.types included, it's instead defines 3 index files ("index.html", "include", "/usr/local/nginx/conf/mime.types"), and default_type - but mime.types isn't included. -- Maxim Dounin http://nginx.org/ From nanotek at bsdbox.co Thu Jan 9 13:18:36 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 00:18:36 +1100 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> <52CE94E3.5090608@bsdbox.co> Message-ID: <52CEA1AC.1030704@bsdbox.co> On 9/01/2014 11:47 PM, Jonathan Matthews wrote: > On 9 January 2014 12:24, nano wrote: >> I share your opinion regarding nginx documentation. It is woeful. > > Sorry chap - I didn't say that and I don't think that. There may well > be some specific target audiences not well served by the aggregate of > the current (psuedo-)documentation sources, but that doesn't mean they > themselves are /that/ bad. My problems with them are specific and > fixable, and not just "make it better!" > > J > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Nonetheless, I find the official documentation lacking. It is good that you do not. Fortunately, there are alternative resources that make deployment of nginx servers achievable for users lacking technical proficiency, like myself. -- syn.bsdbox.co From andre at digirati.com.br Thu Jan 9 13:35:17 2014 From: andre at digirati.com.br (Andre Nathan) Date: Thu, 09 Jan 2014 11:35:17 -0200 Subject: Nginx, Lua and blocking libraries Message-ID: <52CEA595.2040505@digirati.com.br> Hello I'm considering the possibility of implementing a project using Nginx and the Lua module. One of the requirements of the project is that the code must use an embedded database such as SQLite. However, as known, using the lua-sqlite3 library directly is not optimal because it would block the Nginx worker process. My question is, is there a way to work around this in any way? For example, creating a coroutine to run the lua-sqlite3 calls? If not, does anyone know of some other embedded database that works well with the Lua Nginx module? Thank you in advance, Andre -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 555 bytes Desc: OpenPGP digital signature URL: From nanotek at bsdbox.co Thu Jan 9 13:51:24 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 00:51:24 +1100 Subject: PHP below server root not served In-Reply-To: References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> Message-ID: <52CEA95C.3020300@bsdbox.co> On 9/01/2014 11:57 PM, B.R. wrote: > Try to understand what you are doing first. > I really am trying. > One request is handled in one location. > > For this request, the one location that you want to be used is > not the > one that nginx actually uses. > > ?1. ? > location / { > ?2. ? > location ~ \.php$ { > ?3. ? > location /phpmyadmin/ { > ?4. ? > location ~ ^/phpmyadmin/(.*\.php)$ { > > > > http://nginx.org/r/location > > A request for /phpmyadmin/index.php will be handled in the > second location > above, not the fourth. > > > ?The docs say 'To find location matching a given request, nginx first > checks locations defined using the prefix strings (prefix locations). > Among them, the location with the longest matching prefix is selected > and remembered. Then regular expressions are checked, in the order of > their appearance in the configuration file. The search of regular > expressions terminates on the first match, and the corresponding > configuration is used. If no match with a regular expression is found > then the configuration of the prefix location remembered earlier is used.' > I did not (and still do not) understand it like you do. I had the impression that each location block would adhere to its own assignments. What defines a prefix string and a regular expression? Would " ~ ^/phpmyadmin/(.*\.php)$ " not be the longest matching prefix to be selected and remembered? And if so, despite it coming after the " ~ \.php$ " location, should it not be used? > Thus, in your configuration, for a '/phpmyadmin/***.php' request, it > does the following: > * Start searching prefix location * > 1. location / > 3. location /phpmyadmin/ > * End of prefix location search, longest prefix = 3. * > > * Start searching regex expressions > 2. location ~\.php$ // First regex found, stop of search > * End of prefix location search, regex found is being used * // > Otherwise, if no matching regex were to be found, a fallback to the > longest prefix location found before would have applied > I think I am gradually understanding your explanation: despite location 3 being the longest prefix, regular expression in location 2 is found and used. This found and used regex does not suit the requirements of executing PHP instructions necessary to serve files from location 4? In my example, nginx was instructing php-fpm to execute /usr/local/www/site1/wordpress$fastcgi_script_name for phpMyAdmin files? > Your problem is that the location 4. is *never* used, because the regex > being used is the first which matches. Your generic '\.php$' will > catch'em all! > ? > > ?Francis provided 2 ways of fixing your problem: > I. Re-arranging your config file so 'location /phpmyadmin/(.*\.php)$' is > found /*before*/ \.php$? > I misunderstood Francis' advice. I thought he advised nesting my /phpmyadmin location(s) inside the location ~ \.php$ block which further broke my site. > On Thu, Jan 9, 2014 at 1:41 PM, nano > wrote: > > Admittedly, I don't know *why* what I changed fixed the problem, but > it did. I relocated the phpMyAdmin entries to above the "location /" > block from beneath the "location /" block. > > ?* snip * > > The syntax is identical, just those two location blocks are in a > different place. I would like to know why this works, but am just > happy that it does. I look forward to better understanding this > great program. > > ? > You did just that... > > II. Use a smarter (and more scalable, in light of future adds to the > nginx config) way, which is nesting the rules of 'location > /phpmyadmin/(.*\.php)$' in a 'location ~\.php$' block embedded in a > 'location ^~ /phpmyadmin/' block. > Please, would you provide a working example of this for me to use? I have been trying to create this smarter way but am failing miserably. Does location ~\.php$ coming before /phpmyadmin/(.*\.php)$ (as it does if the latter is nested in the former) not emulate the same situation I created in the first place, thus rendering servername.com/phpmyadmin broken? An example would help immensely and be very much appreciated. If not mistaken, I understand you are suggesting the following structure but, due to my failures in implementing such advice, it appears I need something precise: location / { } location /phpmyadmin { location ~\.php$ location /phpmyadmin/(.*\.php)$ } } } > Note the modification made to the prefix block for phpmyadmin. The docs > say 'If the longest matching prefix location has the ?|^~|? modifier > then regular expressions are not checked.' > This way, longest prefix will be 'location /phpmyadmin/' /but the > generic 'location \.php$' *won't be used*/ since there will be no regex > search. > What exactly does this mean? Is this a proper assignment to avoid phpMyAdmin scripts being affected by the generic php regex and *only* using those assigned in the ~ ^/phpmyadmin/(.*\.php)$ location? > Hope I helped, > I really appreciate your help. I am also very sorry that I do not have a better understanding and lack the knowledge to fully benefit from your assistance, but it is great you are helping. Thank you! -- syn.bsdbox.co From nanotek at bsdbox.co Thu Jan 9 14:42:41 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 01:42:41 +1100 Subject: PHP below server root not served In-Reply-To: References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> Message-ID: <52CEB561.8050305@bsdbox.co> On 9/01/2014 11:57 PM, B.R. wrote: > > II. Use a smarter (and more scalable, in light of future adds to the > nginx config) way, which is nesting the rules of 'location > /phpmyadmin/(.*\.php)$' in a 'location ~\.php$' block embedded in a > 'location ^~ /phpmyadmin/' block. > I have attempted several variations of this format[1] you recommend and continue to produce a broken site; dialog to download application/octet-stream from the main servername.com and a 'File not found.' from https://servername.com/phpmyadmin. [1] location / { try_files $uri $uri/ /index.php?$args; } location ^~ /phpmyadmin { alias /usr/local/www/phpMyAdmin/; index index.php index.html; location ~ \.php$ { fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_param DOCUMENT_ROOT /usr/local/www/phpMyAdmin; fastcgi_param SCRIPT_FILENAME /usr/local/www/phpMyAdmin/$1; fastcgi_param SCRIPT_FILENAME /usr/local/www/site1/wordpress$fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_script_name; include fastcgi_params; } } I eagerly anticipate a working example if and when you can provide one. Thank you. -- syn.bsdbox.co From mdounin at mdounin.ru Thu Jan 9 14:46:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 18:46:50 +0400 Subject: proxy_cache incorrectly returning 304 Not Modified In-Reply-To: <52CD674E.5080007@jonathanleighton.com> References: <52CD674E.5080007@jonathanleighton.com> Message-ID: <20140109144650.GK1835@mdounin.ru> Hello! On Wed, Jan 08, 2014 at 02:57:18PM +0000, Jon Leighton wrote: > Hi there, > > I work on a site which has nginx in front of a Rails application, and we > use proxy_cache. > > For the home page, our application returns a "max-age=600, public" > Cache-Control header, and we have nginx configured to cache the response > using proxy_cache. > > This generally works fine, but last night nginx started to respond with > 304 Not Modified to requests that *didn't* include any caching headers > (If-Modified-Since or ETag). Our Pingdom alerts showed this issue, and > here is the request/response captured: > > GET / HTTP/1.0 > User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/) > Host: loco2.com > > 304 Not Modified > Cache-Control: max-age=600, public > Date: Tue, 07 Jan 2014 22:59:32 GMT > ETag: "900e1f11422519337c9ed25fad299ce0" > Server: nginx/1.4.4 > Status: 304 Not Modified > Strict-Transport-Security: max-age=31536000 > X-Cache-Status: HIT > X-Content-Type-Options: nosniff > X-Frame-Options: SAMEORIGIN > X-Request-Id: c7ee78ab-df49-4467-bced-753b2cc622ab > X-UA-Compatible: chrome=1 > X-XSS-Protection: 1; mode=block > Connection: Close > > Does this look like a bug? Or could it be a configuration issue? I can't > think of any reason why this should be the correct thing for the proxy > cache to do. This easily can be a result of a misconfiguration (e.g., proxy_set_header used incorrectly) and/or backend problem. Additionally, response headers looks very suspicious. There shouldn't be "Status: 304 Not Modified" in nginx responses, and "Connection: Close" capitalization doesn't match what nginx uses. Unless it's something introduced by Pingdom interface, this may indicate that it's checking something which isn't nginx. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 9 15:03:09 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 9 Jan 2014 19:03:09 +0400 Subject: Time out errors using uwsgi with ngnix on debian 7 (wheezy) In-Reply-To: References: Message-ID: <20140109150309.GL1835@mdounin.ru> Hello! On Wed, Jan 08, 2014 at 08:15:47PM -0500, Denis Papathanasiou wrote: > I've installed nginx via apt, using the nginx stable pkg as described here: > http://nginx.org/en/linux_packages.html#stable > > It works perfectly for serving static files using the default configuration. > > Next, I installed uwsgi from source, as described here: > https://pypi.python.org/pypi/uWSGI/1.2.3 > > When I do the steps in the python quickstart guide -- > http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html -- and open > my browser to localhost:9090, everything works as expected. > > When I change the nginx config file to use uwsgi_pass to localhost:9090 as > described here -- > http://uwsgi-docs.readthedocs.org/en/latest/WSGIquickstart.html#putting-behind-a-full-webserver-- > however, I get time out errors: > > > upstream timed out (110: Connection timed out) while reading response > header from upstream > > It is as though nginx is *not* passing those requests to the uwsgi process > (which is still running). > > Here is the content of server{ } inside the nginx config file: > > location / { > include uwsgi_params; > uwsgi_pass localhost:9090; > } > > Any ideas on what the problem might be? If you are able to connect to localhost:9090 with your browser, you are likely using native HTTP support in your uWSGI server. The "uwsgi_pass" directive assumes uwsgi protocol though, which is different. You should either reconfigure uWSGI server to work via uwsgi, or instruct nginx to talk via HTTP (i.e., use "proxy_pass" instead of "uwsgi_pass"). -- Maxim Dounin http://nginx.org/ From jim at ohlste.in Thu Jan 9 15:21:06 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 09 Jan 2014 10:21:06 -0500 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CE94E3.5090608@bsdbox.co> References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> <52CE94E3.5090608@bsdbox.co> Message-ID: <52CEBE62.4090402@ohlste.in> Hello, On 1/9/14, 7:24 AM, nano wrote: [snip] > > I share your opinion regarding nginx documentation. It is woeful. > Particularly when compared to other exemplary open source projects, such > as Postfix and FreeBSD. My inability to easily transfer my webservers to > nginx from Apache, due to (my own shortcomings compounded by) terribly > inadequate documentation, nearly made the transition impossible. Insult > was only added to injury when, after transferring some sites to the > recommended nginx, I found very little performance enhancement. > Admittedly, I am most probably not properly utilizing the application > and will only see improvements when my own abilities allow it. > Nevertheless, the documentation needs work. It is prudent to accommodate > less technically aware users. > You may not see much "performance enhancement" if your server was not heavily loaded or if it is using PHP to serve static content, such as WordPress used to do up until version 3.4 and continues to do on some sites that were upgraded from older versions to the current version. Also, if you are running a PHP daemon and a MySQL server on the same server as you run nginx, they may contribute more to server load than does nginx. Optimizing them, especially MySQL, may give you significant "performance enhancement". I mention WordPress because you link to a WordPress site in your signature. Since your domain was first registered in November and since you only have a few posts most of which are rudimentary, I am going to doubt that you don't have a lot of traffic. Alexa does not even have data on your site yet, it's so new. Plus using a self signed certificate and creating SSL links on your home page - http://bsdbox.co - give the big red page on Chrome. I have no desire to add an exception for a two month old domain. Spring for $4.99/year at https://www.cheapssls.com/domain-only.html and get a PositiveSSL certificate. The shortcomings are yours indeed. The documentation is for people who understand the concepts and is not meant to be a replacement for a "for Dummies" book. I believe that (almost) every directive is covered. If you do not understand what the directives mean, there are many ways to figure it out. In such a case, Google is your friend. Comparing nginx documentation to FreeBSD documentation is a bit unrealistic. FreeBSD documentation is written by volunteers of which there are dozens if not hundreds. The entire project is a community effort. Despite that, some is out of date. For instance, look at http://www5.us.freebsd.org/doc/handbook/svn-mirrors.html. Do you see svn0.eu.FreeBSD.org listed there, or its fingerprint? There may be other servers missing as well. I have found many other examples but that's the first that comes to mind. Anyone who wants to *volunteer* to improve the documentation should do so. I'm sure the devs would at least look at any provided patches. Of course, you can always create a community effort of your own and organize your own wiki or alternate set of documentation. Or perhaps you can apply for a job at Nginx.com to work on upgrading the documentation to your standards. The original purpose of the wiki was to serve as English documentation when there was little to none. Sure, it had a bit more hand holding, but it really has become superfluous at least in terms of providing up to date documentation, at least IMMHO. -- Jim Ohlstein From nginx-forum at nginx.us Thu Jan 9 16:28:02 2014 From: nginx-forum at nginx.us (Larry) Date: Thu, 09 Jan 2014 11:28:02 -0500 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) Message-ID: Hello, Here is my current conf server { listen 443; server_name ~^(.*)\.sub\.domain\.com$ ssl on; ssl_certificate $cookie_ident/$1.crt; ssl_certificate_key $cookie_ident/$1.key; server_tokens off; ssl_protocols TLSv1.2 TLSv1.1 TLSv1 SSLv3; ssl_prefer_server_ciphers on; ssl_session_timeout 5m; ssl_session_cache builtin:1000 shared:SSL:10m; ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:RC4-SHA; autoindex off; root /upla/http/www.domain.com; port_in_redirect off; expires 10s; #add_header Cache-Control "no-cache,no-store"; #expires max; add_header Pragma public; add_header Cache-Control "public"; location / { try_files $uri /$request_uri =404; } } I would like to be able to "load" the right cert according to the cookie set and request uri. A sort of dynamic setting. But of course, when I start nginx, it complains : SSL: error:02001002:system library:fopen:No such file or directory: Perfectly normal since $cookie_ident is empty and no subdomain has been requested. So, what is the workaround I could use to avoid creating one file per new (self-signed)certificate issued ? I cannot use only one certificate for all since I have to be able to revoke the certs with granularity. How should I make it work ? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246178,246178#msg-246178 From wmark+nginx at hurrikane.de Thu Jan 9 16:40:35 2014 From: wmark+nginx at hurrikane.de (W-Mark Kubacki) Date: Thu, 9 Jan 2014 17:40:35 +0100 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) In-Reply-To: References: Message-ID: Certificates are selected and presented by the server before the client even has the chance to send any cookies, the latter happening after the ?TLS handshake?. 2014/1/9 Larry : > Hello, > > Here is my current conf > > server { > listen 443; > > server_name ~^(.*)\.sub\.domain\.com$ > > ssl on; > ssl_certificate $cookie_ident/$1.crt; > ssl_certificate_key $cookie_ident/$1.key; > server_tokens off; > > ssl_protocols TLSv1.2 TLSv1.1 TLSv1 SSLv3; > ssl_prefer_server_ciphers on; > ssl_session_timeout 5m; > ssl_session_cache builtin:1000 shared:SSL:10m; > > ssl_ciphers > ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-RSA-RC4-SHA:ECDH-RSA-RC4-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES256-SHA:RC4-SHA; > > > autoindex off; > root /upla/http/www.domain.com; > port_in_redirect off; > expires 10s; > #add_header Cache-Control "no-cache,no-store"; > #expires max; > add_header Pragma public; > add_header Cache-Control "public"; > > location / { > > try_files $uri /$request_uri =404; > > } > > } > > I would like to be able to "load" the right cert according to the cookie set > and request uri. > > A sort of dynamic setting. > > But of course, when I start nginx, it complains : > SSL: error:02001002:system library:fopen:No such file or directory: > > Perfectly normal since $cookie_ident is empty and no subdomain has been > requested. > > So, what is the workaround I could use to avoid creating one file per new > (self-signed)certificate issued ? > > I cannot use only one certificate for all since I have to be able to revoke > the certs with granularity. > > > How should I make it work ? > > Thanks > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246178,246178#msg-246178 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From contact at jpluscplusm.com Thu Jan 9 16:45:21 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 9 Jan 2014 16:45:21 +0000 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) In-Reply-To: References: Message-ID: On 9 January 2014 16:28, Larry wrote: > I would like to be able to "load" the right cert according to the cookie set > and request uri. > A sort of dynamic setting. > So, what is the workaround I could use to avoid creating one file per new > (self-signed)certificate issued ? Your problem is that, irrespective of Nginx's feelings about using a variable in the ssl_certificate directive, what you're trying to configure is a HTTP/SSL layering violation. The information you want to use to choose the correct cert is communicated inside the HTTP request (usually people ask about using the Host header; you're asking here about cookies). But this information is not available to the SSL libraries until /after/ the SSL channel has been set up - which can't be done until a cert has been selected. It's a catch-22 situation. SNI /can/ help with this, as it transmits the host header in the clear during SSL negotiation, but client support can prove limited (browsers on XP, IIRC, don't support it). I'm not sure, but I don't believe SNI communicates enough extra information (cookies and/or request paths) for you to achieve what you want to here. The usual suggestion for this situation is either to seperate out sites, one per IP; or to look at wildcard certs or UCC/SaN certs. You've mentioned self-signed certs, which suggests you may have some control over the clients root CAs - is this the case? You could perhaps automate UCC/SaN cert issuance based on your current whitelist of unrevoked certs ... tl;dr Buy some IPv4 space and use an IP per subdomain. Jonathan From miguelmclara at gmail.com Thu Jan 9 16:46:59 2014 From: miguelmclara at gmail.com (Miguel Clara) Date: Thu, 9 Jan 2014 16:46:59 +0000 Subject: "Primary script unknown" wp-login.php In-Reply-To: <52CE8C18.9040709@bsdbox.co> References: <52CE2C47.8080905@bsdbox.co> <20140109110124.GC19804@craic.sysops.org> <52CE8C18.9040709@bsdbox.co> Message-ID: > I resolved this problem by making the /wordpress directory the server root. > However, I now have the problem of /usr/local/www/phpMyAdmin being > inaccessible, due to the same error. > You can, and its probably best to use: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; Also you should have those to in different nginx config files, its far better to read/modify if needed. In any case the .php block should work has long has the "root" is set right in the different locations! From r at roze.lv Thu Jan 9 16:52:39 2014 From: r at roze.lv (Reinis Rozitis) Date: Thu, 9 Jan 2014 18:52:39 +0200 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) In-Reply-To: References: Message-ID: > So, what is the workaround I could use to avoid creating one file per new > (self-signed)certificate issued ? > I cannot use only one certificate for all since I have to be able to > revoke the certs with granularity. If you don't want to use file/certificate per domain but the same time can't work arround it with a wildcard certificate it (imo) leaves just one option - to create a certificate including all the exact domains and whenever there are some changes (expiration or a new domain added) regenerate the cert. p.s. you can do something like that even with non self-signed certificates - for example (while manually) Godaddy lets you add or remove domains to their "Multiple Domains UCC" certs (up to 100 domains) on the fly (the expiration of the whole cert remains). rr From denis.papathanasiou at gmail.com Thu Jan 9 17:08:23 2014 From: denis.papathanasiou at gmail.com (Denis Papathanasiou) Date: Thu, 9 Jan 2014 12:08:23 -0500 Subject: Time out errors using uwsgi with ngnix on debian 7 (wheezy) In-Reply-To: <20140109150309.GL1835@mdounin.ru> References: <20140109150309.GL1835@mdounin.ru> Message-ID: Maxim, Thank you for your reply. On Thu, Jan 9, 2014 at 10:03 AM, Maxim Dounin wrote: > [snip] > > If you are able to connect to localhost:9090 with your browser, > you are likely using native HTTP support in your uWSGI server. > Yes, I am starting the uwsgi process like this, using the --http flag: uwsgi --http :9090 --wsgi-file foobar.py --master --processes 4 --threads 2 > The "uwsgi_pass" directive assumes uwsgi protocol though, which is > different. > > You should either reconfigure uWSGI server to work via uwsgi, or > Ah, I see, I should use the --socket option instead, like this: uwsgi --socket 127.0.0.1:9090 --wsgi-file foobar.py --master --processes 4 --threads 2 Thank you for clarifying that; it *is* in the uwsgi docs I quoted earlier, but it is a subtle point under the "quickstart" section, and I had missed it. > instruct nginx to talk via HTTP (i.e., use "proxy_pass" instead of > "uwsgi_pass"). > I see: I could use "proxy_pass" and keep --http when I start uwsgi. Thank you, that was very helpful! -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Thu Jan 9 17:13:01 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 09 Jan 2014 12:13:01 -0500 Subject: PHP below server root not served In-Reply-To: <52CEB561.8050305@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEB561.8050305@bsdbox.co> Message-ID: <52CED89D.9070604@ohlste.in> Hello, On 1/9/14, 9:42 AM, nano wrote: > > I have attempted several variations of this format[1] you recommend and > continue to produce a broken site; dialog to download > application/octet-stream from the main servername.com and a 'File not > found.' from https://servername.com/phpmyadmin. > > [1] > location / { > try_files $uri $uri/ /index.php?$args; > } > > location ^~ /phpmyadmin { > alias /usr/local/www/phpMyAdmin/; > index index.php index.html; > > location ~ \.php$ { > fastcgi_pass unix:/var/run/php-fpm.locatsock; > fastcgi_param DOCUMENT_ROOT /usr/local/www/phpMyAdmin; > fastcgi_param SCRIPT_FILENAME /usr/local/www/phpMyAdmin/$1; > fastcgi_param SCRIPT_FILENAME > /usr/local/www/site1/wordpress$fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_script_name; > include fastcgi_params; > } > } > > I eagerly anticipate a working example if and when you can provide one. > Thank you. > Next to "IfIsEvil" there should be a "DoNotUseAlias (unless necessary)". Use the "root" directive and nested locations location /phpMyAdmin { root /usr/local/www; index index.php; # above probably not necessary as it is inherited from above location ~ \.php$ { fastcgi_pass ...; ... } } A few notes, in no particular order: You *should* use auth_basic [0] at the very least as exposing this functionality the world is a very bad idea. You should consider using "https only" for this script. If you want to enter phpmyadmin in all lower case in the URL (it is easier), do it via rewrite. Consider turning off access log on at least rewritten requests once you know it's working. Consider using your server's FQDN, not your server name. It's less likely potential intruders would guess it, though far from impossible. Something like (not tested but should get you very close if not there): server { listen 80; server_name foo; location ^~ /phpmyadmin { access_log off; rewrite ^ /phpMyAdmin/ permanent; } location /phpMyAdmin { access_log off; rewrite ^ https://foo$request_uri? break; } ... } server { listen 443 ssl; server name foo; ssl_certificate /path/to/cert; ssl_certificate_key /path/to/key; ... location ^~ /phpmyadmin { access_log off; rewrite ^ /phpMyAdmin/ permanent; } location /phpMyAdmin { auth_basic "Blah"; auth_basic_usr_file /path/to/auth/file; # access_log off; # optional location ~ \.php$ { fastcgi_pass ...; include fastcgi_params; fastcgi_index index.php; fastcgi_param HTTPS on; } } } [0] http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html -- Jim Ohlstein From nanotek at bsdbox.co Thu Jan 9 17:14:44 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 04:14:44 +1100 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CEBE62.4090402@ohlste.in> References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> <52CE94E3.5090608@bsdbox.co> <52CEBE62.4090402@ohlste.in> Message-ID: <52CED904.8010409@bsdbox.co> On 10/01/2014 2:21 AM, Jim Ohlstein wrote: > Hello, > > On 1/9/14, 7:24 AM, nano wrote: > > [snip] > >> >> I share your opinion regarding nginx documentation. It is woeful. >> Particularly when compared to other exemplary open source projects, such >> as Postfix and FreeBSD. My inability to easily transfer my webservers to >> nginx from Apache, due to (my own shortcomings compounded by) terribly >> inadequate documentation, nearly made the transition impossible. Insult >> was only added to injury when, after transferring some sites to the >> recommended nginx, I found very little performance enhancement. >> Admittedly, I am most probably not properly utilizing the application >> and will only see improvements when my own abilities allow it. >> Nevertheless, the documentation needs work. It is prudent to accommodate >> less technically aware users. >> > > You may not see much "performance enhancement" if your server was not > heavily loaded or if it is using PHP to serve static content, such as > WordPress used to do up until version 3.4 and continues to do on some > sites that were upgraded from older versions to the current version. > Also, if you are running a PHP daemon and a MySQL server on the same > server as you run nginx, they may contribute more to server load than > does nginx. Optimizing them, especially MySQL, may give you significant > "performance enhancement". Thanks, Jim, for the suggestions. I may look into optimizing MySQL at a later date. > I mention WordPress because you link to a > WordPress site in your signature. Since your domain was first registered > in November and since you only have a few posts most of which are > rudimentary, I am going to doubt that you don't have a lot of traffic. > Alexa does not even have data on your site yet, it's so new. Plus using > a self signed certificate and creating SSL links on your home page - > http://bsdbox.co - give the big red page on Chrome. I have no desire to > add an exception for a two month old domain. Spring for $4.99/year at > https://www.cheapssls.com/domain-only.html and get a PositiveSSL > certificate. > That domain only hosts a personal blog documenting FreeBSD procedures, and SOHO resource for colleagues, family and friends; in fact, the server is still running Apache and is not relevant to my observations pertaining to increased performance, or lack of, in transferring to nginx on other sites. Further, I have no desire to satisfy your trust concerns. My concern is to secure my own sensitive traffic. Moreover, the paradigm of entrusting third parties is foolish and highly susceptible to exploitation, but this, too, is irrelevant. Thank you for your concern and advice; however, I will not be purchasing a "PositiveSSL certificate". > The shortcomings are yours indeed. The documentation is for people who > understand the concepts and is not meant to be a replacement for a "for > Dummies" book. I believe that (almost) every directive is covered. If > you do not understand what the directives mean, there are many ways to > figure it out. In such a case, Google is your friend. > I have no doubt, and iterated, my inadequacies affect my (mis)understanding of the documentation. Similarly, I remarked on the utility of alternative resources; found through Google. If you have some "for Dummies" resources, please feel free to provide them. That would be good. > Comparing nginx documentation to FreeBSD documentation is a bit > unrealistic. FreeBSD documentation is written by volunteers of which > there are dozens if not hundreds. The entire project is a community > effort. Despite that, some is out of date. For instance, look at > http://www5.us.freebsd.org/doc/handbook/svn-mirrors.html. Do you see > svn0.eu.FreeBSD.org listed there, or its fingerprint? There may be other > servers missing as well. I have found many other examples but that's the > first that comes to mind. > It is analogous, as is the comparison to Postfix documentation. I did not claim FreeBSD literature is absent error, but that it is simply more comprehensive and accommodates "Dummies". If nginx chooses to cater for "for people who understand the concepts and is not meant to be a replacement for a 'for Dummies' book", that is the prerogative of the maintainers and developers of nginx documentation. > Anyone who wants to *volunteer* to improve the documentation should do > so. I'm sure the devs would at least look at any provided patches. > > Of course, you can always create a community effort of your own and > organize your own wiki or alternate set of documentation. Or perhaps you > can apply for a job at Nginx.com to work on upgrading the documentation > to your standards. > I am certain there are people who value and appreciate the project enough that will choose to contribute. When the values and objectives of a project comport with my own, I often choose to contribute how I can; such as, deploying Tor exit nodes, documenting up-to-date, basic procedures, or making monetary donations to the FreeBSD Foundation. This is a nice quality of open source communities. The good ones thrive, the less valued do not. > The original purpose of the wiki was to serve as English documentation > when there was little to none. I am sure that multimillion dollar donations will contribute to further improvements. > Sure, it had a bit more hand holding, but > it really has become superfluous at least in terms of providing up to > date documentation, at least IMMHO. > > You are entitled to your opinion, as am I. Your advice will be considered. Thank you, Jim. -- syn.bsdbox.co From nanotek at bsdbox.co Thu Jan 9 17:28:58 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 04:28:58 +1100 Subject: PHP below server root not served In-Reply-To: <52CED89D.9070604@ohlste.in> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEB561.8050305@bsdbox.co> <52CED89D.9070604@ohlste.in> Message-ID: <52CEDC5A.6090203@bsdbox.co> On 10/01/2014 4:13 AM, Jim Ohlstein wrote: > Hello, > > On 1/9/14, 9:42 AM, nano wrote: >> >> I have attempted several variations of this format[1] you recommend and >> continue to produce a broken site; dialog to download >> application/octet-stream from the main servername.com and a 'File not >> found.' from https://servername.com/phpmyadmin. >> >> [1] >> location / { >> try_files $uri $uri/ /index.php?$args; >> } >> >> location ^~ /phpmyadmin { >> alias /usr/local/www/phpMyAdmin/; >> index index.php index.html; >> >> location ~ \.php$ { >> fastcgi_pass unix:/var/run/php-fpm.locatsock; >> fastcgi_param DOCUMENT_ROOT /usr/local/www/phpMyAdmin; >> fastcgi_param SCRIPT_FILENAME /usr/local/www/phpMyAdmin/$1; >> fastcgi_param SCRIPT_FILENAME >> /usr/local/www/site1/wordpress$fastcgi_script_name; >> fastcgi_param PATH_INFO $fastcgi_script_name; >> include fastcgi_params; >> } >> } >> >> I eagerly anticipate a working example if and when you can provide one. >> Thank you. >> > > Next to "IfIsEvil" there should be a "DoNotUseAlias (unless necessary)". > Use the "root" directive and nested locations > > location /phpMyAdmin { > root /usr/local/www; > index index.php; > # above probably not necessary as it is inherited from above > location ~ \.php$ { > fastcgi_pass ...; > ... > } > } > > If my recollection is correct, I believe I had problems when using root instead of alias directive. I will try again though. > A few notes, in no particular order: > > You *should* use auth_basic [0] at the very least as exposing this > functionality the world is a very bad idea. > > You should consider using "https only" for this script. > > If you want to enter phpmyadmin in all lower case in the URL (it is > easier), do it via rewrite. > > Consider turning off access log on at least rewritten requests once you > know it's working. > > Consider using your server's FQDN, not your server name. It's less > likely potential intruders would guess it, though far from impossible. > > Something like (not tested but should get you very close if not there): > > server { > listen 80; > server_name foo; > > location ^~ /phpmyadmin { > access_log off; > rewrite ^ /phpMyAdmin/ permanent; > } > > location /phpMyAdmin { > access_log off; > rewrite ^ https://foo$request_uri? break; > } > ... > > } > > server { > listen 443 ssl; > server name foo; > > ssl_certificate /path/to/cert; > ssl_certificate_key /path/to/key; > > ... > > location ^~ /phpmyadmin { > access_log off; > rewrite ^ /phpMyAdmin/ permanent; > } > > location /phpMyAdmin { > auth_basic "Blah"; > auth_basic_usr_file /path/to/auth/file; > # access_log off; # optional > location ~ \.php$ { > fastcgi_pass ...; > include fastcgi_params; > fastcgi_index index.php; > fastcgi_param HTTPS on; > } > } > } > I would like the whole server accessible over SSL. Not just for phpMyAdmin but WordPress administration. > > [0] http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html > Jim, thank you very much for your example(s) and advice, it is very much appreciated. I had intended to secure phpMyAdmin access after resolving my basic configuration issues. I will attempt to implement these changes and report back with results. -- syn.bsdbox.co From agentzh at gmail.com Thu Jan 9 17:33:03 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Thu, 9 Jan 2014 09:33:03 -0800 Subject: Nginx, Lua and blocking libraries In-Reply-To: <52CEA595.2040505@digirati.com.br> References: <52CEA595.2040505@digirati.com.br> Message-ID: Hello! On Thu, Jan 9, 2014 at 5:35 AM, Andre Nathan wrote: > However, as known, > using the lua-sqlite3 library directly is not optimal because it would > block the Nginx worker process. > Well, I suggest you benchmark the actual performance and measure the actual blocking effect (We actually have systemtap-based tools to measure the epoll loop blocking effect: https://github.com/agentzh/stapxx#epoll-loop-blocking-distr and the off-CPU flamegraph tool is very useful for determining the contributors: https://github.com/agentzh/nginx-systemtap-toolkit#sample-bt-off-cpu ). Basically if your database is in-memory, then the blocking effect should be much smaller because no disk IO involved. Even if you're using on-disk database, the blocking effect should be quite small if your disks are fast (like modern SSD cards) and/or the kernel's page cache's hit rate is high enough. Nginx core's popular in-file http cache (used by proxy_cache, fastcgi_cache, and etc) also involves blocking file IO system calls and people are still enjoying it a lot ;) It's worth noting that when you're using ngx_lua to embed a database like SQLite, you should always cache the file descriptor to save `open` and `close` system calls. > My question is, is there a way to work around this in any way? Ideally you should use (or write) a pthread based network service frontend (be it TCP or just unix domain sockets) for SQLite and let your Nginx talk to this network interface in a 100% nonblocking way. > For > example, creating a coroutine to run the lua-sqlite3 calls? > Basically Lua coroutines cannot work around blocking system calls. You need real OS threads for that. Regards, -agentzh From jim at ohlste.in Thu Jan 9 17:33:10 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Thu, 09 Jan 2014 12:33:10 -0500 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CED904.8010409@bsdbox.co> References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> <52CE94E3.5090608@bsdbox.co> <52CEBE62.4090402@ohlste.in> <52CED904.8010409@bsdbox.co> Message-ID: <52CEDD56.9070109@ohlste.in> Hello, On 1/9/14, 12:14 PM, nano wrote: > On 10/01/2014 2:21 AM, Jim Ohlstein wrote: >> Hello, >> >> On 1/9/14, 7:24 AM, nano wrote: >> >> [snip] >> >>> >>> I share your opinion regarding nginx documentation. It is woeful. >>> Particularly when compared to other exemplary open source projects, such >>> as Postfix and FreeBSD. My inability to easily transfer my webservers to >>> nginx from Apache, due to (my own shortcomings compounded by) terribly >>> inadequate documentation, nearly made the transition impossible. Insult >>> was only added to injury when, after transferring some sites to the >>> recommended nginx, I found very little performance enhancement. >>> Admittedly, I am most probably not properly utilizing the application >>> and will only see improvements when my own abilities allow it. >>> Nevertheless, the documentation needs work. It is prudent to accommodate >>> less technically aware users. >>> >> >> You may not see much "performance enhancement" if your server was not >> heavily loaded or if it is using PHP to serve static content, such as >> WordPress used to do up until version 3.4 and continues to do on some >> sites that were upgraded from older versions to the current version. >> Also, if you are running a PHP daemon and a MySQL server on the same >> server as you run nginx, they may contribute more to server load than >> does nginx. Optimizing them, especially MySQL, may give you significant >> "performance enhancement". > > Thanks, Jim, for the suggestions. I may look into optimizing MySQL at a > later date. Going to copy someone else's procedures and write another "tutorial"? > >> I mention WordPress because you link to a >> WordPress site in your signature. Since your domain was first registered >> in November and since you only have a few posts most of which are >> rudimentary, I am going to doubt that you don't have a lot of traffic. >> Alexa does not even have data on your site yet, it's so new. Plus using >> a self signed certificate and creating SSL links on your home page - >> http://bsdbox.co - give the big red page on Chrome. I have no desire to >> add an exception for a two month old domain. Spring for $4.99/year at >> https://www.cheapssls.com/domain-only.html and get a PositiveSSL >> certificate. >> > > That domain only hosts a personal blog documenting FreeBSD procedures, > and SOHO resource for colleagues, family and friends; in fact, the > server is still running Apache and is not relevant to my observations > pertaining to increased performance, or lack of, in transferring to > nginx on other sites. Further, I have no desire to satisfy your trust > concerns. My concern is to secure my own sensitive traffic. Moreover, > the paradigm of entrusting third parties is foolish and highly > susceptible to exploitation, but this, too, is irrelevant. Thank you for > your concern and advice; however, I will not be purchasing a > "PositiveSSL certificate". Whatever. You put a link in your signature and *very* rudimentary (and somewhat incorrect) "tutorials" in your blog. In fact, on December 20, 2013 you wrote: "I recently decided to build my first FreeBSD box. First order of business: roll my own Apache server to host my ownCloud service. I also decided to stand up this WordPress site to document my progress. Mostly for posterity?s sake; this way, I have tried-and-tested data to reference during future UNIX operations. ?Why should I [?]" Learn something about being a sysadmin before writing "tutorials". Anyway, opinions are like assholes. Everybody has one. Yours just happens to be wrong. > >> The shortcomings are yours indeed. The documentation is for people who >> understand the concepts and is not meant to be a replacement for a "for >> Dummies" book. I believe that (almost) every directive is covered. If >> you do not understand what the directives mean, there are many ways to >> figure it out. In such a case, Google is your friend. >> > > I have no doubt, and iterated, my inadequacies affect my > (mis)understanding of the documentation. Similarly, I remarked on the > utility of alternative resources; found through Google. If you have some > "for Dummies" resources, please feel free to provide them. That would be > good. > >> Comparing nginx documentation to FreeBSD documentation is a bit >> unrealistic. FreeBSD documentation is written by volunteers of which >> there are dozens if not hundreds. The entire project is a community >> effort. Despite that, some is out of date. For instance, look at >> http://www5.us.freebsd.org/doc/handbook/svn-mirrors.html. Do you see >> svn0.eu.FreeBSD.org listed there, or its fingerprint? There may be other >> servers missing as well. I have found many other examples but that's the >> first that comes to mind. >> > > It is analogous, as is the comparison to Postfix documentation. I did > not claim FreeBSD literature is absent error, but that it is simply more > comprehensive and accommodates "Dummies". If nginx chooses to cater for > "for people who understand the concepts and is not meant to be a > replacement for a 'for Dummies' book", that is the prerogative of the > maintainers and developers of nginx documentation. See above. > >> Anyone who wants to *volunteer* to improve the documentation should do >> so. I'm sure the devs would at least look at any provided patches. >> >> Of course, you can always create a community effort of your own and >> organize your own wiki or alternate set of documentation. Or perhaps you >> can apply for a job at Nginx.com to work on upgrading the documentation >> to your standards. >> > > I am certain there are people who value and appreciate the project > enough that will choose to contribute. When the values and objectives of > a project comport with my own, I often choose to contribute how I can; > such as, deploying Tor exit nodes, documenting up-to-date, basic > procedures, or making monetary donations to the FreeBSD Foundation. This > is a nice quality of open source communities. The good ones thrive, the > less valued do not. > >> The original purpose of the wiki was to serve as English documentation >> when there was little to none. > > I am sure that multimillion dollar donations will contribute to further > improvements. I'm not aware of any "multimillion dollar donations" but maybe you are. Commercial funding is not a "donation". > >> Sure, it had a bit more hand holding, but >> it really has become superfluous at least in terms of providing up to >> date documentation, at least IMMHO. >> >> > > You are entitled to your opinion, as am I. Your advice will be > considered. Thank you, Jim. Again, see above. > Peace out. -- Jim Ohlstein From nanotek at bsdbox.co Thu Jan 9 17:46:58 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 04:46:58 +1100 Subject: Nginx as reverse Proxy, remove X-Frame-Options header In-Reply-To: <52CEDD56.9070109@ohlste.in> References: <52CE73DF.80200@arcor.de> <20140109115732.GB1835@mdounin.ru> <52CE94E3.5090608@bsdbox.co> <52CEBE62.4090402@ohlste.in> <52CED904.8010409@bsdbox.co> <52CEDD56.9070109@ohlste.in> Message-ID: <52CEE092.3000203@bsdbox.co> On 10/01/2014 4:33 AM, Jim Ohlstein wrote: > Hello, > > On 1/9/14, 12:14 PM, nano wrote: >> On 10/01/2014 2:21 AM, Jim Ohlstein wrote: >>> Hello, >>> >>> On 1/9/14, 7:24 AM, nano wrote: >>> >>> [snip] >>> >>>> >>>> I share your opinion regarding nginx documentation. It is woeful. >>>> Particularly when compared to other exemplary open source projects, >>>> such >>>> as Postfix and FreeBSD. My inability to easily transfer my >>>> webservers to >>>> nginx from Apache, due to (my own shortcomings compounded by) terribly >>>> inadequate documentation, nearly made the transition impossible. Insult >>>> was only added to injury when, after transferring some sites to the >>>> recommended nginx, I found very little performance enhancement. >>>> Admittedly, I am most probably not properly utilizing the application >>>> and will only see improvements when my own abilities allow it. >>>> Nevertheless, the documentation needs work. It is prudent to >>>> accommodate >>>> less technically aware users. >>>> >>> >>> You may not see much "performance enhancement" if your server was not >>> heavily loaded or if it is using PHP to serve static content, such as >>> WordPress used to do up until version 3.4 and continues to do on some >>> sites that were upgraded from older versions to the current version. >>> Also, if you are running a PHP daemon and a MySQL server on the same >>> server as you run nginx, they may contribute more to server load than >>> does nginx. Optimizing them, especially MySQL, may give you significant >>> "performance enhancement". >> >> Thanks, Jim, for the suggestions. I may look into optimizing MySQL at a >> later date. > > Going to copy someone else's procedures and write another "tutorial"? > I will record the procedure that results in a successful mission. That typically involves documenting steps taken from a variety of sources, as finding one that works without requiring changes is not commonplace. If you have any resources, please feel free to provide them. >> >>> I mention WordPress because you link to a >>> WordPress site in your signature. Since your domain was first registered >>> in November and since you only have a few posts most of which are >>> rudimentary, I am going to doubt that you don't have a lot of traffic. >>> Alexa does not even have data on your site yet, it's so new. Plus using >>> a self signed certificate and creating SSL links on your home page - >>> http://bsdbox.co - give the big red page on Chrome. I have no desire to >>> add an exception for a two month old domain. Spring for $4.99/year at >>> https://www.cheapssls.com/domain-only.html and get a PositiveSSL >>> certificate. >>> >> >> That domain only hosts a personal blog documenting FreeBSD procedures, >> and SOHO resource for colleagues, family and friends; in fact, the >> server is still running Apache and is not relevant to my observations >> pertaining to increased performance, or lack of, in transferring to >> nginx on other sites. Further, I have no desire to satisfy your trust >> concerns. My concern is to secure my own sensitive traffic. Moreover, >> the paradigm of entrusting third parties is foolish and highly >> susceptible to exploitation, but this, too, is irrelevant. Thank you for >> your concern and advice; however, I will not be purchasing a >> "PositiveSSL certificate". > > Whatever. You put a link in your signature and *very* rudimentary (and > somewhat incorrect) "tutorials" in your blog. > Please, feel free to highlight what is incorrect, Jim. I would be happy to make corrections. > In fact, on December 20, 2013 you wrote: > > "I recently decided to build my first FreeBSD box. First order of > business: roll my own Apache server to host my ownCloud service. I also > decided to stand up this WordPress site to document my progress. Mostly > for posterity?s sake; this way, I have tried-and-tested data to > reference during future UNIX operations. ?Why should I [?]" > As I said in the paragraph you quote above: "personal blog documenting FreeBSD procedures." I find it useful to record my progress. If it helps somebody else, that is good. > Learn something about being a sysadmin before writing "tutorials". > I hope to continue learning. Please, feel free to contribute in any way you like. > Anyway, opinions are like assholes. Everybody has one. Yours just > happens to be wrong. > If that is your opinion. Good for you. Like you say, "everybody has one." >> >>> The shortcomings are yours indeed. The documentation is for people who >>> understand the concepts and is not meant to be a replacement for a "for >>> Dummies" book. I believe that (almost) every directive is covered. If >>> you do not understand what the directives mean, there are many ways to >>> figure it out. In such a case, Google is your friend. >>> >> >> I have no doubt, and iterated, my inadequacies affect my >> (mis)understanding of the documentation. Similarly, I remarked on the >> utility of alternative resources; found through Google. If you have some >> "for Dummies" resources, please feel free to provide them. That would be >> good. >> >>> Comparing nginx documentation to FreeBSD documentation is a bit >>> unrealistic. FreeBSD documentation is written by volunteers of which >>> there are dozens if not hundreds. The entire project is a community >>> effort. Despite that, some is out of date. For instance, look at >>> http://www5.us.freebsd.org/doc/handbook/svn-mirrors.html. Do you see >>> svn0.eu.FreeBSD.org listed there, or its fingerprint? There may be other >>> servers missing as well. I have found many other examples but that's the >>> first that comes to mind. >>> >> >> It is analogous, as is the comparison to Postfix documentation. I did >> not claim FreeBSD literature is absent error, but that it is simply more >> comprehensive and accommodates "Dummies". If nginx chooses to cater for >> "for people who understand the concepts and is not meant to be a >> replacement for a 'for Dummies' book", that is the prerogative of the >> maintainers and developers of nginx documentation. > > See above. > >> >>> Anyone who wants to *volunteer* to improve the documentation should do >>> so. I'm sure the devs would at least look at any provided patches. >>> >>> Of course, you can always create a community effort of your own and >>> organize your own wiki or alternate set of documentation. Or perhaps you >>> can apply for a job at Nginx.com to work on upgrading the documentation >>> to your standards. >>> >> >> I am certain there are people who value and appreciate the project >> enough that will choose to contribute. When the values and objectives of >> a project comport with my own, I often choose to contribute how I can; >> such as, deploying Tor exit nodes, documenting up-to-date, basic >> procedures, or making monetary donations to the FreeBSD Foundation. This >> is a nice quality of open source communities. The good ones thrive, the >> less valued do not. >> >>> The original purpose of the wiki was to serve as English documentation >>> when there was little to none. >> >> I am sure that multimillion dollar donations will contribute to further >> improvements. > > I'm not aware of any "multimillion dollar donations" but maybe you are. > Commercial funding is not a "donation". > Then, that multimillion dollar funding will surely help. >> >>> Sure, it had a bit more hand holding, but >>> it really has become superfluous at least in terms of providing up to >>> date documentation, at least IMMHO. >>> >>> >> >> You are entitled to your opinion, as am I. Your advice will be >> considered. Thank you, Jim. > > Again, see above. > >> > > Peace out. > Likewise. -- syn.bsdbox.co From andre at digirati.com.br Thu Jan 9 18:22:09 2014 From: andre at digirati.com.br (Andre Nathan) Date: Thu, 09 Jan 2014 16:22:09 -0200 Subject: Nginx, Lua and blocking libraries In-Reply-To: References: <52CEA595.2040505@digirati.com.br> Message-ID: <52CEE8D1.2000207@digirati.com.br> Thanks a lot for the detailed answer, Yichun! I'll try to benchmark it, estimate the db size, see if it fits in memory, etc. Cheers, Andre -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 555 bytes Desc: OpenPGP digital signature URL: From nginx-forum at nginx.us Thu Jan 9 19:00:24 2014 From: nginx-forum at nginx.us (Larry) Date: Thu, 09 Jan 2014 14:00:24 -0500 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) In-Reply-To: References: Message-ID: <54842c6b3225d474149fa0785a725e3a.NginxMailingListEnglish@forum.nginx.org> Thanks, I left the cookies out of this context right now I understand. But since there is a http request first why doesn't nginx is able to switch to the right certificate accordingly ? Without obliging me to create a new entry for each (which is the route I am going to take)? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246178,246197#msg-246197 From chigga101 at gmail.com Thu Jan 9 19:03:49 2014 From: chigga101 at gmail.com (Matthew Ngaha) Date: Thu, 9 Jan 2014 19:03:49 +0000 Subject: config issue Message-ID: Im trying to set up nginx with django. This is the instruction given: Symlink to this file from /etc/nginx/sites-enabled so nginx can see it: sudo ln -s ~/path/to/your/mysite/mysite_nginx.conf /etc/nginx/sites-enabled/ The problem is this folder doesn't exist: /etc/nginx/sites-enabled/ my only nginx folder is in "/usr/local/nginx" so i don't know what to do (I'm on ubuntu btw.) I've tried including the file it wanted me to symlink, into my nginx.conf via an "Include" statement but everytime i reload nginx, this Include action is causing an error. What can i do to make nginx see my django project's config file? From francis at daoine.org Thu Jan 9 19:29:53 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Jan 2014 19:29:53 +0000 Subject: config issue In-Reply-To: References: Message-ID: <20140109192953.GD19804@craic.sysops.org> On Thu, Jan 09, 2014 at 07:03:49PM +0000, Matthew Ngaha wrote: Hi there, > Im trying to set up nginx with django. This is the instruction given: > > Symlink to this file from /etc/nginx/sites-enabled so nginx can see it: That instruction assumes that the nginx config file that is being used already has something like "include /etc/nginx/sites-enabled/*conf", so that every matching file will be processed. Apparently that's not true in this case. > my only nginx folder is in "/usr/local/nginx" so i don't know what to > do (I'm on ubuntu btw.) I've tried including the file it wanted me to > symlink, into my nginx.conf via an "Include" statement but everytime i > reload nginx, this Include action is causing an error. "include" is the correct directive to use. Documentation at http://nginx.org/r/include, in case anything is unclear. Hopefully the words of the error will be enough to let you find how to overcome it. f -- Francis Daly francis at daoine.org From appa at perusio.net Thu Jan 9 19:50:24 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Thu, 9 Jan 2014 20:50:24 +0100 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) In-Reply-To: <54842c6b3225d474149fa0785a725e3a.NginxMailingListEnglish@forum.nginx.org> References: <54842c6b3225d474149fa0785a725e3a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Because the certs are parsed when the config is loaded so that you can have a SSL context right from the start. Well before the HTTP layer is touched. If you want dynamic cert loading you have to do it yourself. At a time I tried that by following a simpler path of modifying stud so that it does on the fly cert loading. Never pursued it further. The thing is you need a context right from the start and then change dynamically to use the server name, AFAICT. Le 9 janv. 2014 20:00, "Larry" a ?crit : > > Thanks, > > I left the cookies out of this context right now I understand. > > But since there is a http request first why doesn't nginx is able to switch > to the right certificate accordingly ? > > Without obliging me to create a new entry for each (which is the route I am > going to take)? > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246178,246197#msg-246197 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 9 20:00:35 2014 From: nginx-forum at nginx.us (Larry) Date: Thu, 09 Jan 2014 15:00:35 -0500 Subject: Dynamic ssl certificate ? (wildcard+ multiple different certs) In-Reply-To: References: Message-ID: <8d093685a8f3870862385c021b34dc31.NginxMailingListEnglish@forum.nginx.org> Thanks, I changed my strategy : one file programmatically modified and added to the site-enabled folder like that everything runs fine and I keep being able to meet my requirement of one root ca per client. Many thanks all of you Bye Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246178,246205#msg-246205 From francis at daoine.org Thu Jan 9 20:58:29 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Jan 2014 20:58:29 +0000 Subject: PHP below server root not served In-Reply-To: <52CEA95C.3020300@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> Message-ID: <20140109205829.GE19804@craic.sysops.org> On Fri, Jan 10, 2014 at 12:51:24AM +1100, nano wrote: > On 9/01/2014 11:57 PM, B.R. wrote: > >On Thu, Jan 9, 2014 at 1:41 PM, nano wrote: Hi there, The nginx config follows its own logic, which may not match your previous experiences. When you understand that, you'll have a much better chance of knowing the configuration you are looking for. One important feature is that one request is handled in one location. Another is that there are very specific rules on how the one location is selected. You are likely to find it frustrating to guess at possible configurations, until you can determine for yourself which one location will be used for a request. The documentation for this is at http://nginx.org/r/location, with a worked example at http://nginx.org/en/docs/http/request_processing.html#simple_php_site_configuration ("How nginx processes a request" on http://nginx.org/en/docs/) > I did not (and still do not) understand it like you do. I had the > impression that each location block would adhere to its own assignments. I'm not sure what that means. nginx must choose which location block to use. The config in, or inherited into, that block is the only config that applies for this request. > What defines a prefix string and a regular expression? Documentation, paragraph 3, sentence 2. (Or see below.) > Would " ~ > ^/phpmyadmin/(.*\.php)$ " not be the longest matching prefix to be > selected and remembered? No. If it starts "~", it's a regular expression. > I think I am gradually understanding your explanation: despite location > 3 being the longest prefix, regular expression in location 2 is found > and used. Yes, but I'd say "because" rather than "despite". > This found and used regex does not suit the requirements of > executing PHP instructions necessary to serve files from location 4? I'm not sure what that means. The location chosen is the only one used to handle this request. If the config isn't right, the config isn't right. > In my example, nginx was instructing php-fpm to execute > /usr/local/www/site1/wordpress$fastcgi_script_name for phpMyAdmin files? ...for any urls that matched the regular expression \.php$, yes. > I misunderstood Francis' advice. I thought he advised nesting my > /phpmyadmin location(s) inside the location ~ \.php$ block which further > broke my site. What I intended was: * option 1, preferred but more changes: regex location inside prefix ^~ location. * option 2, fewer changes so quicker to do: swap the order of the two regex locations. > > The syntax is identical, just those two location blocks are in a > > different place. I would like to know why this works, but am just > > happy that it does. I look forward to better understanding this > > great program. Swapping the location blocks was enough to have the block you wanted, be the one chosen for this request, because the first regex block that matched was the one you wanted to be used. The other changes were then presumably enough for the fastcgi server to know which file to process. > >II. Use a smarter (and more scalable, in light of future adds to the > >nginx config) way, which is nesting the rules of 'location > >/phpmyadmin/(.*\.php)$' in a 'location ~\.php$' block embedded in a > >'location ^~ /phpmyadmin/' block. > > > > Please, would you provide a working example of this for me to use? I > have been trying to create this smarter way but am failing miserably. I don't have a phpmyadmin install to hand here to test against, but will be surprised to learn that there is no "here is how you install on nginx" on the phpmyadmin site, or no "here is how you configure phpmyadmin" on the nginx web site -- it doesn't seem like an especially unusual thing to want to do, and hopefully someone who has done it has advertised what they did. > Does location ~\.php$ coming before /phpmyadmin/(.*\.php)$ (as it does > if the latter is nested in the former) not emulate the same situation I > created in the first place, thus rendering servername.com/phpmyadmin > broken? No -- "nested" is different from "ordered". When nginx chooses the one location to use for a request, it chooses the one top-level location{} block to use. Within that block, if there are further choices to make, they are independent. > An example would help immensely and be very much appreciated. If > not mistaken, I understand you are suggesting the following structure > but, due to my failures in implementing such advice, it appears I need > something precise: The suggestion is along the lines of: location ^~ /phpmyadmin/ { location ~ \.php$ { # config for php scripts to be fastcgi_pass'd elsewhere } # config for static files to be served directly } and then whatever other top-level location{} blocks that you want for the rest of the server config -- possibly include a "location /" and a "location ~ \.php$". Overall, I find it helpful to think "what request am I making?", and then "which location block will be used to handle it?". Followed by "will that do what I want it to do?". And have a small test system that you can easily change things on and check. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-list at puzzled.xs4all.nl Thu Jan 9 21:42:55 2014 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Thu, 09 Jan 2014 22:42:55 +0100 Subject: One link/area on a https site with a different SSL config? Message-ID: <52CF17DF.6020601@puzzled.xs4all.nl> Hi, On a Wordpress website that works with a basic StartSSL certificate I wonder if it is possible to configure nginx (1.4.4) to use a separate self-signed cert with client certificate authentication for wp-login.php and any link in wp-admin/ ? So the regular https://blog.example.org/[some/link] uses the StartSSL cert for the https session But the https://blog.example.org/wp-login.php and https://blog.example.org/wp-admin/* use a self-signed certficate with client certificate authentication for the https session Is that possible? If yes, any keywords or what to read up on are much appreciated. Thanks, Patrick From francis at daoine.org Thu Jan 9 21:45:02 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 9 Jan 2014 21:45:02 +0000 Subject: How to combine try_files with multiple proxy destinations In-Reply-To: <552e6b2529a268dc7b6ac5138e88128f.NginxMailingListEnglish@forum.nginx.org> References: <552e6b2529a268dc7b6ac5138e88128f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140109214502.GF19804@craic.sysops.org> On Wed, Jan 08, 2014 at 04:22:40AM -0500, rrrrcccc wrote: Hi there, (This is all untested, so handle with care.) > I have the folllowing requirement: > 1. if /usr/share/nginx/html/maintenance.html exists, then always show this > file to browser. That is probably best done with an "if" and a "return 503" -- there are a few approaches you can take, with their own advantages and disadvantages. > 2. if this is the static file which located in the sub-directories of > /usr/share/nginx/html/, then show this static file to browser. > 3. if the URI begins with /testapp1/, then proxy to http://127.0.0.1:8080; > else proxy to http://127.0.0.1:8081 You could use "error_page" for 404 here, following http://ngnix.org/r/error_page. But I'll suggest using try_files with a named location as fallback. Either way, you'll probably want one prefix and one named location, per set of urls. Something like: location ^~ / { try_files $uri $uri/ @proxyslash; } location @proxyslash { proxy_pass http://127.0.0.1:8080; } location ^~ /testapp1/ { try_files $uri $uri/ @proxytestapp1; } location @proxytestapp1 { proxy_pass http://127.0.0.1:8081; } f -- Francis Daly francis at daoine.org From semenukha at gmail.com Thu Jan 9 21:48:45 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Thu, 09 Jan 2014 16:48:45 -0500 Subject: One link/area on a https site with a different SSL config? In-Reply-To: <52CF17DF.6020601@puzzled.xs4all.nl> References: <52CF17DF.6020601@puzzled.xs4all.nl> Message-ID: <1414428.j2NbxgsQF2@tornado> Patrick, It's not possible, because SSL works on lower level (session layer) than HTTP (application layer). On Thursday, January 09, 2014 10:42:55 PM Patrick Lists wrote: > Hi, > > On a Wordpress website that works with a basic StartSSL certificate I > wonder if it is possible to configure nginx (1.4.4) to use a separate > self-signed cert with client certificate authentication for wp-login.php > and any link in wp-admin/ ? > > So the regular https://blog.example.org/[some/link] uses the StartSSL > cert for the https session > > But the https://blog.example.org/wp-login.php and > https://blog.example.org/wp-admin/* use a self-signed certficate with > client certificate authentication for the https session > > Is that possible? If yes, any keywords or what to read up on are much > appreciated. > > Thanks, > Patrick > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From lists at ruby-forum.com Fri Jan 10 00:19:11 2014 From: lists at ruby-forum.com (Jack D.) Date: Fri, 10 Jan 2014 01:19:11 +0100 Subject: Customized error pages for 500. In-Reply-To: <20100728064509.GA83180@rambler-co.ru> References: <043e89a180df2cb59beddbedda0c4a53@ruby-forum.com> <20100728064509.GA83180@rambler-co.ru> Message-ID: <526fe89a66eddf6a1bc4ab78e2dddbf2@ruby-forum.com> Igor Sysoev wrote in post #928721: > error_page 500 502 503 504 /500.html; > [snip] > > location = /500.html { > } > } I would like to clarify what's going on in these two lines of Igor's answer. A. Note the "/500.html" part on the first line. That is not a variable of any sort, but rather means that Nginx is expecting there to be a file named 500.html in your root. In some examples you will see "/50x.html" used instead. In that case, there must be a file that is literally named "50x.html" in your root. (The "x" is not to be mistaken for a variable) B. Note the "location = /500.html" block down below. There is nothing in this block, but it is crucial that this block exists or your custom error page will not be displayed. -- Posted via http://www.ruby-forum.com/. From nanotek at bsdbox.co Fri Jan 10 03:07:34 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 14:07:34 +1100 Subject: PHP below server root not served In-Reply-To: <20140109205829.GE19804@craic.sysops.org> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> Message-ID: <52CF63F6.2070901@bsdbox.co> On 10/01/2014 7:58 AM, Francis Daly wrote: > Hi there, > > The nginx config follows its own logic, which may not match your previous > experiences. When you understand that, you'll have a much better chance > of knowing the configuration you are looking for. > I think this is very true in my case. I will continue reading through the documentation and implementing different configurations to better my understanding of nginx. It was my mistake assuming Apache logic[0] would be used. > One important feature is that one request is handled in one location. > > Another is that there are very specific rules on how the one location > is selected. You are likely to find it frustrating to guess at possible > configurations, until you can determine for yourself which one location > will be used for a request. > > The documentation for this is at [...} > Thank you, Francis. >> I had the impression that each location block would adhere to its own assignments. > > I'm not sure what that means. nginx must choose which location block > to use. The config in, or inherited into, that block is the only config > that applies for this request. > I mean to say, I thought each location block would use the config that is inside it, notwithstanding instructions contained inside previous location blocks. Like an Apache directive, for example. >> What defines a prefix string and a regular expression? > > Documentation, paragraph 3, sentence 2. (Or see below.) > >> Would " ~ >> ^/phpmyadmin/(.*\.php)$ " not be the longest matching prefix to be >> selected and remembered? > > No. If it starts "~", it's a regular expression. > That makes things clearer. >> I think I am gradually understanding your explanation: despite location >> 3 being the longest prefix, regular expression in location 2 is found >> and used. > > Yes, but I'd say "because" rather than "despite". > >> This found and used regex does not suit the requirements of >> executing PHP instructions necessary to serve files from location 4? > > I'm not sure what that means. The location chosen is the only one used > to handle this request. If the config isn't right, the config isn't right. > I mean to say, even though location 4 contains its own config, the config from location 2 is globally used, thus rendering location 4 requests broken. >> In my example, nginx was instructing php-fpm to execute >> /usr/local/www/site1/wordpress$fastcgi_script_name for phpMyAdmin files? > > ...for any urls that matched the regular expression \.php$, yes. > More clarification. Thank you, Francis. >> I misunderstood Francis' advice. I thought he advised nesting my >> /phpmyadmin location(s) inside the location ~ \.php$ block which further >> broke my site. > > What I intended was: > > * option 1, preferred but more changes: regex location inside prefix > ^~ location. > > * option 2, fewer changes so quicker to do: swap the order of the two > regex locations. > > >>> The syntax is identical, just those two location blocks are in a >>> different place. > > Swapping the location blocks was enough to have the block you wanted, > be the one chosen for this request, because the first regex block that > matched was the one you wanted to be used. The other changes were then > presumably enough for the fastcgi server to know which file to process. > I still find this confusing: why has this move not broken the generic location ~\.php$ block, which now comes after the location /phpmyadmin/(.*\.php)$ block, rendering root requests (sitename.com) broken? The inverse breaks the /phpmyadmin/(.*\.php)$ config. >>> II. Use a smarter (and more scalable, in light of future adds to the >>> nginx config) way, which is nesting the rules of 'location >>> /phpmyadmin/(.*\.php)$' in a 'location ~\.php$' block embedded in a >>> 'location ^~ /phpmyadmin/' block. >>> >> >> Please, would you provide a working example of this for me to use? I >> have been trying to create this smarter way but am failing miserably. > > I don't have a phpmyadmin install to hand here to test against, but will > be surprised to learn that there is no "here is how you install on nginx" > on the phpmyadmin site, or no "here is how you configure phpmyadmin" on > the nginx web site -- it doesn't seem like an especially unusual thing > to want to do, and hopefully someone who has done it has advertised what > they did. > This is not as easily found as you might think[1]. Most instructions available assume a Linux platform. Further, many guides only provide instructions absent other configuration objectives, which, when incorporated into existing nginx.conf, breaks something or does not work; such as, my situation. The configuration I had was pieced from here[2] and here[3] after reading nginx.org/en/docs/http/ngx_http_core_module.html#location. >> Does location ~\.php$ coming before /phpmyadmin/(.*\.php)$ [...] not emulate the same situation I >> created in the first place > > No -- "nested" is different from "ordered". > > When nginx chooses the one location to use for a request, it chooses > the one top-level location{} block to use. Within that block, if there > are further choices to make, they are independent. > More clarification! Your explanations are really helpful. How will having ~\.php$ nested inside ^~ /phpmyadmin affect the main site (server root / sitename.com) WordPress administration of PHP? (I think you may have already answered this with your upcoming example.) >> An example would help immensely and be very much appreciated. > > The suggestion is along the lines of: > > location ^~ /phpmyadmin/ { > location ~ \.php$ { > # config for php scripts to be fastcgi_pass'd elsewhere > } > # config for static files to be served directly > } > > and then whatever other top-level location{} blocks that you want for > the rest of the server config -- possibly include a "location /" and a > "location ~ \.php$". > So, PHP directives, such as fastcgi_param SCRIPT_FILENAME, contained within the ~ \.php$ location nested inside the ^~ /phpmyadmin/ location will not apply to the rest of the site -- only to /phpmyadmin? The subsequent location ~ \.php$ applies to the rest of the site? I will attempt to implement the configuration you and John have provided and will report back with results. > > Overall, I find it helpful to think "what request am I making?", and > then "which location block will be used to handle it?". Followed by > "will that do what I want it to do?". > > And have a small test system that you can easily change things on > and check. > > Good luck with it, > > f > Francis, I really appreciate your assistance, your explanations are extremely helpful. I am sure it must be mundane educating Dummies, I am grateful for your time. Thank you very much. [0] Alias /phpmyadmin /usr/local/www/phpMyAdmin Order allow,deny Allow from all [1] http://lmgtfy.com/?q=freebsd+phpmyadmin+nginx [2] http://blog.stfalcon.com/2009/11/nginx-php-fpm-phpmyadmin/ [3] http://bin63.com/how-to-install-phpmyadmin-on-freebsd -- syn.bsdbox.co <- for dummies From nginx-list at puzzled.xs4all.nl Fri Jan 10 03:28:07 2014 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Fri, 10 Jan 2014 04:28:07 +0100 Subject: One link/area on a https site with a different SSL config? In-Reply-To: <1414428.j2NbxgsQF2@tornado> References: <52CF17DF.6020601@puzzled.xs4all.nl> <1414428.j2NbxgsQF2@tornado> Message-ID: <52CF68C7.6050806@puzzled.xs4all.nl> Hi Styopa, On 09-01-14 22:48, Styopa Semenukha wrote: > Patrick, > > It's not possible, because SSL works on lower level (session layer) than HTTP (application layer). Thank you for your feedback. That's unfortunate. I hope to see flexible SSL config one day as an enhancement (if possible). For now I guess I'll do IP based deny/allow instead. Regards, Patrick From lists at ruby-forum.com Fri Jan 10 08:13:44 2014 From: lists at ruby-forum.com (Andreas S.) Date: Fri, 10 Jan 2014 09:13:44 +0100 Subject: One link/area on a https site with a different SSL config? In-Reply-To: <52CF68C7.6050806@puzzled.xs4all.nl> References: <52CF17DF.6020601@puzzled.xs4all.nl> <1414428.j2NbxgsQF2@tornado> <52CF68C7.6050806@puzzled.xs4all.nl> Message-ID: Patrick Lists wrote in post #1132735: > On 09-01-14 22:48, Styopa Semenukha wrote: >> Patrick, >> >> It's not possible, because SSL works on lower level (session layer) than HTTP > (application layer). > > Thank you for your feedback. That's unfortunate. I hope to see flexible > SSL config one day as an enhancement (if possible). It is not possible, not with nginx nor any other web server. Read up on how the SSL handshake and HTTP over SSL works, and it should become clear. -- Posted via http://www.ruby-forum.com/. From igor at sysoev.ru Fri Jan 10 08:16:43 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 10 Jan 2014 12:16:43 +0400 Subject: One link/area on a https site with a different SSL config? In-Reply-To: References: <52CF17DF.6020601@puzzled.xs4all.nl> <1414428.j2NbxgsQF2@tornado> <52CF68C7.6050806@puzzled.xs4all.nl> Message-ID: <87E64380-0C14-4AAF-BA14-7FBFECD9FE7A@sysoev.ru> On Jan 10, 2014, at 12:13 , Andreas S. wrote: > Patrick Lists wrote in post #1132735: >> On 09-01-14 22:48, Styopa Semenukha wrote: >>> Patrick, >>> >>> It's not possible, because SSL works on lower level (session layer) than HTTP >> (application layer). >> >> Thank you for your feedback. That's unfortunate. I hope to see flexible >> SSL config one day as an enhancement (if possible). > > It is not possible, not with nginx nor any other web server. Read up on > how the SSL handshake and HTTP over SSL works, and it should become > clear. It is actually possible, at least Apache can do this with SSL renegotiation. But nginx currently does not support this. -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Fri Jan 10 08:50:05 2014 From: nginx-forum at nginx.us (bodomic) Date: Fri, 10 Jan 2014 03:50:05 -0500 Subject: monitoring cache statistics In-Reply-To: <83b1b49d0906011829k2196dbbfxd527c5a392287d84@forum.nginx.org> References: <83b1b49d0906011829k2196dbbfxd527c5a392287d84@forum.nginx.org> Message-ID: <471b8337ab223fa9ea606bd1953ffa92.NginxMailingListEnglish@forum.nginx.org> Hi people! Just a small note - this patch does not work with nginx-1.4.4 :) I think it's ok after 4.5 years, maybe there is a working version around? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2520,246221#msg-246221 From francis at daoine.org Fri Jan 10 09:36:44 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Jan 2014 09:36:44 +0000 Subject: PHP below server root not served In-Reply-To: <52CF63F6.2070901@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> <52CF63F6.2070901@bsdbox.co> Message-ID: <20140110093644.GG19804@craic.sysops.org> On Fri, Jan 10, 2014 at 02:07:34PM +1100, nano wrote: > On 10/01/2014 7:58 AM, Francis Daly wrote: Hi there, just some quick responses to parts... > It was my mistake assuming Apache logic[0] would be used. Yes -- in general, in Apache, the configuration that applies to a request can come from multiple places. In nginx, it comes from one location, plus the surrounding contexts. > I mean to say, even though location 4 contains its own config, the > config from location 2 is globally used, thus rendering location 4 > requests broken. Per nginx logic, location 4 is used for every request for which it is the "best match". It just happens that there are no such requests. It's the administrator's job to spot this "brokenness". > I still find this confusing: why has this move not broken the generic > location ~\.php$ block, which now comes after the location > /phpmyadmin/(.*\.php)$ block, rendering root requests (sitename.com) > broken? The inverse breaks the /phpmyadmin/(.*\.php)$ config. What request do you make? What (top-level) location{}s do you have? Which one location will be chosen for the request? Try those questions with each of the requests you care about, and see if you can see why it works. > >When nginx chooses the one location to use for a request, it chooses > >the one top-level location{} block to use. Within that block, if there > >are further choices to make, they are independent. > > More clarification! Your explanations are really helpful. How will > having ~\.php$ nested inside ^~ /phpmyadmin affect the main site > (server root / sitename.com) WordPress administration of PHP? (I think > you may have already answered this with your upcoming example.) The same questions apply: What request do you make? What (top-level) location{}s do you have? Which one location will be chosen for the request? > >The suggestion is along the lines of: > > > > location ^~ /phpmyadmin/ { > > location ~ \.php$ { At this point, you could instead use "location ~ ^/phpmyadmin/.*\.php$". It will match exactly the same requests -- can you see why? Depending on the rest of your setup, there may be a reason to use this. > So, PHP directives, such as fastcgi_param SCRIPT_FILENAME, contained > within the ~ \.php$ location nested inside the ^~ /phpmyadmin/ location > will not apply to the rest of the site -- only to /phpmyadmin? The > subsequent location ~ \.php$ applies to the rest of the site? Each http request is independent. Each nginx request is handled in one location. For each request, only the configuration in, or inherited into, the one location, applies. This may seem repetitive; that's because it is. Until you understand that point, you will not understand nginx configuration. Good luck with it, f -- Francis Daly francis at daoine.org From nanotek at bsdbox.co Fri Jan 10 09:37:36 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 20:37:36 +1100 Subject: PHP below server root not served In-Reply-To: <20140109205829.GE19804@craic.sysops.org> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> Message-ID: <52CFBF60.90509@bsdbox.co> On 10/01/2014 7:58 AM, Francis Daly wrote: > > The suggestion is along the lines of: > > location ^~ /phpmyadmin/ { > location ~ \.php$ { > # config for php scripts to be fastcgi_pass'd elsewhere > } > # config for static files to be served directly > } > > and then whatever other top-level location{} blocks that you want for > the rest of the server config -- possibly include a "location /" and a > "location ~ \.php$". > I have done some extensive testing, using the suggestions you and Jim provided. In summary, the nesting of "location ~ \.php$" inside "location ^~ /phpmyadmin/" does not work. The error logged is: [error] 50038#0: *7541 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: clientIP, server: domain.com, request: "GET /phpMyAdmin/ HTTP/1.1", upstream: "fastcgi://unix:/var/run/php-fpm.sock:", host: "domain.com" No matter what variation I tried[0], that was the repeated result. However, the nesting of "~ ^/phpMyAdmin/(.*\.php)$" inside "location ^~ /phpmyadmin/" does work. With everything else the same, this regular expression works. See the current working configuration, making use of your nesting advice: location ^~ /phpmyadmin { access_log off; rewrite ^ /phpMyAdmin/ permanent; } location /phpMyAdmin { root /usr/local/www; index index.php index.html; location ~ ^/phpMyAdmin/(.*\.php)$ { root /usr/local/www/; include conf.d/php-fpm; } } The included php-fpm[1] file is provided below. I do not know why your suggested regular expression does not work; however, your advice to nest does work. Thank you very much for your repeated efforts to help, it is very much appreciated. I am interested as to why this is more efficient and considered better practice than my former configuration[2]? Why the statement: "DoNotUseAlias (unless necessary)"? My original working setup[2] got the job done with only two location blocks. Your recommendations require three and a rewrite. Does this not create more work for the server as well as more lines of configuration? Francis, Jim; thank you both very much for your help. [0] example variations of your suggestions: https://cloud.bsdbox.co/public.php?service=files&t=a5ea2a41797b5845cd5c2bc5864d012b [1] conf.d/php-fpm: https://cloud.bsdbox.co/public.php?service=files&t=146de37c0f8db547022e6a164c4d14fe [2] location /phpmyadmin { alias /usr/local/www/phpMyAdmin/; index index.php index.html; } location ~ ^/phpmyadmin/(.*\.php)$ { root /usr/local/www/phpMyAdmin/; fastcgi_pass unix:/var/run/php-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME /usr/local/www/phpMyAdmin/$1; } [3] -- syn.bsdbox.co <- for dummies From nginx-forum at nginx.us Fri Jan 10 09:40:00 2014 From: nginx-forum at nginx.us (MarcPapers) Date: Fri, 10 Jan 2014 04:40:00 -0500 Subject: geoip_country_code header is deleted when I set another custom header Message-ID: Hello everybody. I have a nginx (version: nginx/0.7.67) running as a loadbalancer for two tomcat7 servers with the geoip module. It was working correctly until I changed the configuration file placed in available-locations called default (we work only with the default one). The changes were made to detect mobile traffic and include a custom header. Changes are shown below: ## BEGIN DEFAULT ## #--->content added after "server{" server { set $mobile_flag NO; ## regex for determining if it is mobile traffic ## if ($http_user_agent ~* "(android|bb\d+|meego).+mobile|avantgo|bada\/|blackberry|blazer|compal|elaine|fennec|hiptop|iemobile|ip(hone|od)|iris|kindle|lge |maemo|midp|mmp|netfront|opera m(ob|in)i|palm( os)?|phone|p(ixi|re)\/|plucker|pocket|psp|series(4|6)0|symbian|treo|up\.(browser|link)|vodafone|wap|windows (ce|phone)|xda|xiino") { set $mobile_flag YES; } if ($http_user_agent ~* "^(1207|6310|6590|3gso|4thp|50[1-6]i|770s|802s|a wa|abac|ac(er|oo|s\-)|ai(ko|rn)|al(av|ca|co)|amoi|an(ex|ny|yw)|aptu|ar(ch|go)|as(te|us)|attw|au(di|\-m|r |s )|avan|be(ck|ll|nq)|bi(lb|rd)|bl(ac|az)|br(e|v)w|bumb|bw\-(n|u)|c55\/|capi|ccwa|cdm\-|cell|chtm|cldc|cmd\-|co(mp|nd)|craw|da(it|ll|ng)|dbte|dc\-s|devi|dica|dmob|do(c|p)o|ds(12|\-d)|el(49|ai)|em(l2|ul)|er(ic|k0)|esl8|ez([4-7]0|os|wa|ze)|fetc|fly(\-|_)|g1 u|g560|gene|gf\-5|g\-mo|go(\.w|od)|gr(ad|un)|haie|hcit|hd\-(m|p|t)|hei\-|hi(pt|ta)|hp( i|ip)|hs\-c|ht(c(\-| |_|a|g|p|s|t)|tp)|hu(aw|tc)|i\-(20|go|ma)|i230|iac( |\-|\/)|ibro|idea|ig01|ikom|im1k|inno|ipaq|iris|ja(t|v)a|jbro|jemu|jigs|kddi|keji|kgt( |\/)|klon|kpt |kwc\-|kyo(c|k)|le(no|xi)|lg( g|\/(k|l|u)|50|54|\-[a-w])|libw|lynx|m1\-w|m3ga|m50\/|ma(te|ui|xo)|mc(01|21|ca)|m\-cr|me(rc|ri)|mi(o8|oa|ts)|mmef|mo(01|02|bi|de|do|t(\-| |o|v)|zz)|mt(50|p1|v )|mwbp|mywa|n10[0-2]|n20[2-3]|n30(0|2)|n50(0|2|5)|n7(0(0|1)|10)|ne((c|m)\-|on|tf|wf|wg|wt)|nok(6|i)|nzph|o2im|op(ti|wv)|oran|owg1|p800|pan(a|d|t)|pdxg|pg(13|\-([1-8]|c))|phil|pire|pl(ay|uc)|pn\-2|po(ck|rt|se)|prox|psio|pt\-g|qa\-a|qc(07|12|21|32|60|\-[2-7]|i\-)|qtek|r380|r600|raks|rim9|ro(ve|zo)|s55\/|sa(ge|ma|mm|ms|ny|va)|sc(01|h\-|oo|p\-)|sdk\/|se(c(\-|0|1)|47|mc|nd|ri)|sgh\-|shar|sie(\-|m)|sk\-0|sl(45|id)|sm(al|ar|b3|it|t5)|so(ft|ny)|sp(01|h\-|v\-|v )|sy(01|mb)|t2(18|50)|t6(00|10|18)|ta(gt|lk)|tcl\-|tdg\-|tel(i|m)|tim\-|t\-mo|to(pl|sh)|ts(70|m\-|m3|m5)|tx\-9|up(\.b|g1|si)|utst|v400|v750|veri|vi(rg|te)|vk(40|5[0-3]|\-v)|vm40|voda|vulc|vx(52|53|60|61|70|80|81|83|85|98)|w3c(\-| )|webc|whit|wi(g |nc|nw)|wmlb|wonu|x700|yas\-|your|zeto|zte\-)") { set $mobile_flag YES; } ... #-->line added after "menosbasico;" location / { proxy_pass http://xx; access_log /var/log/nginx/xx.access.log menosbasico; proxy_set_header ismobiletraffic $mobile_flag; } ## END DEFAULT FILE ## The proxy_set_header directive for geoip is placed in nginx.conf ## BEGIN NGINX.CONF ## ... ... http { access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; rewrite_log on; proxy_set_header X-Forwarded-For $remote_addr; geoip_country /usr/share/GeoIP/GeoIP.dat; proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code; ... ... ## END NGINX.CONF ## The problem is that now, the ismobiletraffic header is set correctly, but the geoip headers have disspeared from the request. Seems to have been overwritten. Should I place the mobile traffic detection on NGINX.CONF at http{ } level? I have been looking for a solution but I didn't find anything helpful. Thanks in advance for your tips and advices! Thanks, Marcos. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246224,246224#msg-246224 From nanotek at bsdbox.co Fri Jan 10 10:11:30 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 21:11:30 +1100 Subject: "Primary script unknown" wp-login.php In-Reply-To: References: <52CE2C47.8080905@bsdbox.co> <20140109110124.GC19804@craic.sysops.org> <52CE8C18.9040709@bsdbox.co> Message-ID: <52CFC752.20100@bsdbox.co> On 10/01/2014 3:46 AM, Miguel Clara wrote: >> I resolved this problem by making the /wordpress directory the server root. >> However, I now have the problem of /usr/local/www/phpMyAdmin being >> inaccessible, due to the same error. >> > > You can, and its probably best to use: > > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > > Also you should have those to in different nginx config files, its far > better to read/modify if needed. > In any case the .php block should work has long has the "root" is set > right in the different locations! > Thank you, Miguel. I have implemented your advice to use $document_root and I have also split configuration into separate files[0] as per your advice. [0] *nginx.conf* user www www; worker_processes 3; worker_priority 15; pid /var/run/nginx.pid; error_log /var/log/nginx-error.log crit; events { worker_connections 1024; accept_mutex on; use kqueue; } http { include conf.d/options; include mime.types; default_type application/octet-stream; access_log /var/log/nginx-access.log main buffer=32k; include sites/*.on; } +----[ eof ] *sites/site1.on* server { server_name site1.com www.site1.com; add_header Cache-Control "public"; add_header X-Frame-Options "DENY"; limit_req zone=gulag burst=200 nodelay; expires max; listen 80; listen 443 ssl; include conf.d/ssl; root /usr/local/www/site1; index index.html index.htm index.php; location = /favicon.ico { return 204; } location ~* \.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ { deny all; } location ~ /\. { deny all; access_log off; log_not_found off; } location / { root /usr/local/www/site1/wordpress; try_files $uri $uri/ /index.php?$args; location ~ \.php$ { include conf.d/php-fpm; } } include conf.d/phpmyadmin; location /management { root /usr/local/www/site1/administration; } location ~ \.php$ { include conf.d/php-fpm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } } +----[ eof ] *conf.d/phpmyadmin* location ^~ /phpmyadmin { access_log off; rewrite ^ /phpMyAdmin/ permanent; } location /phpMyAdmin { root /usr/local/www; index index.php index.html; location ~ ^/phpMyAdmin/(.*\.php)$ { root /usr/local/www/; include conf.d/php-fpm; } } +----[ eof ] *conf.d/php-fpm* fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; try_files $uri = 404; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_index index.php; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; fastcgi_param GEOIP_COUNTRY_CODE $geoip_country_code; fastcgi_param GEOIP_COUNTRY_NAME $geoip_country_name; include fastcgi_params; +----[ eof ] *conf.d/ssl* ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_certificate /path/to/crt-chain.pem; ssl_certificate_key /path/to/key.pem; ssl_dhparam /path/to/dhparam4096.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS !RC4"; ssl_ecdh_curve secp256r1; ssl_stapling on; ssl_stapling_verify on; ssl_prefer_server_ciphers on; +----[ eof ] *conf.d/options* client_body_timeout 5s; client_header_timeout 5s; keepalive_timeout 75s; send_timeout 15s; charset utf-8; #default_type application/octet-stream; #include /etc/mime.types; gzip off; gzip_static on; gzip_proxied any; ignore_invalid_headers on; keepalive_requests 50; keepalive_disable none; max_ranges 1; msie_padding off; open_file_cache max=1000 inactive=2h; open_file_cache_errors on; open_file_cache_min_uses 1; open_file_cache_valid 1h; output_buffers 1 512; postpone_output 1440; read_ahead 512K; recursive_error_pages on; reset_timedout_connection on; sendfile on; server_tokens off; server_name_in_redirect off; source_charset utf-8; tcp_nodelay on; tcp_nopush off; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; limit_req_zone $binary_remote_addr zone=gulag:1m rate=60r/m; log_format main '$remote_addr $host $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $ssl_cipher $request_time'; +----[ eof ] -- syn.bsdbox.co <- for dummies From ru at nginx.com Fri Jan 10 10:37:56 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 10 Jan 2014 14:37:56 +0400 Subject: geoip_country_code header is deleted when I set another custom header In-Reply-To: References: Message-ID: <20140110103756.GB65344@lo0.su> On Fri, Jan 10, 2014 at 04:40:00AM -0500, MarcPapers wrote: [...] > location / { > proxy_pass http://xx; > access_log /var/log/nginx/xx.access.log menosbasico; > proxy_set_header ismobiletraffic $mobile_flag; > } > > ## END DEFAULT FILE ## > > The proxy_set_header directive for geoip is placed in nginx.conf > > ## BEGIN NGINX.CONF ## > > ... > ... > > http { > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > rewrite_log on; > > proxy_set_header X-Forwarded-For $remote_addr; > > geoip_country /usr/share/GeoIP/GeoIP.dat; > proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code; > > ... > ... > > ## END NGINX.CONF ## > > The problem is that now, the ismobiletraffic header is set correctly, but > the geoip headers have disspeared from the request. Seems to have been > overwritten. Should I place the mobile traffic detection on NGINX.CONF at > http{ } level? I have been looking for a solution but I didn't find anything > helpful. Thanks in advance for your tips and advices! It's expected and documented behavior: http://nginx.org/r/proxy_set_header : These directives are inherited from the previous level if and only if : there are no proxy_set_header directives defined on the current level. From ru at nginx.com Fri Jan 10 10:44:49 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Fri, 10 Jan 2014 14:44:49 +0400 Subject: monitoring cache statistics In-Reply-To: <471b8337ab223fa9ea606bd1953ffa92.NginxMailingListEnglish@forum.nginx.org> References: <83b1b49d0906011829k2196dbbfxd527c5a392287d84@forum.nginx.org> <471b8337ab223fa9ea606bd1953ffa92.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140110104449.GC65344@lo0.su> On Fri, Jan 10, 2014 at 03:50:05AM -0500, bodomic wrote: > Hi people! > > Just a small note - this patch does not work with nginx-1.4.4 :) > I think it's ok after 4.5 years, maybe there is a working version around? There's the http://nginx.org/en/docs/http/ngx_http_status_module.html module that includes cache statistics. It's currently available as part of NGINX Plus, http://nginx.com/products/ From nginx-forum at nginx.us Fri Jan 10 10:57:08 2014 From: nginx-forum at nginx.us (MarcPapers) Date: Fri, 10 Jan 2014 05:57:08 -0500 Subject: geoip_country_code header is deleted when I set another custom header In-Reply-To: <20140110103756.GB65344@lo0.su> References: <20140110103756.GB65344@lo0.su> Message-ID: <0daa9f075d96c7491184eb0bf7e9e7c9.NginxMailingListEnglish@forum.nginx.org> Thanks Ruslan, obviously at the level(http server) that I was setting the mobile header, it didn't use to inherit the geoip header because it was set in a previous level (nginx configuration). I didn't see it. Thanks for your help. What I did for doing it work properly was adding the geoip at the same level, resulting like that: location / { proxy_pass http://xx; access_log /var/log/nginx/xx.access.log menosbasico; proxy_set_header GEOIP_COUNTRY_CODE $geoip_country_code; proxy_set_header ismobiletraff $mobile_flag; } Thank you very much! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246224,246230#msg-246230 From nanotek at bsdbox.co Fri Jan 10 11:37:50 2014 From: nanotek at bsdbox.co (nano) Date: Fri, 10 Jan 2014 22:37:50 +1100 Subject: PHP below server root not served In-Reply-To: <20140110093644.GG19804@craic.sysops.org> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> <52CF63F6.2070901@bsdbox.co> <20140110093644.GG19804@craic.sysops.org> Message-ID: <52CFDB8E.4010703@bsdbox.co> On 10/01/2014 8:36 PM, Francis Daly wrote: > On Fri, Jan 10, 2014 at 02:07:34PM +1100, nano wrote: >> On 10/01/2014 7:58 AM, Francis Daly wrote: > > Per nginx logic, location 4 is used for every request for which it is the > "best match". It just happens that there are no such requests. It's the > administrator's job to spot this "brokenness". > Hopefully I can improve my ability to identify and correct mistakes. >> I still find this confusing: why has this move not broken the generic >> location ~\.php$ block, which now comes after the location >> /phpmyadmin/(.*\.php)$ block, rendering root requests (sitename.com) >> broken? >> >> [...] >> >> How will having ~\.php$ nested inside ^~ /phpmyadmin affect the main site >> (server root / sitename.com) WordPress administration of PHP? (I think >> you may have already answered this with your upcoming example.) > > The same questions apply: > > What request do you make? > > What (top-level) location{}s do you have? > > Which one location will be chosen for the request? > I will contemplate these questions and try to find the right answers. >>> The suggestion is along the lines of: >>> >>> location ^~ /phpmyadmin/ { >>> location ~ \.php$ { > > At this point, you could instead use "location ~ > ^/phpmyadmin/.*\.php$". It will match exactly the same requests -- > can you see why? > Is it because "~^ /phpmyadmin/.*\.php$" will be the longest prefix string *and* will be selected because even though nginx would find regular expressions in the configuration file that match the URI request, the "^~" modifier instructs nginx to not search the regular expressions?[0] > Depending on the rest of your setup, there may be a reason to use this. I am not sure if the rest of my setup provides reason to not implement this improved location block: ~^ /phpmyadmin/.*\.php$ The deployments are simply WordPress + phpMyAdmin + ownCloud + Roundcube with the last two on subdomain vhosts. >> So, PHP directives, such as fastcgi_param SCRIPT_FILENAME, contained >> within the ~ \.php$ location nested inside the ^~ /phpmyadmin/ location >> will not apply to the rest of the site -- only to /phpmyadmin? The >> subsequent location ~ \.php$ applies to the rest of the site? > > Each http request is independent. > > Each nginx request is handled in one location. > But, doesn't "...the location with the longest matching prefix is selected and remembered. *Then regular expressions are checked, in the order of their appearance in the configuration file. The search of regular expressions terminates on the first match, and the corresponding configuration is used.* If no match with a regular expression is found then the configuration of the prefix location remembered earlier is used" imply that all requests are subject to the entirety of the configuration file and not just a specific location block? For example, if one specifies a location, such as /example/.*\.php$ and assigns certain directives inside that location block, if there are other matching expressions (\.php$) in the configuration file, they could supersede any directives contained within the /example block simply *because they come before the /example block?* > For each request, only the configuration in, or inherited into, the one > location, applies. > > This may seem repetitive; that's because it is. Until you understand > that point, you will not understand nginx configuration. > It is good; repetition makes practice. And this point you reiterate is a rule I am struggling to understand but that needs to be understood. Thanks again, Francis. Your input is very helpful. Much obliged. [0] http://nginx.org/en/docs/http/ngx_http_core_module.html#location: "If the longest matching prefix location has the ?^~? modifier then regular expressions are not checked." -- syn.bsdbox.co <- for dummies From j at jonathanleighton.com Fri Jan 10 12:18:18 2014 From: j at jonathanleighton.com (Jon Leighton) Date: Fri, 10 Jan 2014 12:18:18 +0000 Subject: proxy_cache incorrectly returning 304 Not Modified In-Reply-To: <20140109144650.GK1835@mdounin.ru> References: <52CD674E.5080007@jonathanleighton.com> <20140109144650.GK1835@mdounin.ru> Message-ID: <52CFE50A.5020307@jonathanleighton.com> Hello Maxim, Thanks for your reply. >> Does this look like a bug? Or could it be a configuration issue? I can't >> think of any reason why this should be the correct thing for the proxy >> cache to do. > > This easily can be a result of a misconfiguration (e.g., > proxy_set_header used incorrectly) and/or backend > problem. > > Additionally, response headers looks very suspicious. There > shouldn't be "Status: 304 Not Modified" in nginx responses, and > "Connection: Close" capitalization doesn't match what nginx uses. > Unless it's something introduced by Pingdom interface, this may > indicate that it's checking something which isn't nginx. Thank you. I reviewed my configuration and made some tweaks, but I'm still having trouble. Actually I haven't seen the 304 Not Modified again, but now I am getting a blank response body for the cached page ("/"). For example: GET / HTTP/1.0 User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/) Host: loco2.com 200 OK Cache-Control: max-age=600, public Content-Encoding: gzip Content-Type: text/html; charset=utf-8 Date: Fri, 10 Jan 2014 11:49:44 GMT ETag: "8a8ca149d65dd1343c60366876821659" Server: nginx/1.4.4 Status: 200 OK Strict-Transport-Security: max-age=31536000 Vary: Accept-Encoding X-Cache-Status: HIT X-Content-Type-Options: nosniff X-Frame-Options: SAMEORIGIN X-Request-Id: 0d85f7a2-cdbf-4288-a62d-05628abefdba X-UA-Compatible: chrome=1 X-XSS-Protection: 1; mode=block Connection: Close [empty response] Do you know of any reasons why the response might be blank? I copied the cache file so I could review it, and I checked that the X-Request-Id matches so it's definitely the same entry. The cache file *does* contain the response body - in gzip form. I am using "gzip off" for this location block, so I don't think nginx is interfering there. I'm at a complete loss about why this is happening, any ideas you have would be much appreciated. Thanks, Jon From j at jonathanleighton.com Fri Jan 10 12:54:21 2014 From: j at jonathanleighton.com (Jon Leighton) Date: Fri, 10 Jan 2014 12:54:21 +0000 Subject: proxy_cache incorrectly returning 304 Not Modified In-Reply-To: <52CFE50A.5020307@jonathanleighton.com> References: <52CD674E.5080007@jonathanleighton.com> <20140109144650.GK1835@mdounin.ru> <52CFE50A.5020307@jonathanleighton.com> Message-ID: <52CFED7D.6030004@jonathanleighton.com> On 10/01/14 12:18, Jon Leighton wrote: > Do you know of any reasons why the response might be blank? I copied the > cache file so I could review it, and I checked that the X-Request-Id > matches so it's definitely the same entry. The cache file *does* contain > the response body - in gzip form. I am using "gzip off" for this > location block, so I don't think nginx is interfering there. I'm at a > complete loss about why this is happening, any ideas you have would be > much appreciated. I just realised the obvious - that this is a problem for clients who don't ask for gzip. Will fix that. Don't think that explains the 304 issue though... From nginx-list at puzzled.xs4all.nl Fri Jan 10 13:22:44 2014 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Fri, 10 Jan 2014 14:22:44 +0100 Subject: One link/area on a https site with a different SSL config? In-Reply-To: <87E64380-0C14-4AAF-BA14-7FBFECD9FE7A@sysoev.ru> References: <52CF17DF.6020601@puzzled.xs4all.nl> <1414428.j2NbxgsQF2@tornado> <52CF68C7.6050806@puzzled.xs4all.nl> <87E64380-0C14-4AAF-BA14-7FBFECD9FE7A@sysoev.ru> Message-ID: <52CFF424.2030503@puzzled.xs4all.nl> On 10-01-14 09:16, Igor Sysoev wrote: > On Jan 10, 2014, at 12:13 , Andreas S. wrote: > >> Patrick Lists wrote in post #1132735: >>> On 09-01-14 22:48, Styopa Semenukha wrote: >>>> Patrick, >>>> >>>> It's not possible, because SSL works on lower level (session layer) than HTTP >>> (application layer). >>> >>> Thank you for your feedback. That's unfortunate. I hope to see flexible >>> SSL config one day as an enhancement (if possible). >> >> It is not possible, not with nginx nor any other web server. Read up on >> how the SSL handshake and HTTP over SSL works, and it should become >> clear. > > It is actually possible, at least Apache can do this with SSL renegotiation. > But nginx currently does not support this. Thanks Igor. It's good to know that it's possible with Apache. I prefer to stay with nginx so will use IP deny/allow for now. Regards, Patrick From francis at daoine.org Fri Jan 10 15:34:30 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 10 Jan 2014 15:34:30 +0000 Subject: PHP below server root not served In-Reply-To: <52CFDB8E.4010703@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> <52CF63F6.2070901@bsdbox.co> <20140110093644.GG19804@craic.sysops.org> <52CFDB8E.4010703@bsdbox.co> Message-ID: <20140110153430.GI19804@craic.sysops.org> On Fri, Jan 10, 2014 at 10:37:50PM +1100, nano wrote: > On 10/01/2014 8:36 PM, Francis Daly wrote: > >On Fri, Jan 10, 2014 at 02:07:34PM +1100, nano wrote: > >>On 10/01/2014 7:58 AM, Francis Daly wrote: Hi there, This mail is going to sound a bit negative. > >>> location ^~ /phpmyadmin/ { > >>> location ~ \.php$ { > > > >At this point, you could instead use "location ~ > >^/phpmyadmin/.*\.php$". It will match exactly the same requests -- > >can you see why? > > Is it because "~^ /phpmyadmin/.*\.php$" will be the longest prefix > string No. "^~" is not the same as "~^". "~^ /" is not the same as "~ ^/". Read everything very slowly and carefully. The order of the various squiggles matters. > >Each nginx request is handled in one location. > But, doesn't "...the location > used" imply that all requests are subject to the entirety of the > configuration file and not just a specific location block? No. Read it again. What way of phrasing it would allow you to understand that one location is chosen? Perhaps a documentation patch could be provided. > For example, if one specifies a location, such as /example/.*\.php$ and > assigns certain directives inside that location block, if there are > other matching expressions (\.php$) in the configuration file, they > could supersede any directives contained within the /example block > simply *because they come before the /example block?* No. One location is chosen. The configuration in any other location is irrelevant for this request. There is no superseding across locations. There is no merging across locations. There is only the configuration in, and inherited into, the one location that matters for this request. > It is good; repetition makes practice. And this point you reiterate is a > rule I am struggling to understand but that needs to be understood. After you accept that one location is chosen, then you can start wondering about what happens when there are nested locations, or when no locations match, and what happens when there are (e.g. rewrite module) directives outside of all locations. But until you accept that one location is chosen, you're unlikely to be comfortable making new nginx configurations. f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Fri Jan 10 16:06:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 10 Jan 2014 20:06:06 +0400 Subject: proxy_cache incorrectly returning 304 Not Modified In-Reply-To: <52CFE50A.5020307@jonathanleighton.com> References: <52CD674E.5080007@jonathanleighton.com> <20140109144650.GK1835@mdounin.ru> <52CFE50A.5020307@jonathanleighton.com> Message-ID: <20140110160606.GR1835@mdounin.ru> Hello! On Fri, Jan 10, 2014 at 12:18:18PM +0000, Jon Leighton wrote: > Hello Maxim, > > Thanks for your reply. > >> Does this look like a bug? Or could it be a configuration issue? I can't > >> think of any reason why this should be the correct thing for the proxy > >> cache to do. > > > > This easily can be a result of a misconfiguration (e.g., > > proxy_set_header used incorrectly) and/or backend > > problem. > > > > Additionally, response headers looks very suspicious. There > > shouldn't be "Status: 304 Not Modified" in nginx responses, and > > "Connection: Close" capitalization doesn't match what nginx uses. > > Unless it's something introduced by Pingdom interface, this may > > indicate that it's checking something which isn't nginx. > > Thank you. I reviewed my configuration and made some tweaks, but I'm > still having trouble. Actually I haven't seen the 304 Not Modified > again, but now I am getting a blank response body for the cached page ("/"). > > For example: > > GET / HTTP/1.0 > User-Agent: Pingdom.com_bot_version_1.4_(http://www.pingdom.com/) > Host: loco2.com > > 200 OK > Cache-Control: max-age=600, public > Content-Encoding: gzip > Content-Type: text/html; charset=utf-8 > Date: Fri, 10 Jan 2014 11:49:44 GMT > ETag: "8a8ca149d65dd1343c60366876821659" > Server: nginx/1.4.4 > Status: 200 OK > Strict-Transport-Security: max-age=31536000 > Vary: Accept-Encoding > X-Cache-Status: HIT > X-Content-Type-Options: nosniff > X-Frame-Options: SAMEORIGIN > X-Request-Id: 0d85f7a2-cdbf-4288-a62d-05628abefdba > X-UA-Compatible: chrome=1 > X-XSS-Protection: 1; mode=block > Connection: Close > > [empty response] > > Do you know of any reasons why the response might be blank? I copied the > cache file so I could review it, and I checked that the X-Request-Id > matches so it's definitely the same entry. The cache file *does* contain > the response body - in gzip form. It looks like your Pingdom reports are a bit confusing, and most likely there is a body in the response, but Pingdom doesn't show it because it's gzipped. You may try testing with telnet / curl to see what actually goes on. > I am using "gzip off" for this > location block, so I don't think nginx is interfering there. I'm at a > complete loss about why this is happening, any ideas you have would be > much appreciated. The "gzip off" isn't relevant, as it won't try to compress anything with Content-Encoding already set anyway. Note though that nginx doesn't understand "Vary: Accept-Encoding", and will return cached response regardless of client's Accept-Encoding. To make sure gzipped responses aren't sent to clients without gzip support you should either disable caching of such responses, or switch off gzip of your backend (e.g., with "proxy_set_header Accept-Encoding '';"), or switch on gunzip in nginx (http://nginx.org/r/gunzip). Please also see here for some debugging hints: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From j at jonathanleighton.com Fri Jan 10 16:37:14 2014 From: j at jonathanleighton.com (Jon Leighton) Date: Fri, 10 Jan 2014 16:37:14 +0000 Subject: proxy_cache incorrectly returning 304 Not Modified In-Reply-To: <20140110160606.GR1835@mdounin.ru> References: <52CD674E.5080007@jonathanleighton.com> <20140109144650.GK1835@mdounin.ru> <52CFE50A.5020307@jonathanleighton.com> <20140110160606.GR1835@mdounin.ru> Message-ID: <52D021BA.1030506@jonathanleighton.com> Hello! On 10/01/14 16:06, Maxim Dounin wrote: > Note though that nginx doesn't understand "Vary: Accept-Encoding", > and will return cached response regardless of client's > Accept-Encoding. To make sure gzipped responses aren't sent to > clients without gzip support you should either disable caching of > such responses, or switch off gzip of your backend (e.g., with > "proxy_set_header Accept-Encoding '';"), or switch on gunzip in > nginx (http://nginx.org/r/gunzip). Thanks, this is the conclusion I came to also. Also I agree about Pingdom probably just not making it clear that the response was all binary. I am hopefully going to solve it by putting the compression method (based on Accept-Encoding) in the cache key. Not sure if the 304 issue will come back but it's a step in the right direction at least. Thanks, Jon From nginx-forum at nginx.us Fri Jan 10 20:22:45 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 10 Jan 2014 15:22:45 -0500 Subject: [ANN] Windows nginx 1.5.9.1 Cheshire Message-ID: 14:05 10-1-2014: nginx 1.5.9.1 Cheshire When she sleeps she gently purrs, you hardly know she's there, but when she wakes you're gonna hear her roar. nginx Cheshire release is here ! This native build runs on Windows XP SP3 and higher, both 32 and 64 bit. Based on nginx 1.5.9 (4-1-2014) with; + changed compile order + prove01.zip (onsite), a Windows Test_Suite way to show/prove it all really works + ngx_http_auth_ldap2 (experimental, https://github.com/kvspb/nginx-auth-ldap) follow examples on github site, not the site example in example.conf, this is an experimental build addition ! (when not used it won't affect anything else) + set-misc-nginx-module (https://github.com/agentzh/set-misc-nginx-module) + headers-more-nginx-module (https://github.com/agentzh/headers-more-nginx-module) + openssl-1.0.1f (upgraded 8-1-2014) + lua-nginx-module v0.9.4 (upgraded 9-1-2014) + Streaming with nginx-rtmp-module, v1.1.1 (upgraded 10-1-2014) + echo-nginx-module v0.50 (upgraded 8-1-2014) - RDNS has been removed until a blocking issue has been resolved + added http_auth_request_module + Source changes back ported + Source changes add-on's back ported * Additional specifications are like 19:46 18-12-2013: nginx 1.5.8.3 Caterpillar Builds can be found here: http://nginx-win.ecsds.eu/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246256,246256#msg-246256 From kml865 at airpost.net Fri Jan 10 22:32:41 2014 From: kml865 at airpost.net (kml865 at airpost.net) Date: Fri, 10 Jan 2014 14:32:41 -0800 Subject: perl regex to extract "domain" and "extension" into variables from 'server_name' Message-ID: <1389393161.22839.69247761.706494A6@webmail.messagingengine.com> I want to extract just the "domain" and "extension" parts from an nginx server_name for use as variables later in nginx conf. For example, server_name ... ? ...>; ... location = /test.html { alias /local/path/to/$domain.$extension.html; } No matter what the server_name contains, from mydomain.tld to https://www.mydomain.tld:80/something and variations in between,the result should be $domain = mydomain $extension = tld I found this post "Perl Regex to get the root domain of a URL" http://stackoverflow.com/questions/15627892/perl-regex-to-get-the-root-domain-of-a-url that suggest this perl regex ^.*://(?:[wW]{3}\.)?([^:/]*).*$ works to extract & return "domain.tld" from any from of input URI. What's the right form of that regex in nginx's server_name to populate and variables for subsequent use? From agentzh at gmail.com Sat Jan 11 05:44:12 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 10 Jan 2014 21:44:12 -0800 Subject: [ANN] ngx_openresty mainline version 1.5.8.1 released Message-ID: Hello folks! I am happy to announce that the new mainline version of ngx_openresty, 1.5.8.1, is now released: http://openresty.org/#Download This is the first openresty release with the latest nginx 1.5.8 core bundled. And we have a lot of components updated as usual, which reflects the ongoing active development in this project. Special thanks go to all our contributors for making this happen! This release still reflects our current focus on stability and performance improvements. Getting things right and fast is always of our first priority. More speedup will come from both the ngx_lua module and LuaJIT v2.1 soon. But we may also add more new features in the near future to make more users happy :) Below is the complete change log for this release, as compared to the last (mainline) release, 1.4.3.9: * change: now we default to LuaJIT instead of the standard Lua 5.1 interpreter. the "--with-luajit" option for "./configure" is now the default. To use the standard Lua 5.1 interpreter, specify the "--with-lua51" option explicitly. thanks smallfish for the suggestion. * bugfix: Nginx's built-in resolver did not accept fully qualified domain names (with a trailing dot). * optimize: shortened the "Server" response header string "ngx_openresty" to "openresty". * upgraded the Nginx core to 1.5.8. * see the changes here: * upgraded LuaJIT to v2.1-20140109. * bugfix: fixed ABC (Array Bounds Check) elimination. (Mike Pall) * bugfix: fixed MinGW build. (Mike Pall) * bugfix: x86: fixed stack slot counting for IR_CALLA (affects table.new). (Mike Pall) this could lead to random table field missing issues in LuaRestyMySQLLibrary on i386. thanks lhmwzy for the report. * bugfix: fixed compilation of "string.byte(s, nil, n)". (Mike Pall) * bugfix: MIPS: Cosmetic fix for interpreter. (Mike Pall) * upgraded LuaNginxModule to 0.9.4. * feature: allow use of ngx.exit() in the context of header_filter_by_lua* to perform a "filter finalization". but in this context ngx.exit() is an asynchronous operation and returns immediately. * feature: added the optional 5th argument, "res_table", to ngx.re.match() which is the user-supplied result table for the resulting captures. This feature can give 12%+ speedup for simple ngx.re.match() calls with 4 submatch captures. * feature: ngx.escape_uri() and ngx.unescape_uri() now accept a "nil" argument, which is equivalent to an empty string. * feature: added new pure C API, "ngx_http_lua_ffi_max_regex_cache_size", for FFI-based implementations like LuaRestyCoreLibrary. * change: ngx.decode_base64() now only accepts string arguments. * bugfix: coroutines might incorrectly enter the "dead" state even right after creation with coroutine.create(). thanks James Hurst for the report. * bugfix: segmentation fault might happen when aborting a "light thread" pending on downstream cosocket writes. thanks Aviram Cohen for the report. * bugfix: we might try sending the response header again in ngx.exit() when the header was already sent. * bugfix: subrequests initiated by ngx.location.capture() might send their own response headers more than once. this issue might also lead to the alert message "header already sent" and request aborts when nginx 1.5.4+ was used. * bugfix: fixed incompatibilities in Nginx 1.5.8 which breaks the resolver API in the Nginx core. * bugfix: fixed a compilation warning when PCRE is disabled in the build. thanks Jay for the patch. * bugfix: we did not set the shortcut fields in "r->headers_in" for request headers in our subrequests created by ngx.location.capture*(), which might cause inter-operative issues with other Nginx modules. thanks Aviram Cohen for the original patch. * optimize: we no longer clear the "lua_State" pointers for dead "light threads" such that their coroutine context structs could be reused by other "light threads" and user coroutines. this can lead to smaller memory footprint. * doc: documented that the coroutine.* API can be used in init_by_lua* since 0.9.2. thanks Ruoshan Huang for the reminder. * upgraded LuaRestyMemcachedLibrary to 0.13. * optimize: saved one cosocket receive() call in the get() and gets() methods. * bugfix: the Memcached connection might enter a bad state when read timeout happens because LuaNginxModule's cosocket reading calls no longer automatically close the connection in this case. thanks Dane Knecht for the report. * upgraded LuaRestyRedisLibrary to 0.18. * optimize: eliminated one (potentially expensive) "string.sub()" call in the Redis reply parser. * bugfix: the Redis connection might enter a bad state when read timeout happens because LuaNginxModule's cosocket reading calls no longer automatically close the connection in this case. * upgraded LuaRestyLockLibrary to 0.02. * bugfix: the lock() method accepted nil keys silently. * upgraded LuaRestyDNSLibrary to 0.11. * bugfix: avoided use of the module() built-in to define the Lua module. * bugfix: we did not reject bad domain names with a leading dot. thanks Dane Knecht for the report. * bugfix: error handling fixes in the query and tcp_query methods. * upgraded LuaRestyCoreLibrary to 0.0.3. * feature: updated to comply with LuaNginxModule 0.9.4. * bugfix: resty.core.regex: the ngx.re API did not honour the lua_regex_cache_max_entries configuration directive. * optimize: ngx.re.gsub used to use literal type string "const char *" in ffi.cast() which is expensive in interpreter mode. now we use the ctype object directly, which leads to 11% in interpreter mode. * upgraded EchoNginxModule to 0.51. * bugfix: for Nginx 1.2.6+ and 1.3.9+, the main request reference count might go out of sync when Nginx's request body reader returned status code 300+. thanks Hungpu DU for the report. * bugfix: echo_request_body truncated the response body prematurely when the request body was in memory (because the request reader sets "last_buf" in this case). thanks Hungpu DU for the original patch. * bugfix: using $echo_timer_elapsed variable alone in the configuration caused segmentation faults. thanks Hungpu DU for the report. * doc: typo fix in the echo_foreach_split sample code. thanks Hungpu DU for the report. * upgraded DrizzleNginxModule to 0.1.7. * bugfix: fixed most of warnings and errors from the Microsoft Visual C++ compiler, reported by Edwin Cleton. * upgraded HeadersMoreNginxModule to 0.25. * bugfix: fixed a warning from the Microsoft C compiler. thanks Edwin Cleton for the report. * doc: documented the limitation that we cannot remove the "Connection" response header with this module. thanks Michael Orlando for bringing this up. * upgraded SetMiscNginxModule to 0.24. * bugfix: fixed the warnings from the Microsoft C compiler. thanks Edwin Cleton for the report. * upgraded SrcacheNginxModule to 0.25. * feature: now the value specified in srcache_store_skip is evaluated and tested again right after the end of the response body data stream is seen. thanks Eldar Zaitov for the patch. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1005008 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy and happy new year! Best regards, -agentzh From artemrts at ukr.net Sat Jan 11 21:45:47 2014 From: artemrts at ukr.net (wishmaster) Date: Sat, 11 Jan 2014 23:45:47 +0200 Subject: fastcgi_cache and 304 response Message-ID: <1389475557.326545676.9oj6nirb@frv34.ukr.net> Hi, I use nginx + php-fpm (via fcgi) and needed responses from php-server are putting into cache. I have one thought, could be better send cached pages to clients from cache with 304 code instead 200. So we must know time when response has been cached (something like variable) and send 304 response as long as page will be in cache (this time we know). Reading source codes I have not find any appropriate variable. Any ideas? From nginx-forum at nginx.us Sat Jan 11 22:36:26 2014 From: nginx-forum at nginx.us (kustodian) Date: Sat, 11 Jan 2014 17:36:26 -0500 Subject: No SPDY support in the official repository packages In-Reply-To: <000e6813fb1e7069ee13617faf43f2d1.NginxMailingListEnglish@forum.nginx.org> References: <6224078.WFQ0l5ZoG9@vbart-laptop> <000e6813fb1e7069ee13617faf43f2d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <825d83bf650ba5f2ec133180b5c1b86e.NginxMailingListEnglish@forum.nginx.org> It would be awesome if the repository would host two versions of the nginx package, one with SPDY for 6.5+ and one without it for older versions. You would need to build two versions, but it would be extremely easy to add this support, you would just need to create symbolic links 6.1, 6.2, 6.3 and 6.4 to point to directory 6 (for the non-SPDY version) on this URL http://nginx.org/packages/centos/6/, and create a new directory 6.5 which will host the version with SPDY. You would also need to change the nginx.repo from: http://nginx.org/packages/centos/6/$basearch/ to: http://nginx.org/packages/centos/$releasever/$basearch/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245553,246284#msg-246284 From nginx-forum at nginx.us Sun Jan 12 04:04:35 2014 From: nginx-forum at nginx.us (nginxmike) Date: Sat, 11 Jan 2014 23:04:35 -0500 Subject: 403 error for one of my server blocks Message-ID: <74de45987a501939d98b0a372f0dda8b.NginxMailingListEnglish@forum.nginx.org> Hi, I installed nginx on an ubuntu server and then followed this tutorial (which I'll quote from) to setup server blocks https://digitalocean.com/community/articles/how-to-configure-single-and-multiple-wordpress-site-settings-with-nginx .(I don't do any of the WordPress config in that article). It uses a common.conf that the server blocks inherit from. In one of my server blocks, I setup a basic index.html page and set my domain name for the server name in the server block file. In the directory for my other server block, I setup a node app. That's the one where I'm getting the 403 error. Anyone know why this might be happening and how I might debug and/or fix it? server block server { # URL: Correct way to redirect URL's server_name demo.com; rewrite ^/(.*)$ http://www.demo.com/$1 permanent; } server { server_name www.demo.com; root /home/demouser/sitedir; access_log /var/log/nginx/www.demo.com.access.log; error_log /var/log/nginx/www.demo.com.error.log; include global/common.conf; } common.conf # Global configuration file. # ESSENTIAL : Configure Nginx Listening Port listen 80; # ESSENTIAL : Default file to serve. If the first file isn't found, index index.php index.html index.htm; # ESSENTIAL : no favicon logs location = /favicon.ico { log_not_found off; access_log off; } # ESSENTIAL : robots.txt location = /robots.txt { allow all; log_not_found off; access_log off; } # ESSENTIAL : Configure 404 Pages error_page 404 /404.html; # ESSENTIAL : Configure 50x Pages error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/www; } # SECURITY : Deny all attempts to access hidden files .abcde location ~ /\. { deny all; } # PERFORMANCE : Set expires headers for static files and turn off logging. location ~* ^.+\.(js|css|swf|xml|txt|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|rss|atom|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { access_log off; log_not_found off; expires 30d; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246285,246285#msg-246285 From francis at daoine.org Sun Jan 12 09:44:07 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 12 Jan 2014 09:44:07 +0000 Subject: 403 error for one of my server blocks In-Reply-To: <74de45987a501939d98b0a372f0dda8b.NginxMailingListEnglish@forum.nginx.org> References: <74de45987a501939d98b0a372f0dda8b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140112094406.GL19804@craic.sysops.org> On Sat, Jan 11, 2014 at 11:04:35PM -0500, nginxmike wrote: Hi there, > In the directory for my other server block, I setup a node app. That's the > one where I'm getting the 403 error. > > Anyone know why this might be happening and how I might debug and/or fix > it? What http request do you make that does not give the response that you want? What does the error log say about that request? f -- Francis Daly francis at daoine.org From nanotek at bsdbox.co Sun Jan 12 10:27:23 2014 From: nanotek at bsdbox.co (nano) Date: Sun, 12 Jan 2014 21:27:23 +1100 Subject: PHP below server root not served In-Reply-To: <20140110153430.GI19804@craic.sysops.org> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> <52CF63F6.2070901@bsdbox.co> <20140110093644.GG19804@craic.sysops.org> <52CFDB8E.4010703@bsdbox.co> <20140110153430.GI19804@craic.sysops.org> Message-ID: <52D26E0B.7030404@bsdbox.co> On 11/01/2014 2:34 AM, Francis Daly wrote: > On Fri, Jan 10, 2014 at 10:37:50PM +1100, nano wrote: >> On 10/01/2014 8:36 PM, Francis Daly wrote: >>> On Fri, Jan 10, 2014 at 02:07:34PM +1100, nano wrote: >>>> On 10/01/2014 7:58 AM, Francis Daly wrote: > > Hi there, > > This mail is going to sound a bit negative. > I find your disposition pleasant and more than generous, not negative at all, Francis, and I am very grateful for your continued assistance. >>>>> location ^~ /phpmyadmin/ { >>>>> location ~ \.php$ { >>> >>> At this point, you could instead use "location ~ >>> ^/phpmyadmin/.*\.php$". It will match exactly the same requests -- >>> can you see why? >> >> Is it because "~^ /phpmyadmin/.*\.php$" will be the longest prefix >> string > > No. > > "^~" is not the same as "~^". "~^ /" is not the same as "~ ^/". > Another presumption on my part, however, where is the nginx regex documentation? I cannot seem to find it or even what syntax nginx uses. Also, what is the answer, I still cannot figure it out? > Read everything very slowly and carefully. The order of the various > squiggles matters. > >>> Each nginx request is handled in one location. > >> But, doesn't "...the location > >> used" imply that all requests are subject to the entirety of the >> configuration file and not just a specific location block? > > No. Read it again. > > What way of phrasing it would allow you to understand that one location > is chosen? Perhaps a documentation patch could be provided. > I found the request processing page [0] more explicit and, for me, comprehensible. A cursory look at any search engine result indicates a lot of people struggle with this point; your suggestion of a documentation patch might be a good idea. If I actually understood the logic of this particular point well enough I would happily contribute to the development of such documentation. >> For example, if one specifies a location, such as /example/.*\.php$ and >> assigns certain directives inside that location block, if there are >> other matching expressions (\.php$) in the configuration file, they >> could supersede any directives contained within the /example block >> simply *because they come before the /example block?* > > No. > > One location is chosen. > > The configuration in any other location is irrelevant for this request. > > There is no superseding across locations. There is no merging across > locations. There is only the configuration in, and inherited into, > the one location that matters for this request. > >> It is good; repetition makes practice. And this point you reiterate is a >> rule I am struggling to understand but that needs to be understood. > > After you accept that one location is chosen, then you can start wondering > about what happens when there are nested locations, or when no locations > match, and what happens when there are (e.g. rewrite module) directives > outside of all locations. > > But until you accept that one location is chosen, you're unlikely to be > comfortable making new nginx configurations. > > f > Thank you, Francis. I need to understand what each prefix and regex character is and what it does. For example, the documentation is clear that "^~" prefix will stop the search if it matches the request. However, there is nothing regarding "~^". This might help me better construct my location blocks and ensure the correct location is used for each request. [0] http://nginx.org/en/docs/http/request_processing.html -- syn.bsdbox.co <- for dummies From nginx-forum at nginx.us Sun Jan 12 13:57:42 2014 From: nginx-forum at nginx.us (futuredream) Date: Sun, 12 Jan 2014 08:57:42 -0500 Subject: Connection reset by peer and other problems In-Reply-To: References: <12a2cf2a344e2ea02f1d92e7c78133f0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <77c519757abc1dbbd337924aafa7f000.NginxMailingListEnglish@forum.nginx.org> Could you tell us how did you fix this problem? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,240477,246289#msg-246289 From ar at xlrs.de Sun Jan 12 17:42:18 2014 From: ar at xlrs.de (Axel) Date: Sun, 12 Jan 2014 18:42:18 +0100 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: References: Message-ID: <2f0e5d0adf8e6848275c96a8faad307c@xlrs.de> I juggled around with ssl ciphers and tried to disable RC4, but still be able to serve IE under WinXP. Those ciphers are my choice - if anyone has 'better' ciphers or prefers another order i am pleased to hear... ssl_ciphers ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA- AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA256:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES- CBC3-SHA:AES256-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!PSK:!RC4:!MD5:!LOW; You can test your ciphers online at https://www.ssllabs.com rgds Am 9.1.2014 10:29, schrieb Pekka.Panula at sofor.fi: > Hi > > My current values in my nginx configuration for ssl_protocols/ciphers > what i use is this: > > ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers RC4:HIGH:!aNULL:!MD5; > ssl_prefer_server_ciphers on; > > What are todays recommendations for ssl_ciphers option for supporting > all current OSes and browsers, even Windows XP users with IE? > Can i disable RC4? > > My nginx is compiled with OpenSSL v1.0.1. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From list_nginx at bluerosetech.com Sun Jan 12 19:08:58 2014 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Sun, 12 Jan 2014 11:08:58 -0800 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: <2f0e5d0adf8e6848275c96a8faad307c@xlrs.de> References: <2f0e5d0adf8e6848275c96a8faad307c@xlrs.de> Message-ID: <52D2E84A.9060500@bluerosetech.com> On 1/12/2014 9:42 AM, Axel wrote: > I juggled around with ssl ciphers and tried to disable RC4, but still be > able to serve IE under WinXP. > > Those ciphers are my choice - if anyone has 'better' ciphers or prefers > another order i am pleased to hear... > > ssl_ciphers > ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA- > AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-RSA-AES256-SHA256:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES- > > CBC3-SHA:AES256-SHA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!CAMELLIA:!DES:!PSK:!RC4:!MD5:!LOW; HIGH will add in only high-grade ciphers, so you don't need to add them manually or exclude export- and low-grade ciphers. You can use @STRENGTH to sort the list for you instead of doing it by hand: ssl_ciphers HIGH:!CAMELLIA:!RC4:!PSK:!aNULL:@STRENGTH; XP schannel (IE, Outlook, et al) lacks AES support, IE6 only does SSLv3. From nginx-forum at nginx.us Sun Jan 12 22:27:21 2014 From: nginx-forum at nginx.us (bidwell) Date: Sun, 12 Jan 2014 17:27:21 -0500 Subject: username mapping for imap/pop Message-ID: <77c43cb6fedac178c65a85193c271d49.NginxMailingListEnglish@forum.nginx.org> I need to map from username to login name where I wish to map from just username to username at domain.org for gmail and windomain\username for an exchange server. Is there a way for me to build in hooks to change the username before connecting. I can to that in my mailauth.pm module but don't know how to return the updated username. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246296,246296#msg-246296 From nginx-forum at nginx.us Mon Jan 13 02:14:37 2014 From: nginx-forum at nginx.us (wsl5shuai) Date: Sun, 12 Jan 2014 21:14:37 -0500 Subject: tefasaf Message-ID: <7fb2328388af2be8c89ab547ca26a416.NginxMailingListEnglish@forum.nginx.org> gsdgdsagdfsahfdahfdjhgshgfshjfs Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246297,246297#msg-246297 From nginx-forum at nginx.us Mon Jan 13 03:18:14 2014 From: nginx-forum at nginx.us (mrblah) Date: Sun, 12 Jan 2014 22:18:14 -0500 Subject: nginx port for socket.io Message-ID: <9797a71798afcaff541913aaaceae4b6.NginxMailingListEnglish@forum.nginx.org> I have a node application that uses websockets. I'm using a custom config file like this. However, when I post to the application, the post isn't appearing in the client side of the application. Since it's using websockets to communicate between client and server, i'm wondering if I have a problem with the port numbers. You can see in my config that the server is listening on 80, but the proxy_pass is set to localhost:3000. Should these numbers be the same? if so can I set Nginx to listen on 3000? /etc/nginx/conf.d/domainame.com.conf server { listen 80; server_name your-domain.com; location / { proxy_pass http://localhost:3000; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection 'upgrade'; proxy_set_header Host $host; proxy_cache_bypass $http_upgrade; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246293,246293#msg-246293 From atynefield at gmail.com Mon Jan 13 04:05:20 2014 From: atynefield at gmail.com (Andy Tynefield) Date: Sun, 12 Jan 2014 22:05:20 -0600 Subject: nginx port for socket.io In-Reply-To: <9797a71798afcaff541913aaaceae4b6.NginxMailingListEnglish@forum.nginx.org> References: <9797a71798afcaff541913aaaceae4b6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2FC65490-00D8-45B8-898A-3DE4FEECCE29@gmail.com> The nginx configuration provided is valid for this use case. Ensure that the browser is attempting to connect to the domain on port 80 for the socket io stuff and ensure that socket io is listening on the same port along with the node app. [Sent from Andrew's iPhone] > On Jan 12, 2014, at 9:18 PM, "mrblah" wrote: > > I have a node application that uses websockets. I'm using a custom config > file like this. However, when I post to the application, the post isn't > appearing in the client side of the application. Since it's using > websockets to communicate between client and server, i'm wondering if I have > a problem with the port numbers. You can see in my config that the server is > listening on 80, but the proxy_pass is set to localhost:3000. Should these > numbers be the same? if so can I set Nginx to listen on 3000? > > /etc/nginx/conf.d/domainame.com.conf > > server { > listen 80; > > server_name your-domain.com; > > location / { > proxy_pass http://localhost:3000; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection 'upgrade'; > proxy_set_header Host $host; > proxy_cache_bypass $http_upgrade; > } > } > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246293,246293#msg-246293 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Mon Jan 13 06:38:09 2014 From: nginx-forum at nginx.us (bodomic) Date: Mon, 13 Jan 2014 01:38:09 -0500 Subject: monitoring cache statistics In-Reply-To: <20140110104449.GC65344@lo0.su> References: <20140110104449.GC65344@lo0.su> Message-ID: <0527b8ba5b9e0717b8c2c1f056f996e2.NginxMailingListEnglish@forum.nginx.org> Right, I have found that too, sorry for disturbance, thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,2520,246303#msg-246303 From igor at sysoev.ru Mon Jan 13 07:58:20 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 13 Jan 2014 11:58:20 +0400 Subject: username mapping for imap/pop In-Reply-To: <77c43cb6fedac178c65a85193c271d49.NginxMailingListEnglish@forum.nginx.org> References: <77c43cb6fedac178c65a85193c271d49.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7886C61C-E5F6-4795-B65B-230CA99E4221@sysoev.ru> On Jan 13, 2014, at 2:27 , bidwell wrote: > I need to map from username to login name where I wish to map from just > username to username at domain.org for gmail and windomain\username for an > exchange server. Is there a way for me to build in hooks to change the > username before connecting. I can to that in my mailauth.pm module but > don't know how to return the updated username. You can return it via header: Auth-User: user at domain.org -- Igor Sysoev http://nginx.com From ar at xlrs.de Mon Jan 13 08:59:54 2014 From: ar at xlrs.de (Axel) Date: Mon, 13 Jan 2014 09:59:54 +0100 Subject: SSL ciphers, disable or not to disable RC4? In-Reply-To: <52D2E84A.9060500@bluerosetech.com> References: <2f0e5d0adf8e6848275c96a8faad307c@xlrs.de> <52D2E84A.9060500@bluerosetech.com> Message-ID: Am 12.1.2014 20:08, schrieb Darren Pilgrim: > HIGH will add in only high-grade ciphers, so you don't need to add them > manually or exclude export- and low-grade ciphers. You can > use @STRENGTH to sort the list for you instead of doing it by hand: > > ssl_ciphers HIGH:!CAMELLIA:!RC4:!PSK:!aNULL:@STRENGTH; > > XP schannel (IE, Outlook, et al) lacks AES support, IE6 only does > SSLv3. thx for this info. i'll check the differences. rgds From nginx-forum at nginx.us Mon Jan 13 17:52:13 2014 From: nginx-forum at nginx.us (bidwell) Date: Mon, 13 Jan 2014 12:52:13 -0500 Subject: username mapping for imap/pop In-Reply-To: <7886C61C-E5F6-4795-B65B-230CA99E4221@sysoev.ru> References: <7886C61C-E5F6-4795-B65B-230CA99E4221@sysoev.ru> Message-ID: <78fe5f21779362a67d5d609cefc3ce0c.NginxMailingListEnglish@forum.nginx.org> Thank you. That is what I thought and have implimented. Tracing the problem further, my imap connection successfully authenticates and then gets a "connection closed by foreign host" going through nginx. The pop connection works just fine, but imap fails. I have nginx configured to enter on port 143 and go out to 127.0.0.1:143 where it goes through stunnel to go to imap.gmail.com:993. If I talk directly to 127.0.0.1:143 (to stunnel) it works. If I talk to nginx, it authenticates, logs correct username, target IP and port, gets the Capability list and registers a successful login to the remote (gmail) imap server and then closes the connection immediately. The following is a transcript of the telnet session: telnet nginx:143 * OK IMAP4 ready a1 LOGIN user at example.com password * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH a1 OK user at example.com first_name Last_name authenticated (Success) Connection closed by foreign host. Any suggestions as to what to try next to diagnose this? (Thanks in advance) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246296,246341#msg-246341 From multiformeingegno at gmail.com Mon Jan 13 22:12:49 2014 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Mon, 13 Jan 2014 23:12:49 +0100 Subject: fastcgi_cache_path empty Message-ID: I wanted to try fastcgi_cache on my nginx 1.5.8 as shown here http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php In nginx conf, http section, I added: fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m; In server section: set $cache_uri $request_uri; # POST requests and urls with a query string should always go to PHP if ($request_method = POST) { set $cache_uri 'null cache'; } if ($query_string != "") { set $cache_uri 'null cache'; } # Don't cache uris containing the following segments if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") { set $cache_uri 'null cache'; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") { set $cache_uri 'null cache'; } location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; include fastcgi.conf; fastcgi_pass unix:/var/run/php5-fpm.sock; ## # Fastcgi cache ## set $skip_cache 1; if ($cache_uri != "null cache") { add_header X-Cache-Debug "$cache_uri $cookie_nocache $arg_nocache$arg_comment $http_pragma $http_authorization"; set $skip_cache 0; } fastcgi_cache_bypass $skip_cache; fastcgi_cache_key $scheme$host$request_uri$request_method; fastcgi_cache_valid any 8m; fastcgi_cache_bypass $http_pragma; fastcgi_cache_use_stale updating error timeout invalid_header http_500; } I chowned /var/cache/nginx to www-data user (and group) and chmodded it to 775. I restarted nginx but the folder is always empty. Is it normal? How can I test if fastcgi_cache is working? Thanks in advance -------------- next part -------------- An HTML attachment was scrubbed... URL: From renenglish at gmail.com Tue Jan 14 08:52:47 2014 From: renenglish at gmail.com (Shafreeck Sea) Date: Tue, 14 Jan 2014 16:52:47 +0800 Subject: Does it possible to submit duplicated request with the proxy_next_upstream on In-Reply-To: <9B89B2DD-EB2A-4126-AB7D-9E86972866A0@gmail.com> References: <9B89B2DD-EB2A-4126-AB7D-9E86972866A0@gmail.com> Message-ID: Can any one help ? 2014/1/3 ??? > Hi all: > I am wondering if I set: > proxy_next_upstream error timeout; > Fox example , if the requested service is a counter , I issue the request > use the interface http://example.com/incr . The request is failed on my > first host A, then it is passed to the second host B , is the counter > likely be added twice ? > > thanks . -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jan 14 09:48:48 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 14 Jan 2014 04:48:48 -0500 Subject: Does it possible to submit duplicated request with the proxy_next_upstream on In-Reply-To: References: Message-ID: Unless the request is getting que'd while there is a short wait for host A to get online AND fail-over is also happening, its not likely to be added twice. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245979,246388#msg-246388 From nginx-forum at nginx.us Tue Jan 14 14:55:15 2014 From: nginx-forum at nginx.us (Xeron) Date: Tue, 14 Jan 2014 09:55:15 -0500 Subject: limit proxy_next_upstream In-Reply-To: <20130405212643.GB62550@mdounin.ru> References: <20130405212643.GB62550@mdounin.ru> Message-ID: <7a62eccc24d33263576604562df989fb.NginxMailingListEnglish@forum.nginx.org> Is there any update about this feature? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,238124,246431#msg-246431 From bdavis at lakeshoreint.com Tue Jan 14 16:22:36 2014 From: bdavis at lakeshoreint.com (Brian Davis) Date: Tue, 14 Jan 2014 10:22:36 -0600 Subject: Bounty for #416 Message-ID: Like the ticket creator says in the description, always serving cached versions of pages would be extremely cool, so I wanted to let people know I just offered a $500 bounty for http://trac.nginx.org/nginx/ticket/416 at Bountysource. https://www.bountysource.com/issues/972735-proxy_cache_use_stale-run-updating-in-new-thread-and-serve-stale-data-to-all -------------- next part -------------- An HTML attachment was scrubbed... URL: From eirikur at nilsson.is Tue Jan 14 16:22:44 2014 From: eirikur at nilsson.is (=?ISO-8859-1?Q?Eir=EDkur_Nilsson?=) Date: Tue, 14 Jan 2014 16:22:44 +0000 Subject: Websocket tunnel broken with existing SSL session Message-ID: We've been debugging this issue for 3 days now and even though we have a temporary fix, we're still puzzled about it. There is an iOS app, which opens a websocket connection to our server over SSL. Our server runs SmartOS and has nginx 1.5.0 (also happens on 1.4.1) proxying to a backend server running in NodeJS. To reproduce, I start my app, a websocket connection is established and works well, then I put the app to sleep for awhile until nginx kills the connection. When I reopen the app, the following happens: 1) App notices that the connection is dead and reconnects. 2) Behind the scenes, iOS reuses the SSL session from before and quickly opens a new socket. 3) A HTTP upgrade request and response flow across with no problems. 4) With a successful web-socket established on both sides, the client starts sending frames. However, none of these gets delivered to the backend server. 5) After a minute, nginx kills the connection even though the client is sending periodic pings. 6) Back to 1. I haven't managed to reduce the test case or reproduce it in another environment yet. This only happens when using SSL. In wireshark I see the websocket frames being sent from the iPhone client and TCP acked properly. What currently fixes the problem is to disable SSL session reuse in nginx. Then every websocket connection works like it should. Here is the config before the fix: ### server { ### Server port and name ### listen 80 default_server; listen 443 default_server ssl; server_name test.mydomain.com; ### SSL cert files ### ssl_certificate /opt/local/etc/nginx/ssl/certificate.crt; ssl_certificate_key /opt/local/etc/nginx/ssl/certificate.key; ### SSL specific settings ### ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers RC4:HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; keepalive_timeout 60; client_max_body_size 10m; location / { access_log off; proxy_pass http://localhost:3003; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; # WebSocket support (nginx 1.4) proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } Best regards, Eirikur Nilsson -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jan 14 16:42:06 2014 From: nginx-forum at nginx.us (xfeep) Date: Tue, 14 Jan 2014 11:42:06 -0500 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java Message-ID: <2a580d29d8829d51e76d640878d4c545.NginxMailingListEnglish@forum.nginx.org> Hi! Nginx-Clojure Module Release 0.1.0 was out several day ago! It is a module for embedding Clojure or Java programs, typically those Ring based handlers. It is an open source project hosted on Github, the site url is https://github.com/xfeep/nginx-clojure With it we can develope high performance Clojure/Java Web App on Nginx without any Java web server. By the way the result of simple performance test with Nginx-Clojure is inspiring, more details can be got from https://github.com/ptaoussanis/clojure-web-server-benchmarks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,246437#msg-246437 From nginx-forum at nginx.us Tue Jan 14 18:01:53 2014 From: nginx-forum at nginx.us (bidwell) Date: Tue, 14 Jan 2014 13:01:53 -0500 Subject: username mapping for imap/pop In-Reply-To: <78fe5f21779362a67d5d609cefc3ce0c.NginxMailingListEnglish@forum.nginx.org> References: <7886C61C-E5F6-4795-B65B-230CA99E4221@sysoev.ru> <78fe5f21779362a67d5d609cefc3ce0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <13b4ccbcbeeee58358d89614070d3fb6.NginxMailingListEnglish@forum.nginx.org> My nginx error.log shows the following: *5 upstream sent invalid response: "* CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH a1 OK test at example.com Test User authenticated (Success)" while reading response from upstream,... It appears to not like google's CAPABILITY line. Is it too long? Any suggestions? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246296,246442#msg-246442 From nginx-forum at nginx.us Tue Jan 14 21:26:41 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 14 Jan 2014 16:26:41 -0500 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings Message-ID: <7a12701d725e5cf7f60251dbd735763b.NginxMailingListEnglish@forum.nginx.org> src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In ngx_http_spdy_send_chain(ngx_connection_t *fc, ngx_chain_t *in, off_t limit): src/http/ngx_http_spdy_filter_module.c(682) : warning C4244: 'function' : conversion from 'off_t' to 'size_t', possible loss of data src/http/ngx_http_spdy_filter_module.c(701) : warning C4244: '-=' : conversion from 'off_t' to 'size_t', possible loss of data src/http/ngx_http_spdy_filter_module.c(715) : warning C4244: 'function' : conversion from 'off_t' to 'size_t', possible loss of data src/http/ngx_http_spdy_filter_module.c(751) : warning C4244: '=' : conversion from 'off_t' to 'size_t', possible loss of data src/http/ngx_http_spdy_filter_module.c(757) : warning C4244: 'function' : conversion from 'off_t' to 'size_t', possible loss of data Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246444,246444#msg-246444 From vbart at nginx.com Tue Jan 14 21:30:23 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 15 Jan 2014 01:30:23 +0400 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: <7a12701d725e5cf7f60251dbd735763b.NginxMailingListEnglish@forum.nginx.org> References: <7a12701d725e5cf7f60251dbd735763b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <10611491.O6j1SgJf5e@vbart-laptop> On Tuesday 14 January 2014 16:26:41 itpp2012 wrote: > src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings > > In ngx_http_spdy_send_chain(ngx_connection_t *fc, ngx_chain_t *in, off_t > limit): > > src/http/ngx_http_spdy_filter_module.c(682) : warning C4244: 'function' : > conversion from 'off_t' to 'size_t', possible loss of data > > src/http/ngx_http_spdy_filter_module.c(701) : warning C4244: '-=' : > conversion from 'off_t' to 'size_t', possible loss of data > > src/http/ngx_http_spdy_filter_module.c(715) : warning C4244: 'function' : > conversion from 'off_t' to 'size_t', possible loss of data > > src/http/ngx_http_spdy_filter_module.c(751) : warning C4244: '=' : > conversion from 'off_t' to 'size_t', possible loss of data > > src/http/ngx_http_spdy_filter_module.c(757) : warning C4244: 'function' : > conversion from 'off_t' to 'size_t', possible loss of data > What compiler are you using? wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Jan 14 21:33:39 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 14 Jan 2014 16:33:39 -0500 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: <10611491.O6j1SgJf5e@vbart-laptop> References: <10611491.O6j1SgJf5e@vbart-laptop> Message-ID: VC 2010, 32bit mode. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246444,246446#msg-246446 From vbart at nginx.com Tue Jan 14 21:47:52 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 15 Jan 2014 01:47:52 +0400 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: References: <10611491.O6j1SgJf5e@vbart-laptop> Message-ID: <34746228.b0iVgnacIt@vbart-laptop> On Tuesday 14 January 2014 16:33:39 itpp2012 wrote: > VC 2010, 32bit mode. > Ok. Please, try a patch below: diff -r 439d05a037a3 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 01:44:52 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 01:44:57 2014 +0400 @@ -35,7 +35,7 @@ static ngx_inline ngx_int_t ngx_http_spd ngx_connection_t *fc, ngx_http_spdy_stream_t *stream); static ngx_chain_t *ngx_http_spdy_filter_get_shadow( - ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, size_t offset, + ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, off_t offset, size_t size); static ngx_http_spdy_out_frame_t *ngx_http_spdy_filter_get_data_frame( ngx_http_spdy_stream_t *stream, size_t len, ngx_chain_t *first, @@ -702,7 +702,7 @@ ngx_http_spdy_send_chain(ngx_connection_ *ln = cl; ln = &cl->next; - rest -= size; + rest -= (size_t) size; in = in->next; if (in == NULL) { @@ -752,7 +752,7 @@ ngx_http_spdy_send_chain(ngx_connection_ } if (limit < (off_t) slcf->chunk_size) { - frame_size = limit; + frame_size = (size_t) limit; } } } @@ -777,7 +777,7 @@ ngx_http_spdy_send_chain(ngx_connection_ static ngx_chain_t * ngx_http_spdy_filter_get_shadow(ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, - size_t offset, size_t size) + off_t offset, size_t size) { ngx_buf_t *chunk; ngx_chain_t *cl; From francis at daoine.org Tue Jan 14 22:12:52 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 14 Jan 2014 22:12:52 +0000 Subject: PHP below server root not served In-Reply-To: <52D26E0B.7030404@bsdbox.co> References: <52CE78BC.7050500@bsdbox.co> <52CE98FC.8000405@bsdbox.co> <52CEA95C.3020300@bsdbox.co> <20140109205829.GE19804@craic.sysops.org> <52CF63F6.2070901@bsdbox.co> <20140110093644.GG19804@craic.sysops.org> <52CFDB8E.4010703@bsdbox.co> <20140110153430.GI19804@craic.sysops.org> <52D26E0B.7030404@bsdbox.co> Message-ID: <20140114221252.GO19804@craic.sysops.org> On Sun, Jan 12, 2014 at 09:27:23PM +1100, nano wrote: > On 11/01/2014 2:34 AM, Francis Daly wrote: > >>On 10/01/2014 8:36 PM, Francis Daly wrote: > >>>>On 10/01/2014 7:58 AM, Francis Daly wrote: Hi there, > >>>>> location ^~ /phpmyadmin/ { > >>>>> location ~ \.php$ { > >>> > >>>At this point, you could instead use "location ~ > >>>^/phpmyadmin/.*\.php$". It will match exactly the same requests -- > >>>can you see why? > Another presumption on my part, however, where is the nginx regex > documentation? I cannot seem to find it or even what syntax nginx uses. I don't think I've ever looked for nginx regex documentation. I think I've always just used "normal" regex characters, and they worked. It's not difficult to test. Example below. One thing that was not immediately obvious to me from the documentation, was that the whitespace separating the uri from the modifier in the location directive is optional. But it shows up pretty quickly in testing (and is clear from the "real" documentation, which is in the directory marked "src"). So: knowing that, it may make it easier to interpret other people's config files. > Also, what is the answer, I still cannot figure it out? Which requests would match the original top-level prefix location? Of those, which would match the first suggested nested regex location? And which would match the second suggested nested regex location? Are there any that would match only one of the two suggested nested regex locations? If so, they don't match exactly the same requests. > Thank you, Francis. I need to understand what each prefix and regex > character is and what it does. Fair enough. There are four modifiers (plus "no modifier"; plus @, which is separate). > For example, the documentation is clear > that "^~" prefix will stop the search if it matches the request. Yes, that's one of the four modifiers. > However, there is nothing regarding "~^". That's not one of the four modifiers. It is one of the four modifiers followed by something else. There's also nothing explicit regarding "~A". Or "~\.". (If it doesn't start with one of the modifiers, then it is "no modifier", which means "prefix string". If *that* doesn't start with /, it is unlikely to be useful.) Could """nginx: [emerg] invalid location modifier "~^" in /usr/local/nginx/conf/nginx.conf:29""" be made clearer? The only way you won't get that is if you don't have a separate uri in the directive. > This might help me better > construct my location blocks and ensure the correct location is used for > each request. "starts with ~" means "it's a regex". Everything after the ~ is the regex. Apart from one documented case which could not be a regex anyway. If you have as your server block the following: === server { listen 8888; location / { return 200 "location / \n"; } location ~A { return 200 "location ~A \n"; } location ~ \. { return 200 "location ~ \. \n"; } location = / { return 200 "location = / \n"; } } === Then can you predict the response to each of: curl http://localhost:8888/path curl http://localhost:8888/ curl http://localhost:8888/A.A Add your sample locations, make your sample requests, understand why each location was chosen each time. Add debug_connection 127.0.0.1; to the "events{}" block if you want to see lots more in the error log. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Jan 14 22:36:23 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 14 Jan 2014 17:36:23 -0500 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: <34746228.b0iVgnacIt@vbart-laptop> References: <34746228.b0iVgnacIt@vbart-laptop> Message-ID: <288efa2ca4e2e0f2a9b900b158baa4d1.NginxMailingListEnglish@forum.nginx.org> You missed 2, Line +-683: if (offset) { cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, // offset, size); (off_t) offset, (size_t) size); if (cl == NULL) { return NGX_CHAIN_ERROR; } and +-Line 760: if (offset) { // cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, offset, size); cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, (off_t) offset, (size_t) size); if (cl == NULL) { return NGX_CHAIN_ERROR; } Additional warning +-line 684: if (offset) { cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, (off_t) offset, (size_t) size); if (cl == NULL) { return NGX_CHAIN_ERROR; } offset = 0; src\http\ngx_http_spdy_filter_module.c(685): warning C4701: potentially uninitialized local variable 'cl' used When I add +-line 629: ngx_http_spdy_stream_t *stream; ngx_http_spdy_loc_conf_t *slcf; ngx_http_spdy_out_frame_t *frame; + cl = NULL; The warning is gone. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246444,246449#msg-246449 From mdounin at mdounin.ru Tue Jan 14 22:53:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Jan 2014 02:53:31 +0400 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: References: Message-ID: <20140114225331.GO1835@mdounin.ru> Hello! On Tue, Jan 14, 2014 at 04:22:44PM +0000, Eir?kur Nilsson wrote: > We've been debugging this issue for 3 days now and even though we have a > temporary fix, we're still puzzled about it. > > There is an iOS app, which opens a websocket connection to our server over > SSL. Our server runs SmartOS and has nginx 1.5.0 (also happens on 1.4.1) > proxying to a backend server running in NodeJS. > > To reproduce, I start my app, a websocket connection is established and > works well, then I put the app to sleep for awhile until nginx kills the > connection. When I reopen the app, the following happens: > > 1) App notices that the connection is dead and reconnects. > 2) Behind the scenes, iOS reuses the SSL session from before and quickly > opens a new socket. > 3) A HTTP upgrade request and response flow across with no problems. > 4) With a successful web-socket established on both sides, the client > starts sending frames. However, none of these gets delivered to the backend > server. > 5) After a minute, nginx kills the connection even though the client is > sending periodic pings. > 6) Back to 1. > > I haven't managed to reduce the test case or reproduce it in another > environment yet. This only happens when using SSL. In wireshark I see the > websocket frames being sent from the iPhone client and TCP acked properly. > > What currently fixes the problem is to disable SSL session reuse in nginx. > Then every websocket connection works like it should. > > Here is the config before the fix: > ### > server { > ### Server port and name ### [...] Which event method is used? If eventport, try switching to /dev/poll instead (which is expected to be used by default on SmartOS and other Solaris variants), it should fix the issue. The eventport event method is known to have problems when proxying and this may cause symptoms you see, it needs attention. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Tue Jan 14 23:04:07 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 15 Jan 2014 03:04:07 +0400 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: <288efa2ca4e2e0f2a9b900b158baa4d1.NginxMailingListEnglish@forum.nginx.org> References: <34746228.b0iVgnacIt@vbart-laptop> <288efa2ca4e2e0f2a9b900b158baa4d1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3792627.sgHauDpPRA@vbart-laptop> On Tuesday 14 January 2014 17:36:23 itpp2012 wrote: > You missed 2, > > Line +-683: > if (offset) { > cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, > // offset, size); > (off_t) offset, > (size_t) size); > if (cl == NULL) { > return NGX_CHAIN_ERROR; > } > > and +-Line 760: > if (offset) { > // cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, offset, > size); > cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, (off_t) > offset, (size_t) size); > if (cl == NULL) { > return NGX_CHAIN_ERROR; > } > > > Additional warning +-line 684: > if (offset) { > cl = ngx_http_spdy_filter_get_shadow(stream, in->buf, > (off_t) offset, (size_t) > size); > if (cl == NULL) { > return NGX_CHAIN_ERROR; > } > > offset = 0; > > src\http\ngx_http_spdy_filter_module.c(685): warning C4701: potentially > uninitialized local variable 'cl' used > > When I add +-line 629: > ngx_http_spdy_stream_t *stream; > ngx_http_spdy_loc_conf_t *slcf; > ngx_http_spdy_out_frame_t *frame; > + cl = NULL; > > The warning is gone. > [..] Thanks! I've just checked these two patches on MSVC 2010, and it seems all warnings are gone: # HG changeset patch # User Valentin Bartenev # Date 1389735892 -14400 # Node ID 439d05a037a344ae8d38b162a98391f92321d03b # Parent e5fb14e850408b2250f81751b69d2f735bbe8edc SPDY: fixed build, broken by b7ee1bae0ffa. False positive warning about the "cl" variable may be uninitialized in the ngx_http_spdy_filter_get_data_frame() call was suppressed. It is always initialized either in the "while" cycle or in the following "if" condition since frame_size cannot be zero. diff -r e5fb14e85040 -r 439d05a037a3 src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Tue Jan 14 16:24:45 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 01:44:52 2014 +0400 @@ -665,6 +665,10 @@ ngx_http_spdy_send_chain(ngx_connection_ offset = 0; } +#if (NGX_SUPPRESS_WARN) + cl = NULL; +#endif + slcf = ngx_http_get_module_loc_conf(r, ngx_http_spdy_module); frame_size = (limit && limit <= (off_t) slcf->chunk_size) # HG changeset patch # User Valentin Bartenev # Date 1389740560 -14400 # Node ID 3d83b3f1354d7d56f1e31849bfa337f75d7b1d30 # Parent 439d05a037a344ae8d38b162a98391f92321d03b SPDY: fixed off_t/size_t type conversions on 32 bits platforms. diff -r 439d05a037a3 -r 3d83b3f1354d src/http/ngx_http_spdy_filter_module.c --- a/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 01:44:52 2014 +0400 +++ b/src/http/ngx_http_spdy_filter_module.c Wed Jan 15 03:02:40 2014 +0400 @@ -35,8 +35,7 @@ static ngx_inline ngx_int_t ngx_http_spd ngx_connection_t *fc, ngx_http_spdy_stream_t *stream); static ngx_chain_t *ngx_http_spdy_filter_get_shadow( - ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, size_t offset, - size_t size); + ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, off_t offset, off_t size); static ngx_http_spdy_out_frame_t *ngx_http_spdy_filter_get_data_frame( ngx_http_spdy_stream_t *stream, size_t len, ngx_chain_t *first, ngx_chain_t *last); @@ -702,7 +701,7 @@ ngx_http_spdy_send_chain(ngx_connection_ *ln = cl; ln = &cl->next; - rest -= size; + rest -= (size_t) size; in = in->next; if (in == NULL) { @@ -752,7 +751,7 @@ ngx_http_spdy_send_chain(ngx_connection_ } if (limit < (off_t) slcf->chunk_size) { - frame_size = limit; + frame_size = (size_t) limit; } } } @@ -777,7 +776,7 @@ ngx_http_spdy_send_chain(ngx_connection_ static ngx_chain_t * ngx_http_spdy_filter_get_shadow(ngx_http_spdy_stream_t *stream, ngx_buf_t *buf, - size_t offset, size_t size) + off_t offset, off_t size) { ngx_buf_t *chunk; ngx_chain_t *cl; From nginx-forum at nginx.us Wed Jan 15 03:53:25 2014 From: nginx-forum at nginx.us (xfeep) Date: Tue, 14 Jan 2014 22:53:25 -0500 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: <2a580d29d8829d51e76d640878d4c545.NginxMailingListEnglish@forum.nginx.org> References: <2a580d29d8829d51e76d640878d4c545.NginxMailingListEnglish@forum.nginx.org> Message-ID: <317f50083c2cea4359644e8892aa4972.NginxMailingListEnglish@forum.nginx.org> May be somebody wonder what the differences between Nginx UWsig Module and Nginx Clojure Module when integrating JVM. They are quite different. Nginx Clojure Module embed JVM instance into the Nginx Worker and they are in the same process and have the same memory address space. There 's no IPC cost when using Nginx Clojure Module and even no thread swith cost by the default way. Nginx UWsig Module can not avoid IPC cost or posix standard socket cost because Nginx Worker and the JVM process are different Process when using Nginx UWsig Module. Please correct me if there 's any mistake. xfeep Wrote: ------------------------------------------------------- > Hi! > > Nginx-Clojure Module Release 0.1.0 was out several day ago! > > It is a module for embedding Clojure or Java programs, typically > those Ring based handlers. > > It is an open source project hosted on Github, the site url is > https://github.com/xfeep/nginx-clojure > > With it we can develope high performance Clojure/Java Web App on Nginx > without any Java web server. > > By the way the result of simple performance test with Nginx-Clojure is > inspiring, more details can be got from > > https://github.com/ptaoussanis/clojure-web-server-benchmarks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,246456#msg-246456 From roberto at unbit.it Wed Jan 15 04:42:12 2014 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 15 Jan 2014 05:42:12 +0100 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: <317f50083c2cea4359644e8892aa4972.NginxMailingListEnglish@forum.nginx.org> References: <2a580d29d8829d51e76d640878d4c545.NginxMailingListEnglish@forum.nginx.org> <317f50083c2cea4359644e8892aa4972.NginxMailingListEnglish@forum.nginx.org> Message-ID: > May be somebody wonder what the differences between Nginx UWsig Module > and > Nginx Clojure Module when integrating JVM. > > They are quite different. Nginx Clojure Module embed JVM instance into the > Nginx Worker and they are in the same process and have the same memory > address space. > > There 's no IPC cost when using Nginx Clojure Module and even no thread > swith cost by the default way. Are you sure about it ? i see your code transfer requests data using a pipe (that is a very good approach indeed as avoid you to introduce blocking parts in nginx) What you mean for "thread switch cost" ? -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Wed Jan 15 05:31:57 2014 From: nginx-forum at nginx.us (xfeep) Date: Wed, 15 Jan 2014 00:31:57 -0500 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: References: Message-ID: <1ad413d4d62d984409bd1391f1651ed1.NginxMailingListEnglish@forum.nginx.org> With the default setting pipe is not used. Pipe is only used for enable jvm thread pool mode only when jvm_workers > 0 (jvm_workers default = 0). Further more pipe is never used to transfer the while request or response message. When under jvm thread pool mode, pipe is only used to transfer a event flag (only one pointer size)? ONLY IF you cann't resolve your performance problems by increasing worker_processes or reducing single request-response time, you can consider the way of setting jvm_workers > 0 which is not encouraged. Thread switch cost means Thread context switch cost. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,246458#msg-246458 From roberto at unbit.it Wed Jan 15 05:46:11 2014 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 15 Jan 2014 06:46:11 +0100 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: <1ad413d4d62d984409bd1391f1651ed1.NginxMailingListEnglish@forum.nginx.org> References: <1ad413d4d62d984409bd1391f1651ed1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <50b25f6a46a6bd2a00e32362cd11d303.squirrel@manage.unbit.it> > With the default setting pipe is not used. > > Pipe is only used for enable jvm thread pool mode only when jvm_workers > > 0 > (jvm_workers default = 0). > > Further more pipe is never used to transfer the while request or response > message. > When under jvm thread pool mode, pipe is only used to transfer a event > flag > (only one pointer size)? > > ONLY IF you cann't resolve your performance problems by increasing > worker_processes or reducing single request-response time, you can > consider > the way of setting jvm_workers > 0 which is not encouraged. > > Thread switch cost means Thread context switch cost. > Sorry, but are you saying that your suggested usage for concurrency is multiprocessing ? Multiprocessing is completely alien in the jvm (the vm is not even fork-friendly) and that would mean the nginx worker completly give control to the jvm and this is something bad as the jvm is a blocking vm. The jvm world is thread-centric (and trust me, i needed to deal with dozens of different threading implementations for uWSGI and the jvm is the best one) so i am quite doubtful saying "multiprocess by default" would be a successfull approach (well the amount of memory per-app will be huge) -- Roberto De Ioris http://unbit.it From renenglish at gmail.com Wed Jan 15 07:29:02 2014 From: renenglish at gmail.com (Shafreeck Sea) Date: Wed, 15 Jan 2014 15:29:02 +0800 Subject: Add proxy_next_upstream_action to distinguish diffrient network actions Message-ID: Hi all: The directive "proxy_next_upstream error timeout" takes effect on three network actions: connection, send and recieve. In practice ,we realy want to try next upstream according to in which actions we are. For example, I do not want to try next upstream if some error occurs or timed out when recieving response from upstream, otherwise it maybe duplicate my request . The proxy_next_upstream_action is involved to address this problem , the directive takes one or more parameter : conn, send, recv which indicates whether we should try next upstream. Usage: proxy_next_upstream error timeout; proxy_next_upstream_action conn; Try next upstream if error or timed out on connection. Anyone suggests ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 15 08:45:13 2014 From: nginx-forum at nginx.us (xfeep) Date: Wed, 15 Jan 2014 03:45:13 -0500 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: <50b25f6a46a6bd2a00e32362cd11d303.squirrel@manage.unbit.it> References: <50b25f6a46a6bd2a00e32362cd11d303.squirrel@manage.unbit.it> Message-ID: roberto Wrote: ------------------------------------------------------- > > Sorry, but are you saying that your suggested usage for concurrency is > multiprocessing ? > > Multiprocessing is completely alien in the jvm (the vm is not even > fork-friendly) and that would mean the nginx worker completly give > control > to the jvm and this is something bad as the jvm is a blocking vm. > > The jvm world is thread-centric (and trust me, i needed to deal with > dozens of different threading implementations for uWSGI and the jvm is > the > best one) so i am quite doubtful saying "multiprocess by default" > would be > a successfull approach (well the amount of memory per-app will be > huge) > > > -- > Roberto De Ioris > http://unbit.it > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx One Nginx worker , typically single thread, can handle thousands of request by non-blocking events. So we need less Nginx workers than the threads in a typical java server. Java thread is not lightweight and uses more memory than general thread. I don't think JVM is blocking vm. In my eye JVM just a executor of java class. By JNI we can any no-blocking things just as a general Nginx modula written by c. With the default mode in Nginx-Clojure 0.1.0 there 's one jvm instance embed per Nginx worker. The will be some no-blocking api provided by the next version of Nginx-Clojure. With Nginx-Clojure 0.1.0 you can also use one JVM instance with a thread pool if you like, just set nginx worker processes = 1 and set jvm_workers = the number of java threads you will use. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,246469#msg-246469 From nginx-forum at nginx.us Wed Jan 15 08:53:23 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 15 Jan 2014 03:53:23 -0500 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: <3792627.sgHauDpPRA@vbart-laptop> References: <3792627.sgHauDpPRA@vbart-laptop> Message-ID: I see only the '#if (NGX_SUPPRESS_WARN)' commit on hg, where are the others? can't attach anything here, after changeset 5516:439d05a037a3 additional changes are embedded here: http://nginx-win.ecsds.eu/ngx_http_spdy_filter_module.c Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246444,246470#msg-246470 From roberto at unbit.it Wed Jan 15 08:55:31 2014 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 15 Jan 2014 09:55:31 +0100 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: References: <50b25f6a46a6bd2a00e32362cd11d303.squirrel@manage.unbit.it> Message-ID: > roberto Wrote: > ------------------------------------------------------- >> >> Sorry, but are you saying that your suggested usage for concurrency is >> multiprocessing ? >> >> Multiprocessing is completely alien in the jvm (the vm is not even >> fork-friendly) and that would mean the nginx worker completly give >> control >> to the jvm and this is something bad as the jvm is a blocking vm. >> >> The jvm world is thread-centric (and trust me, i needed to deal with >> dozens of different threading implementations for uWSGI and the jvm is >> the >> best one) so i am quite doubtful saying "multiprocess by default" >> would be >> a successfull approach (well the amount of memory per-app will be >> huge) >> >> >> -- >> Roberto De Ioris >> http://unbit.it >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > One Nginx worker , typically single thread, can handle thousands of > request > by non-blocking events. So we need less Nginx workers than the threads in > a > typical java server. yes, for sure, the problem is that you need to integrate the whole jvm part with the nginx event loop, you cannot use other ways (unless you introduce back threads) This is what i told you in the clojure list: invest on implementing the nginx api as jni, otherwise all will remain a proof of concept. Just as an example, how do you plan to integrate with mysql in a non-blocking way ? you need a mysql adapter that will use the nginx api and so on... (huge work honestly) and yes, until you do not manage to integrate the nginx api in jni, the jvm is a blocking part for nginx :) I hate to bore you expecially because you are pushing a new technology and the last thing you need is losing entusiasm or being blasted, but the world is already full of people doing non-blocking programming in the wrong way :) -- Roberto De Ioris http://unbit.it From nginx-forum at nginx.us Wed Jan 15 09:29:41 2014 From: nginx-forum at nginx.us (xfeep) Date: Wed, 15 Jan 2014 04:29:41 -0500 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: References: Message-ID: Thanks for your good hints. I see there 's a good no-blocking mysql client Libdrizzle from https://launchpad.net/drizzle. With Nginx-lua or Openresty , Libdrizzle has been used in production. I won't hope every thing is no-blocking and I think it is unnecessary. Only when blocking hurt us we have to conside using no-blocking methods. For design of no-blocking api we can only use a simple callback pattern, let user to determine whether use threads or not. Even in Nginx-Clojure 0.1.0, developers can also use his owner thread or thread pool. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,246473#msg-246473 From roberto at unbit.it Wed Jan 15 09:33:32 2014 From: roberto at unbit.it (Roberto De Ioris) Date: Wed, 15 Jan 2014 10:33:32 +0100 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: References: Message-ID: > Thanks for your good hints. > > I see there 's a good no-blocking mysql client Libdrizzle from > https://launchpad.net/drizzle. > > With Nginx-lua or Openresty , Libdrizzle has been used in production. yes, both can be of great inspiration for you/your project > > I won't hope every thing is no-blocking and I think it is unnecessary. > unfortunataley (well, today is the second time i say thins thing ;) if you are developing in a non-blocking environment, all must be non-blocking without exceptions -- Roberto De Ioris http://unbit.it From vbart at nginx.com Wed Jan 15 09:56:14 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 15 Jan 2014 13:56:14 +0400 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: References: <3792627.sgHauDpPRA@vbart-laptop> Message-ID: <3367847.eJRM2hJkbF@vbart-laptop> On Wednesday 15 January 2014 03:53:23 itpp2012 wrote: > I see only the '#if (NGX_SUPPRESS_WARN)' commit on hg, where are the others? > can't attach anything here, after changeset 5516:439d05a037a3 additional > changes are embedded here: > http://nginx-win.ecsds.eu/ngx_http_spdy_filter_module.c > It was delayed for internal review. The first patch was posted for review even before you have complained about the problem. Now the second patch is committed too with expanded commit log: http://hg.nginx.org/nginx/rev/9d1479234f3c wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Jan 15 10:08:21 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 15 Jan 2014 05:08:21 -0500 Subject: src/http/ngx_http_spdy_filter_module.c, latest changesets compiler warnings In-Reply-To: <3367847.eJRM2hJkbF@vbart-laptop> References: <3367847.eJRM2hJkbF@vbart-laptop> Message-ID: <9fe0469eae7dc51a4ece3a8a73f7ccfd.NginxMailingListEnglish@forum.nginx.org> Aha, I need more patience :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246444,246478#msg-246478 From nginx-forum at nginx.us Wed Jan 15 10:23:01 2014 From: nginx-forum at nginx.us (xfeep) Date: Wed, 15 Jan 2014 05:23:01 -0500 Subject: Nginx-Clojure Module Release 0.1.0--Let Nginx embrace Clojure & Java In-Reply-To: References: Message-ID: <2534e93864c4e8c7e58356d95d17ce0d.NginxMailingListEnglish@forum.nginx.org> Thanks a lot! So far about the nginx-clojure project the most valuable adviceI have received is from you! Best regards! roberto Wrote: ------------------------------------------------------- > > Thanks for your good hints. > > > > I see there 's a good no-blocking mysql client Libdrizzle from > > https://launchpad.net/drizzle. > > > > With Nginx-lua or Openresty , Libdrizzle has been used in > production. > > yes, both can be of great inspiration for you/your project > > > > > I won't hope every thing is no-blocking and I think it is > unnecessary. > > > > unfortunataley (well, today is the second time i say thins thing ;) if > you > are developing in a non-blocking environment, all must be non-blocking > without exceptions > > -- > Roberto De Ioris > http://unbit.it > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246437,246480#msg-246480 From eirikur at nilsson.is Wed Jan 15 12:16:18 2014 From: eirikur at nilsson.is (=?ISO-8859-1?Q?Eir=EDkur_Nilsson?=) Date: Wed, 15 Jan 2014 12:16:18 +0000 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: <20140114225331.GO1835@mdounin.ru> References: <20140114225331.GO1835@mdounin.ru> Message-ID: Thanks for the suggestion. We're not specifying a event method currently, though I can't see if eventport was the default. If I explicitly use /dev/poll, and turn ssl_session_cache back on, the issue comes back. I have verified that both the HTTP request and response are proxied properly. It seems to me that when the upgrade is finished nginx enters direct tunneling mode for the websocket data, which doesn't work for some sockets, at least these recovered SSL sessions from iOS clients. The event method issue would have explained why I can't reproduce the issue on mac (with self signed cert). I also haven't reproduced it with an Android client, although I did not verify with tcpdump if my android test reused the SSL session. Any other ideas? - Eirikur On Tue, Jan 14, 2014 at 10:53 PM, Maxim Dounin wrote: > Hello! > > On Tue, Jan 14, 2014 at 04:22:44PM +0000, Eir?kur Nilsson wrote: > > > We've been debugging this issue for 3 days now and even though we have a > > temporary fix, we're still puzzled about it. > > > > There is an iOS app, which opens a websocket connection to our server > over > > SSL. Our server runs SmartOS and has nginx 1.5.0 (also happens on 1.4.1) > > proxying to a backend server running in NodeJS. > > > > To reproduce, I start my app, a websocket connection is established and > > works well, then I put the app to sleep for awhile until nginx kills the > > connection. When I reopen the app, the following happens: > > > > 1) App notices that the connection is dead and reconnects. > > 2) Behind the scenes, iOS reuses the SSL session from before and quickly > > opens a new socket. > > 3) A HTTP upgrade request and response flow across with no problems. > > 4) With a successful web-socket established on both sides, the client > > starts sending frames. However, none of these gets delivered to the > backend > > server. > > 5) After a minute, nginx kills the connection even though the client is > > sending periodic pings. > > 6) Back to 1. > > > > I haven't managed to reduce the test case or reproduce it in another > > environment yet. This only happens when using SSL. In wireshark I see the > > websocket frames being sent from the iPhone client and TCP acked > properly. > > > > What currently fixes the problem is to disable SSL session reuse in > nginx. > > Then every websocket connection works like it should. > > > > Here is the config before the fix: > > ### > > server { > > ### Server port and name ### > > [...] > > Which event method is used? If eventport, try switching to > /dev/poll instead (which is expected to be used by default on > SmartOS and other Solaris variants), it should fix the issue. The > eventport event method is known to have problems when proxying and > this may cause symptoms you see, it needs attention. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 15 15:37:29 2014 From: nginx-forum at nginx.us (bidwell) Date: Wed, 15 Jan 2014 10:37:29 -0500 Subject: imap connection to gmail closes connection Message-ID: I am running nginx 1.1.19 on an Ubuntu 12.04.4 64but server. I have nginx configured to enter on port 143 and go out to 127.0.0.1:143 where it goes through stunnel to go to imap.gmail.com:993. If I talk directly to 127.0.0.1:143 (to stunnel) it works. If I talk to nginx, it authenticates, logs correct username, target IP and port, gets the Capability list and registers a successful login to the remote (gmail) imap server and then closes the connection immediately. The following is a transcript of the telnet session: telnet nginx:143 * OK IMAP4 ready a1 LOGIN user at example.com password * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH a1 OK user at example.com first_name Last_name authenticated (Success) Connection closed by foreign host. My nginx error.log shows the following: *5 upstream sent invalid response: "* CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH a1 OK test at example.com Test User authenticated (Success)" while reading response from upstream,... It appears to not like google's CAPABILITY line. Is it too long? Any suggestions? Other connections through nginx/stunnel to exchange work just fine. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246483,246483#msg-246483 From mdounin at mdounin.ru Wed Jan 15 16:15:23 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Jan 2014 20:15:23 +0400 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: References: <20140114225331.GO1835@mdounin.ru> Message-ID: <20140115161523.GU1835@mdounin.ru> Hello! On Wed, Jan 15, 2014 at 12:16:18PM +0000, Eir?kur Nilsson wrote: > Thanks for the suggestion. > > We're not specifying a event method currently, though I can't see if > eventport was the default. If I explicitly use /dev/poll, and turn > ssl_session_cache back on, the issue comes back. > > I have verified that both the HTTP request and response are proxied > properly. It seems to me that when the upgrade is finished nginx enters > direct tunneling mode for the websocket data, which doesn't work for some > sockets, at least these recovered SSL sessions from iOS clients. > > The event method issue would have explained why I can't reproduce the issue > on mac (with self signed cert). I also haven't reproduced it with an > Android client, although I did not verify with tcpdump if my android test > reused the SSL session. > > Any other ideas? It might be helpfull to see debug log and a tcpdump. See also http://wiki.nginx.org/Debugging for hints. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jan 15 16:51:57 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Jan 2014 20:51:57 +0400 Subject: imap connection to gmail closes connection In-Reply-To: References: Message-ID: <20140115165157.GY1835@mdounin.ru> Hello! On Wed, Jan 15, 2014 at 10:37:29AM -0500, bidwell wrote: > I am running nginx 1.1.19 on an Ubuntu 12.04.4 64but server. > > I have nginx configured to enter on port 143 and go out to 127.0.0.1:143 > where it goes through stunnel to go to imap.gmail.com:993. If I talk > directly to 127.0.0.1:143 (to stunnel) it works. If I talk to nginx, it > authenticates, logs correct username, target IP and port, gets the > Capability list and registers a successful login to the remote (gmail) imap > server and then closes the connection immediately. The following is a > transcript of the telnet session: > > telnet nginx:143 > * OK IMAP4 ready > a1 LOGIN user at example.com password > * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN > X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH > a1 OK user at example.com first_name Last_name authenticated (Success) > Connection closed by foreign host. > > My nginx error.log shows the following: > *5 upstream sent invalid response: "* CAPABILITY IMAP4rev1 UNSELECT IDLE > NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE > MOVE CONDSTORE ESEARCH > a1 OK test at example.com Test User authenticated (Success)" while reading > response from upstream,... > > It appears to not like google's CAPABILITY line. Is it too long? Any > suggestions? > > Other connections through nginx/stunnel to exchange work just fine. The problem is that nginx doesn't expect multiple responses to the LOGIN command. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Jan 15 16:56:30 2014 From: nginx-forum at nginx.us (ariesone) Date: Wed, 15 Jan 2014 11:56:30 -0500 Subject: Fast CGI module "multipart/mixed" problem (it only accepts 1 "Content-Type" header) Message-ID: <64b2d2a2fafe533451e775d4f44f0e42.NginxMailingListEnglish@forum.nginx.org> It seems, when my FCGI server responds to NGINX with "Status: 200 OK\r\nContent-Type: multipart/mixed;boundary=whatever\r\n\r\nboundary=whatever\r\nContent-Type: image/jpeg\r\n\r\n" The FASTCGI module is taking the 2nd "Content-Type" only and uses it in the initial response with the 200. The client gets confused when it sees the boundaries and data later. If I remove the subsequent "Content-Type:" headers, the initial one with the boundary indicator is sent; however, the client now does not know how to interpret the . Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246487,246487#msg-246487 From nginx-forum at nginx.us Wed Jan 15 17:02:55 2014 From: nginx-forum at nginx.us (bidwell) Date: Wed, 15 Jan 2014 12:02:55 -0500 Subject: imap connection to gmail closes connection In-Reply-To: <20140115165157.GY1835@mdounin.ru> References: <20140115165157.GY1835@mdounin.ru> Message-ID: <8c66b55e2f3e2b26a2a81ad1e732e325.NginxMailingListEnglish@forum.nginx.org> Is there a work around? A google setting maybe? Is anyone else using nginx to map to imap.gmail.com? How do they get it working? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246483,246488#msg-246488 From mdounin at mdounin.ru Wed Jan 15 18:00:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 15 Jan 2014 22:00:50 +0400 Subject: imap connection to gmail closes connection In-Reply-To: <20140115172818.GX19242@aart.rice.edu> References: <20140115165157.GY1835@mdounin.ru> <20140115172818.GX19242@aart.rice.edu> Message-ID: <20140115180050.GA1835@mdounin.ru> Hello! On Wed, Jan 15, 2014 at 11:28:18AM -0600, ktm at rice.edu wrote: > On Wed, Jan 15, 2014 at 08:51:57PM +0400, Maxim Dounin wrote: > > Hello! > > > > On Wed, Jan 15, 2014 at 10:37:29AM -0500, bidwell wrote: > > > > > I am running nginx 1.1.19 on an Ubuntu 12.04.4 64but server. > > > > > > I have nginx configured to enter on port 143 and go out to 127.0.0.1:143 > > > where it goes through stunnel to go to imap.gmail.com:993. If I talk > > > directly to 127.0.0.1:143 (to stunnel) it works. If I talk to nginx, it > > > authenticates, logs correct username, target IP and port, gets the > > > Capability list and registers a successful login to the remote (gmail) imap > > > server and then closes the connection immediately. The following is a > > > transcript of the telnet session: > > > > > > telnet nginx:143 > > > * OK IMAP4 ready > > > a1 LOGIN user at example.com password > > > * CAPABILITY IMAP4rev1 UNSELECT IDLE NAMESPACE QUOTA ID XLIST CHILDREN > > > X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE MOVE CONDSTORE ESEARCH > > > a1 OK user at example.com first_name Last_name authenticated (Success) > > > Connection closed by foreign host. > > > > > > My nginx error.log shows the following: > > > *5 upstream sent invalid response: "* CAPABILITY IMAP4rev1 UNSELECT IDLE > > > NAMESPACE QUOTA ID XLIST CHILDREN X-GM-EXT-1 UIDPLUS COMPRESS=DEFLATE ENABLE > > > MOVE CONDSTORE ESEARCH > > > a1 OK test at example.com Test User authenticated (Success)" while reading > > > response from upstream,... > > > > > > It appears to not like google's CAPABILITY line. Is it too long? Any > > > suggestions? > > > > > > Other connections through nginx/stunnel to exchange work just fine. > > > > The problem is that nginx doesn't expect multiple responses to the > > LOGIN command. > > > > -- > > Maxim Dounin > > http://nginx.org/ > > > > Hi Maxim, > > I tried posting to the list but it never came through. You have to be subscribed to the list to post to it, see http://nginx.org/en/support.html. > Here is the patch > we found in the nginx archives to address this problem. [...] Thanks, it looks like the patch from this message: http://mailman.nginx.org/pipermail/nginx/2007-November/002269.html Posting the link on the list in case it will be usable for someone. Unfortunately, the patch is more like a quick-and-dirty workaround, and needs more work before it can be committed. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed Jan 15 19:18:32 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 15 Jan 2014 23:18:32 +0400 Subject: PHP below server root not served In-Reply-To: <52D26E0B.7030404@bsdbox.co> References: <20140110153430.GI19804@craic.sysops.org> <52D26E0B.7030404@bsdbox.co> Message-ID: <1766459.nZ62PzKzgo@vbart-laptop> On Sunday 12 January 2014 21:27:23 nano wrote: [..] > Another presumption on my part, however, where is the nginx regex > documentation? I cannot seem to find it or even what syntax nginx uses. Nginx uses PCRE. The documentation should be available in your system: $ man pcrepattern $ man pcresyntax wbr, Valentin V. Bartenev From eirikur at nilsson.is Wed Jan 15 19:30:20 2014 From: eirikur at nilsson.is (=?ISO-8859-1?Q?Eir=EDkur_Nilsson?=) Date: Wed, 15 Jan 2014 19:30:20 +0000 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: <20140115161523.GU1835@mdounin.ru> References: <20140114225331.GO1835@mdounin.ru> <20140115161523.GU1835@mdounin.ru> Message-ID: Hey! On Wed, Jan 15, 2014 at 4:15 PM, Maxim Dounin wrote: > > > It might be helpfull to see debug log and a tcpdump. See also > http://wiki.nginx.org/Debugging for hints. Debug log: http://cl.ly/142F2s2M0b2S tcpdump: http://cl.ly/2K3D2F1X0t0n (only contains traffic between iOS and nginx) This reproduction has nginx 1.5.8 running with SSL on port 4443 using /dev/poll. It's running on a new-ish smartmachine instance from Joyent. it gets two websocket connections: * The first at 18:43:52 is a new SSL session and works correctly, with traffic visible in tcpdump and debug log. * Second at 18:45:30 reuses the other SSL session but doesn't work, traffic can be seen in tcpdump but not in debug log. I find it very weird that there isn't a single debug message from nginx after it switches protocols for the second connection until I stop the nginx after the test is finished. I'm no closer. Thanks so much for your interest. - Eirikur -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Jan 16 04:27:52 2014 From: nginx-forum at nginx.us (e.schreiber) Date: Wed, 15 Jan 2014 23:27:52 -0500 Subject: Is it possible to rewrite links on a webpage into an other format? Message-ID: Hello Everyone! Is it possible to rewrite links on a webpage with the format "domain/file" into the format "file.domain"? Please can anyone help? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246497,246497#msg-246497 From nginx-forum at nginx.us Thu Jan 16 04:58:16 2014 From: nginx-forum at nginx.us (e.schreiber) Date: Wed, 15 Jan 2014 23:58:16 -0500 Subject: Is it possible to rewrite links on a webpage into an other format? In-Reply-To: References: Message-ID: Sorry, no answer necessary, I not will use such something. Many Greetings Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246497,246498#msg-246498 From nginx-forum at nginx.us Thu Jan 16 07:04:45 2014 From: nginx-forum at nginx.us (microwish) Date: Thu, 16 Jan 2014 02:04:45 -0500 Subject: question on some simple codes in ngx_buf.c Message-ID: Hello there, code snippet in the definition of ngx_chain_add_copy in ngx_buf.c: ll = chain; for (cl = *chain; cl; cl = cl->next) { ll = &cl->next; } Why is ll assigned repeatedly? I'm sorry for failed thinking out any necessity. And I modified the above as the following. Is it OK? if (*chain) { for (cl = *chain; cl->next; cl = cl->next) { /* void */ } ll = &cl->next; } else { ll = chain; } Thank you very much. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246499,246499#msg-246499 From nginx-forum at nginx.us Thu Jan 16 07:47:30 2014 From: nginx-forum at nginx.us (MasterMind) Date: Thu, 16 Jan 2014 02:47:30 -0500 Subject: Images Aren't Displaying When Perl Interpreter Is Enabled Message-ID: <3a0ab7314d3bbb656dac06dad67fff50.NginxMailingListEnglish@forum.nginx.org> I have awstats set up and working with Nginx and perl but all images return a 404 error. The virtual host config is identical to other websites where images work fine except for the added part for perl. I think i know what's happening but i dont know how to fix it; images are being sent to the perl interpreter instead of Nginx. Here's my config: server { listen 1.2.3.4:80; server_name stats.example.com; rewrite ^ https://$server_name$request_uri? permanent; } server { listen 1.2.3.4:443; server_name stats.example.com; access_log /path/to/logs/stats/access.log; error_log /path/to/logs/stats/error.log; index awstats.pl index.html index.htm; client_max_body_size 40M; ssl on; ssl_certificate /etc/nginx/ssl/ssl.crt; ssl_certificate_key /etc/nginx/ssl/private.key; location / { root /path/to/the/awstats/wwwr; index index.html index.htm; try_files $uri $uri/ /index.html?$uri&$args; auth_basic "Restricted"; auth_basic_user_file /path/to/the/awstats/htpasswd; } # Block Image Hotlinking location /icon/ { valid_referers none blocked stats.example.com; if ($invalid_referer) { return 403; } } # Dynamic stats. location ~ \.pl$ { gzip off; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:8999; fastcgi_index index.pl; fastcgi_param SCRIPT_FILENAME /path/to/the/awstats/wwwroot/$fastcgi_script_name; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246500,246500#msg-246500 From contact at jpluscplusm.com Thu Jan 16 09:11:33 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Thu, 16 Jan 2014 09:11:33 +0000 Subject: Images Aren't Displaying When Perl Interpreter Is Enabled In-Reply-To: <3a0ab7314d3bbb656dac06dad67fff50.NginxMailingListEnglish@forum.nginx.org> References: <3a0ab7314d3bbb656dac06dad67fff50.NginxMailingListEnglish@forum.nginx.org> Message-ID: At first, pre-coffee glance, I suspect people will be better placed to help you if you provide some examples (redacted if necessary) of URIs that work and URIs that don't work ... From renenglish at gmail.com Thu Jan 16 09:19:59 2014 From: renenglish at gmail.com (=?GB2312?B?yM7Twsir?=) Date: Thu, 16 Jan 2014 17:19:59 +0800 Subject: Does it possible to submit duplicated request with the proxy_next_upstream on In-Reply-To: References: Message-ID: <54127272-C95E-4CC8-BEEA-D95DD2F2580C@gmail.com> Sorry I can?t get it . If host A has added the counter and failed to response, the request would be failed over to host B with successful response, so the counter would be added twines. Wouldn?t it ? ? 2014?1?14????5:48?itpp2012 ??? > Unless the request is getting que'd while there is a short wait for host A > to get online AND fail-over is also happening, its not likely to be added > twice. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245979,246388#msg-246388 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Thu Jan 16 12:00:03 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 16 Jan 2014 07:00:03 -0500 Subject: Does it possible to submit duplicated request with the proxy_next_upstream on In-Reply-To: <54127272-C95E-4CC8-BEEA-D95DD2F2580C@gmail.com> References: <54127272-C95E-4CC8-BEEA-D95DD2F2580C@gmail.com> Message-ID: <123db0bd27237e4ffcac80d3e8b76f57.NginxMailingListEnglish@forum.nginx.org> renenglish Wrote: ------------------------------------------------------- > Sorry I can?t get it . > > If host A has added the counter and failed to response, the request > would be failed over to host B with successful response, so the > counter would be added twines. Wouldn?t it ? Then a condition must occur where host A fails right after processing the request, they usually fail before accepting a request, it also depends on the timeout for a request. And it also depends what nginx considers a fail, nginx might not fail a host when it does not return from accepting a request. You will have to simulate this to find out the conditions for a fail. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,245979,246505#msg-246505 From david at watzke.cz Thu Jan 16 12:33:17 2014 From: david at watzke.cz (David Watzke) Date: Thu, 16 Jan 2014 13:33:17 +0100 Subject: proxy_cache purge details Message-ID: <52D7D18D.3050404@watzke.cz> Hello, we've got a problem with the proxy_cache feature in nginx. To be more precise, the problem occurs when the cache loader kicks in and starts deleting the expired files that are stored on a LVM-striped (non-raid) ext4 partition across six huge SSD disks. The purge (sometimes?) takes ages and it completely kills the reads from the partition. That's hardly an nginx issue, but it is why we would like to know if there's a posibility to force the cache purge so that small amounts of files get deleted more often rather than a lot of files at once getting deleted less often. Also, it would help us to know just exactly how (and where) does the nginx store the last-access-time information for each file (for the 'inactive' feature in proxy_cache_path directive), if the atime feature is off for performance reasons. I'm guessing that it needs to store this so that it knows when to delete the files. It's quite difficult for us to find it in the sources, so if you could point us in the right direction, it would be awesome! Thanks, David From nginx-forum at nginx.us Thu Jan 16 14:02:36 2014 From: nginx-forum at nginx.us (rge3) Date: Thu, 16 Jan 2014 09:02:36 -0500 Subject: A 503 page gets written to my proxy cache, overwriting the 200 Message-ID: <0a0aaf2846a70109b3f234aeec98a8bd.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to use the proxy cache to store regular pages (200) from my web server so that when the web server goes into maintenance mode and starts returning 503 nginx can still serve the good page out of cache. It works great for a few minutes but then at some point (5 to 10 minutes in) nginx will overwrite the good 200 page in the cache with the bad 503 page and then start handing out the 503. Looking at my config I don't understand how a 503 could ever get written to cache but it is. And the 200 page was brand new (written 10 minutes before) so it shouldn't be the "inactive" time on the proxy_cache_path setting causing nginx to delete the good file. Can anyone tell me what I'm missing? Here are the relevant pieces of my config: proxy_cache_path /var/www/cache levels=1:2 keys_zone=my-cache:500m max_size=3000m inactive=120h; proxy_temp_path /var/www/cache/tmp; proxy_cache_key "$scheme$host$request_uri"; map $http_cookie $GotSessionCookie { default ""; "~(?P\bSESS[^;=]+=[^;=]+)" $sessionid; } server { listen 80; server_name _; proxy_cache my-cache; location / { proxy_pass http://production; proxy_cache_valid 200 301 302 30m; proxy_cache_valid 404 1m; } # don't cache pages with php's session cookie proxy_no_cache $cookie_$GotSessionCookie; # bypass the cache if we get a X-NoCache header proxy_cache_bypass $http_nocache $cookie_$GotSessionCookie; proxy_cache_use_stale http_500 http_503 error timeout invalid_header updating; } I can't imagine how a 503 would ever get cached given those proxy_cache_valid lines but maybe I don't understand something. Thanks for any ideas! -Rick Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246507,246507#msg-246507 From mdounin at mdounin.ru Thu Jan 16 15:58:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Jan 2014 19:58:37 +0400 Subject: Fast CGI module "multipart/mixed" problem (it only accepts 1 "Content-Type" header) In-Reply-To: <64b2d2a2fafe533451e775d4f44f0e42.NginxMailingListEnglish@forum.nginx.org> References: <64b2d2a2fafe533451e775d4f44f0e42.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140116155837.GD1835@mdounin.ru> Hello! On Wed, Jan 15, 2014 at 11:56:30AM -0500, ariesone wrote: > It seems, when my FCGI server responds to NGINX with "Status: 200 > OK\r\nContent-Type: > multipart/mixed;boundary=whatever\r\n\r\nboundary=whatever\r\nContent-Type: > image/jpeg\r\n\r\n" > > The FASTCGI module is taking the 2nd "Content-Type" only and uses it in the > initial response with the 200. > > The client gets confused when it sees the boundaries and data later. > If I remove the subsequent "Content-Type:" headers, the initial one with the > boundary indicator is sent; however, the client now does not know how to > interpret the . The second Content-Type header is expected to be in response body from nginx point of view (i.e., after double CRLF which marks end of headers), and nginx shouldn't try to interpret it anyhow. While string you provided looks correct at a first glance, it may be incorrect depending on language and OS used (e.g., in Perl on Windows it will likely produce wrong results due to "\n" expanded to CRLF). Symptoms suggest this is likely the cause and you should focus on what your app actually returns. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 16 16:06:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Jan 2014 20:06:45 +0400 Subject: A 503 page gets written to my proxy cache, overwriting the 200 In-Reply-To: <0a0aaf2846a70109b3f234aeec98a8bd.NginxMailingListEnglish@forum.nginx.org> References: <0a0aaf2846a70109b3f234aeec98a8bd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140116160645.GE1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 09:02:36AM -0500, rge3 wrote: > Hi, > > I'm trying to use the proxy cache to store regular pages (200) from my web > server so that when the web server goes into maintenance mode and starts > returning 503 nginx can still serve the good page out of cache. It works > great for a few minutes but then at some point (5 to 10 minutes in) nginx > will overwrite the good 200 page in the cache with the bad 503 page and then > start handing out the 503. Looking at my config I don't understand how a > 503 could ever get written to cache but it is. And the 200 page was brand > new (written 10 minutes before) so it shouldn't be the "inactive" time on > the proxy_cache_path setting causing nginx to delete the good file. Can > anyone tell me what I'm missing? Here are the relevant pieces of my > config: [...] > # don't cache pages with php's session cookie > proxy_no_cache $cookie_$GotSessionCookie; > > # bypass the cache if we get a X-NoCache header > proxy_cache_bypass $http_nocache $cookie_$GotSessionCookie; > > proxy_cache_use_stale http_500 http_503 error timeout > invalid_header updating; > } > > I can't imagine how a 503 would ever get cached given those > proxy_cache_valid lines but maybe I don't understand something. Thanks for > any ideas! An exiting cache can be bypassed due to proxy_cache_bypass in your config, and 503 response can be cached if it contains Cache-Control and/or Expires which allow caching. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 16 16:50:26 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Jan 2014 20:50:26 +0400 Subject: question on some simple codes in ngx_buf.c In-Reply-To: References: Message-ID: <20140116165026.GF1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 02:04:45AM -0500, microwish wrote: > Hello there, > > code snippet in the definition of ngx_chain_add_copy in ngx_buf.c: > > > ll = chain; > > > > for (cl = *chain; cl; cl = cl->next) { > ll = &cl->next; > } > > > Why is ll assigned repeatedly? I'm sorry for failed thinking out any > necessity. > > And I modified the above as the following. Is it OK? > > > if (*chain) { > for (cl = *chain; cl->next; cl = cl->next) { /* void */ } > ll = &cl->next; > } else { > ll = chain; > } > > > Thank you very much. The code snippets look equivalent from logical point of view. >From performance point of view - they are mostly equivalent too, as cl->next address anyway needs to be loaded on each cycle iteration and will be available in a register, so assignment is essentially a nop. The code currently used is shorter though, and easier to read. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 16 17:02:31 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 16 Jan 2014 21:02:31 +0400 Subject: proxy_cache purge details In-Reply-To: <52D7D18D.3050404@watzke.cz> References: <52D7D18D.3050404@watzke.cz> Message-ID: <20140116170231.GG1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 01:33:17PM +0100, David Watzke wrote: > Hello, > > we've got a problem with the proxy_cache feature in nginx. To be more > precise, the problem occurs when the cache loader kicks in and starts > deleting the expired files that are stored on a LVM-striped (non-raid) ext4 > partition across six huge SSD disks. The purge (sometimes?) takes ages and > it completely kills the reads from the partition. > > That's hardly an nginx issue, but it is why we would like to know if there's > a posibility to force the cache purge so that small amounts of files get > deleted more often rather than a lot of files at once getting deleted less > often. > > Also, it would help us to know just exactly how (and where) does the nginx > store the last-access-time information for each file (for the 'inactive' > feature in proxy_cache_path directive), if the atime feature is off for > performance reasons. I'm guessing that it needs to store this so that it > knows when to delete the files. It's quite difficult for us to find it in > the sources, so if you could point us in the right direction, it would be > awesome! Last access information, as used to track cache items inactivity, is stored in cache memory zone only. Or, more precisely, derived "expire" value is stored in nodes. If cache is reloaded from disk, last access is assumed to be at the time of loading cache item by the loader. It is strange that removing inactive cache items causes problems (there shouldn't be many at any given moment, event after reload, because loading is limited by loader_files, loader_sleep, loader_threshold parameters of the proxy_cache_path directive). But if it is, it's mostly the ngx_http_file_cache_expire() function you should look into. -- Maxim Dounin http://nginx.org/ From multiformeingegno at gmail.com Thu Jan 16 19:19:30 2014 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Thu, 16 Jan 2014 20:19:30 +0100 Subject: fastcgi_cache_path empty In-Reply-To: References: Message-ID: No one? :) 2014/1/13 Lorenzo Raffio > I wanted to try fastcgi_cache on my nginx 1.5.8 as shown here > http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php > > In nginx conf, http section, I added: > > fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m > max_size=1000m inactive=60m; > > In server section: > set $cache_uri $request_uri; > > # POST requests and urls with a query string should always go to PHP > if ($request_method = POST) { > set $cache_uri 'null cache'; > } > if ($query_string != "") { > set $cache_uri 'null cache'; > } > > # Don't cache uris containing the following segments > if ($request_uri ~* > "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") > { > set $cache_uri 'null cache'; > } > > # Don't use the cache for logged in users or recent commenters > if ($http_cookie ~* > "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") { > set $cache_uri 'null cache'; > } > > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > include fastcgi.conf; > fastcgi_pass unix:/var/run/php5-fpm.sock; > > > ## > # Fastcgi cache > ## > set $skip_cache 1; > if ($cache_uri != "null cache") { > add_header X-Cache-Debug "$cache_uri $cookie_nocache > $arg_nocache$arg_comment $http_pragma $http_authorization"; > set $skip_cache 0; > } > fastcgi_cache_bypass $skip_cache; > fastcgi_cache_key > $scheme$host$request_uri$request_method; > fastcgi_cache_valid any 8m; > fastcgi_cache_bypass $http_pragma; > fastcgi_cache_use_stale updating error timeout > invalid_header http_500; > > } > > I chowned /var/cache/nginx to www-data user (and group) and chmodded it to > 775. > I restarted nginx but the folder is always empty. Is it normal? How can I > test if fastcgi_cache is working? > > Thanks in advance > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Thu Jan 16 19:20:57 2014 From: lists at ruby-forum.com (Gabriel Arrais) Date: Thu, 16 Jan 2014 20:20:57 +0100 Subject: HttpUseridModule In-Reply-To: References: Message-ID: Flavio, I'm trying to do the same as you. Have you found any solution? I was thinking in change the expiration by myself setting the cookie again, but I think that this is ugly =/ Thank you, Gabriel Arrais -- Posted via http://www.ruby-forum.com/. From nginx-forum at nginx.us Thu Jan 16 19:50:41 2014 From: nginx-forum at nginx.us (rge3) Date: Thu, 16 Jan 2014 14:50:41 -0500 Subject: A 503 page gets written to my proxy cache, overwriting the 200 In-Reply-To: <20140116160645.GE1835@mdounin.ru> References: <20140116160645.GE1835@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > An exiting cache can be bypassed due to proxy_cache_bypass in your > config, and 503 response can be cached if it contains > Cache-Control and/or Expires which allow caching. Oh, I hadn't thought of that part about the bypass. Okay, that makes sense. But what about the Cache-Control/Expires causing it to cache the 503... that can still happen even if my "proxy_cache_valid" line doesn't list a 503? -Rick Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246507,246517#msg-246517 From steve at greengecko.co.nz Thu Jan 16 20:08:43 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Fri, 17 Jan 2014 09:08:43 +1300 Subject: fastcgi_cache_path empty In-Reply-To: References: Message-ID: <1389902923.10678.126.camel@steve-new> I think ( it's hard to read ) that you're not telling the site which http cache to use ( fastcgi_cache microcache; ). On my systems ( I'm looking at an Amazon one so RH based ), /var/cache/nginx is already in use. I set up /var/cache/nginx_fastcgi instead, which is not empty. (1GB is way too big btw). I created a file /etc/nginx/microcache, containing --8<-- # Setup var defaults set $no_cache ""; # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } # Drop no cache cookie if need be # (for some reason, add_header fails if included in prior if-block) if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/"; add_header X-Microcachable "0"; } # Bypass cache if no-cache cookie is set if ($http_cookie ~* "_mcnc") { set $no_cache "1"; } # Bypass cache if flag is set fastcgi_no_cache $no_cache; fastcgi_cache_bypass $no_cache; fastcgi_cache microcache; fastcgi_cache_key "$scheme$request_method$host$request_uri $http_if_modified_since$http_if_none_match"; fastcgi_cache_valid 404 30m; fastcgi_cache_valid 200 10s; fastcgi_max_temp_file_size 1M; fastcgi_cache_use_stale updating; fastcgi_pass_header Set-Cookie; fastcgi_pass_header Cookie; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; --8<-- (this only stores for 10 seconds BTW, and the cache key is designed for a magento site). And included it in the location blocks that would use it, ie location ~* \.php$ This way the config becomes much more readable. hth, Steve On Thu, 2014-01-16 at 20:19 +0100, Lorenzo Raffio wrote: > No one? :) > > > > 2014/1/13 Lorenzo Raffio > I wanted to try fastcgi_cache on my nginx 1.5.8 as shown here > http://seravo.fi/2013/optimizing-web-server-performance-with-nginx-and-php > > > In nginx conf, http section, I added: > > fastcgi_cache_path /var/cache/nginx levels=1:2 > keys_zone=microcache:10m max_size=1000m inactive=60m; > > > In server section: > set $cache_uri $request_uri; > > # POST requests and urls with a query string should always > go to PHP > if ($request_method = POST) { > set $cache_uri 'null cache'; > } > if ($query_string != "") { > set $cache_uri 'null cache'; > } > > # Don't cache uris containing the following segments > if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app| > cron|login|register|mail).php|wp-.*.php|/feed/|index.php| > wp-comments-popup.php|wp-links-opml.php|wp-locations.php| > sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") { > set $cache_uri 'null cache'; > } > > # Don't use the cache for logged in users or recent > commenters > if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+| > wp-postpass|wordpress_logged_in") { > set $cache_uri 'null cache'; > } > > location ~ \.php$ { > try_files $uri =404; > fastcgi_split_path_info ^(.+\.php)(/.+)$; > include fastcgi.conf; > fastcgi_pass unix:/var/run/php5-fpm.sock; > > > ## > # Fastcgi cache > ## > set $skip_cache 1; > if ($cache_uri != "null cache") { > add_header X-Cache-Debug "$cache_uri > $cookie_nocache $arg_nocache$arg_comment $http_pragma > $http_authorization"; > set $skip_cache 0; > } > fastcgi_cache_bypass $skip_cache; > fastcgi_cache_key $scheme$host$request_uri > $request_method; > fastcgi_cache_valid any 8m; > fastcgi_cache_bypass $http_pragma; > fastcgi_cache_use_stale updating error timeout > invalid_header http_500; > > } > > I chowned /var/cache/nginx to www-data user (and group) and > chmodded it to 775. > > I restarted nginx but the folder is always empty. Is it > normal? How can I test if fastcgi_cache is working? > > Thanks in advance > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From lists at ruby-forum.com Thu Jan 16 20:41:25 2014 From: lists at ruby-forum.com (Gabriel Arrais) Date: Thu, 16 Jan 2014 21:41:25 +0100 Subject: Errors using HttpUseridModule Message-ID: Hi guys, I'm using the HttpUseridModule for storing session ids of our users. We're receiving a lot of errors lately concerning the format of the userid cookie. Basically there are two types of errors: [error] 1581#0: *20523638 client sent invalid userid cookie "sid="Cvwk2lLYLvhh3gYtDscPAg=="; $Path="/"" while reading response header from upstream, client: xx.xx.xx.xx, server: , request: "GET /xxxx/xxxx HTTP/1.1", upstream: "http://xxxxxxxxxxxxxx", host: "xxxxxxxxxxxxxxxxx" and [error] 1582#0: *17018740 client sent too short userid cookie "sid=Cvwkcept: */*", client: xx.xx.xx.xx, server: xxxxxxxxx, request: "GET /xxxxxx HTTP/1.0", host: "xxxxxxx", referrer: "http://xxxxxxxxxxx" And I'm using this configuration for userid userid on; userid_name sid; userid_expires 31d; userid_path /; Can you help me? Thank you in advance, Gabriel Arrais -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Thu Jan 16 21:37:01 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 16 Jan 2014 21:37:01 +0000 Subject: Errors using HttpUseridModule In-Reply-To: References: Message-ID: <20140116213701.GQ19804@craic.sysops.org> On Thu, Jan 16, 2014 at 09:41:25PM +0100, Gabriel Arrais wrote: Hi there, > We're receiving a lot of errors lately concerning the format of the > userid cookie. Is there any pattern to the errors that you can see? Are they all coming from a particular browser version, for example? > [error] 1581#0: *20523638 client sent invalid userid cookie > "sid="Cvwk2lLYLvhh3gYtDscPAg=="; $Path="/"" while reading response It looks to me like nginx does not expect the quotes after the =. They are unnecessary, but I think that they are possibly allowed by rfc 2109. If they are allowed, then nginx should probably be changed to accept them. If they are not, then the client should probably be changed not to send them. > [error] 1582#0: *17018740 client sent too short userid cookie > "sid=Cvwkcept: */*", client: xx.xx.xx.xx, server: xxxxxxxxx, request: It looks to me like something -- client, server, or just the display -- has gotten confused and mixed together the Cookie: header with probably an Accept: header. I don't think there's much that can be done about this, apart from try to identify the culprit and see if it is something repeatable and fixable. If it is an identifiable problem in nginx, then I'm sure there'll be interest in fixing it. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu Jan 16 22:01:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 02:01:11 +0400 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: References: <20140114225331.GO1835@mdounin.ru> <20140115161523.GU1835@mdounin.ru> Message-ID: <20140116220111.GH1835@mdounin.ru> Hello! On Wed, Jan 15, 2014 at 07:30:20PM +0000, Eir?kur Nilsson wrote: > Hey! > > On Wed, Jan 15, 2014 at 4:15 PM, Maxim Dounin wrote: > > > > > > It might be helpfull to see debug log and a tcpdump. See also > > http://wiki.nginx.org/Debugging for hints. > > > Debug log: http://cl.ly/142F2s2M0b2S > tcpdump: http://cl.ly/2K3D2F1X0t0n (only contains traffic between iOS and > nginx) > > This reproduction has nginx 1.5.8 running with SSL on port 4443 using > /dev/poll. It's running on a new-ish smartmachine instance from Joyent. it > gets two websocket connections: > > * The first at 18:43:52 is a new SSL session and works correctly, with > traffic visible in tcpdump and debug log. > * Second at 18:45:30 reuses the other SSL session but doesn't work, traffic > can be seen in tcpdump but not in debug log. > > I find it very weird that there isn't a single debug message from nginx > after it switches protocols for the second connection until I stop the > nginx after the test is finished. I'm no closer. Please try the following patch: --- a/src/http/ngx_http_upstream.c +++ b/src/http/ngx_http_upstream.c @@ -2565,6 +2565,16 @@ ngx_http_upstream_upgrade(ngx_http_reque { ngx_http_upstream_process_upgraded(r, 0, 1); } + + if (ngx_handle_read_event(c->read, 0) != NGX_OK) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return; + } + + if (ngx_handle_read_event(u->peer.connection->read, 0) != NGX_OK) { + ngx_http_upstream_finalize_request(r, u, NGX_ERROR); + return; + } } -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 16 22:08:16 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 02:08:16 +0400 Subject: A 503 page gets written to my proxy cache, overwriting the 200 In-Reply-To: References: <20140116160645.GE1835@mdounin.ru> Message-ID: <20140116220816.GI1835@mdounin.ru> Hello! On Thu, Jan 16, 2014 at 02:50:41PM -0500, rge3 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > > An exiting cache can be bypassed due to proxy_cache_bypass in your > > config, and 503 response can be cached if it contains > > Cache-Control and/or Expires which allow caching. > > Oh, I hadn't thought of that part about the bypass. Okay, that makes > sense. > But what about the Cache-Control/Expires causing it to cache the 503... > that can > still happen even if my "proxy_cache_valid" line doesn't list a 503? The "proxy_cache_valid" directives are used if there are no Cache-Control/Expires to allow caching (or they are ignored with proxy_ignore_headers). That is, with "proxy_cache_valid" you can cache something which isn't normally cached, but they don't prevent caching of other responses. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Thu Jan 16 22:13:00 2014 From: lists at ruby-forum.com (Gabriel Arrais) Date: Thu, 16 Jan 2014 23:13:00 +0100 Subject: Errors using HttpUseridModule In-Reply-To: References: Message-ID: <3bdaaea8b02770cd1f640781fd38e44a@ruby-forum.com> Thank you for the answer Francis! >Is there any pattern to the errors that you can see? > >Are they all coming from a particular browser version, for example? For now, I can't see any pattern and the error logs (how they are formatted now) does not help me, they don't pass the user-agent data. >They are unnecessary, but I think that they are possibly allowed by >rfc 2109. Yes, I think if that is the case, nginx could be more permissive... >It looks to me like something -- client, server, or just the display -- >has gotten confused and mixed together the Cookie: header with probably >an Accept: header. I don't think there's much that can be done about >this, apart from try to identify the culprit and see if it is something >repeatable and fixable. I see that in this case maybe de client corrupted de cookie data unfortunately.. Anyway I will leave here another log entry with the same error but a different value in the cookie. 2014/01/15 23:46:50 [error] 1577#0: *18789665 client sent too short userid cookie "sid=", client: xx.xx.xx.xx, server: xxxxxxx, request: "GET /xxxxx HTTP/1.1", host: "xxxxxxx", referrer: "http://xxxxxxxxxxxxx" -- Posted via http://www.ruby-forum.com/. From lists at ruby-forum.com Thu Jan 16 22:15:55 2014 From: lists at ruby-forum.com (Gabriel Arrais) Date: Thu, 16 Jan 2014 23:15:55 +0100 Subject: Errors using HttpUseridModule In-Reply-To: <20140116213701.GQ19804@craic.sysops.org> References: <20140116213701.GQ19804@craic.sysops.org> Message-ID: <3b39e34909310915a3af6dd0cea80691@ruby-forum.com> Thank you for the answer Francis! Francis Daly wrote in post #1133390: > On Thu, Jan 16, 2014 at 09:41:25PM +0100, Gabriel Arrais wrote: > > Hi there, > >> We're receiving a lot of errors lately concerning the format of the >> userid cookie. > > Is there any pattern to the errors that you can see? > > Are they all coming from a particular browser version, for example? For now, I can't see any pattern and the error logs (how they are formatted now) does not help me, they don't pass the user-agent data. > >> [error] 1581#0: *20523638 client sent invalid userid cookie >> "sid="Cvwk2lLYLvhh3gYtDscPAg=="; $Path="/"" while reading response > > It looks to me like nginx does not expect the quotes after the =. > > They are unnecessary, but I think that they are possibly allowed by > rfc 2109. > > If they are allowed, then nginx should probably be changed to accept > them. If they are not, then the client should probably be changed not > to send them. > Yes, I think if that is the case, nginx could be more permissive... >> [error] 1582#0: *17018740 client sent too short userid cookie >> "sid=Cvwkcept: */*", client: xx.xx.xx.xx, server: xxxxxxxxx, request: > > It looks to me like something -- client, server, or just the display -- > has gotten confused and mixed together the Cookie: header with probably > an Accept: header. I don't think there's much that can be done about > this, apart from try to identify the culprit and see if it is something > repeatable and fixable. I see that in this case maybe de client corrupted de cookie data unfortunately.. Anyway I will leave here another log entry with the same error but a different value in the cookie. 2014/01/15 23:46:50 [error] 1577#0: *18789665 client sent too short userid cookie "sid=", client: xx.xx.xx.xx, server: xxxxxxx, request: "GET /xxxxx HTTP/1.1", host: "xxxxxxx", referrer: "http://xxxxxxxxxxxxx" > > If it is an identifiable problem in nginx, then I'm sure there'll be > interest in fixing it. > > Cheers, > > f > -- > Francis Daly francis at daoine.org -- Posted via http://www.ruby-forum.com/. From francis at daoine.org Thu Jan 16 23:08:45 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 16 Jan 2014 23:08:45 +0000 Subject: Errors using HttpUseridModule In-Reply-To: <3b39e34909310915a3af6dd0cea80691@ruby-forum.com> References: <20140116213701.GQ19804@craic.sysops.org> <3b39e34909310915a3af6dd0cea80691@ruby-forum.com> Message-ID: <20140116230845.GR19804@craic.sysops.org> On Thu, Jan 16, 2014 at 11:15:55PM +0100, Gabriel Arrais wrote: > Francis Daly wrote in post #1133390: Hi there, > > Are they all coming from a particular browser version, for example? > > For now, I can't see any pattern and the error logs (how they are > formatted now) does not help me, they don't pass the user-agent data. Error logs usually don't include that information. Access logs frequently do, and it is often possible to tie log lines together based on timestamp and url. If you have the information, it may be a useful data point to see whether the problems are associated with only one specific old version of a browser, where newer versions of the same thing browser no problem, for example. > > If they are allowed, then nginx should probably be changed to accept > > them. If they are not, then the client should probably be changed not > > to send them. > > Yes, I think if that is the case, nginx could be more permissive... Reading more closely, I still can't tell whether the surrounding quotes are actively forbidden; but they are certainly advised against unless the cookie was flagged as new, which I think nginx default ones are not. I would tend to invite the client to become fixed, or invite the user to change clients. But that's because it costs me nothing to do so. > Anyway I will leave here another log entry with the same > error but a different value in the cookie. > > 2014/01/15 23:46:50 [error] 1577#0: *18789665 client sent too short > userid cookie "sid=", client: xx.xx.xx.xx, server: xxxxxxx, request: There the cookie named "sid" has no value. I suspect more information will be needed to know what is happening here -- was it somehow set blank by something else running through nginx? Or is it an attempt by the client to see what happens when they manually change things? Good luck with it, f -- Francis Daly francis at daoine.org From multiformeingegno at gmail.com Fri Jan 17 00:13:32 2014 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Fri, 17 Jan 2014 01:13:32 +0100 Subject: fastcgi_cache_path empty Message-ID: Thanks Steve for the reply!! Ok, so tell me if I understood correcty. You just have in your "vhost" server block this: fastcgi_cache_path /var/cache/nginx levels=1:2 keys_zone=microcache:10m max_size=1000m inactive=60m; and then you have a file /etc/nginx/microcache with # Setup var defaults set $no_cache ""; # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie if ($request_method !~ ^(GET|HEAD)$) { set $no_cache "1"; } # Drop no cache cookie if need be # (for some reason, add_header fails if included in prior if-block) if ($no_cache = "1") { add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/"; add_header X-Microcachable "0"; } # Bypass cache if no-cache cookie is set if ($http_cookie ~* "_mcnc") { set $no_cache "1"; } # Bypass cache if flag is set fastcgi_no_cache $no_cache; fastcgi_cache_bypass $no_cache; fastcgi_cache microcache; fastcgi_cache_key "$scheme$request_method$host$request_uri $http_if_modified_since$http_if_none_match"; fastcgi_cache_valid 404 30m; fastcgi_cache_valid 200 10s; fastcgi_max_temp_file_size 1M; fastcgi_cache_use_stale updating; fastcgi_pass_header Set-Cookie; fastcgi_pass_header Cookie; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; .. correct? -------------- next part -------------- An HTML attachment was scrubbed... URL: From renenglish at gmail.com Fri Jan 17 03:24:50 2014 From: renenglish at gmail.com (Shafreeck Sea) Date: Fri, 17 Jan 2014 11:24:50 +0800 Subject: Does it possible to submit duplicated request with the proxy_next_upstream on In-Reply-To: <123db0bd27237e4ffcac80d3e8b76f57.NginxMailingListEnglish@forum.nginx.org> References: <54127272-C95E-4CC8-BEEA-D95DD2F2580C@gmail.com> <123db0bd27237e4ffcac80d3e8b76f57.NginxMailingListEnglish@forum.nginx.org> Message-ID: OK. Thank you very much . I will do an experiment to find out this 2014/1/16 itpp2012 > renenglish Wrote: > ------------------------------------------------------- > > Sorry I can?t get it . > > > > If host A has added the counter and failed to response, the request > > would be failed over to host B with successful response, so the > > counter would be added twines. Wouldn?t it ? > > Then a condition must occur where host A fails right after processing the > request, they usually fail before accepting a request, it also depends on > the timeout for a request. And it also depends what nginx considers a fail, > nginx might not fail a host when it does not return from accepting a > request. > > You will have to simulate this to find out the conditions for a fail. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,245979,246505#msg-246505 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 17 05:06:29 2014 From: nginx-forum at nginx.us (jamesandrewyoung) Date: Fri, 17 Jan 2014 00:06:29 -0500 Subject: Nginx configuration needed to dynamically rewrite a subdirectory to a subdomain Message-ID: How to configure NGINX daemon so that blog.xxx.com becomes xxx.com/blog Can anyone help me? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246545,246545#msg-246545 From nginx-forum at nginx.us Fri Jan 17 09:11:04 2014 From: nginx-forum at nginx.us (renenglish) Date: Fri, 17 Jan 2014 04:11:04 -0500 Subject: Add proxy_next_upstream_action to distinguish diffrient network actions In-Reply-To: References: Message-ID: Anyone will be interested in this ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246461,246552#msg-246552 From eirikur at nilsson.is Fri Jan 17 12:15:25 2014 From: eirikur at nilsson.is (=?ISO-8859-1?Q?Eir=EDkur_Nilsson?=) Date: Fri, 17 Jan 2014 12:15:25 +0000 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: <20140116220111.GH1835@mdounin.ru> References: <20140114225331.GO1835@mdounin.ru> <20140115161523.GU1835@mdounin.ru> <20140116220111.GH1835@mdounin.ru> Message-ID: Hey Maxim On Thu, Jan 16, 2014 at 10:01 PM, Maxim Dounin wrote: > > > Please try the following patch: Wow, everything seems to work correctly with this patch, session reuse and everything. I thought this patch would close the new connection, causing a fresh reconnect, but I don't see that happening in the capture. Is nginx holding on to - and trying to use - a handle to the old connection, which needs cleanup? Do you think this patch might get applied into the mainline? Best regards, Eirikur -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 17 12:40:40 2014 From: nginx-forum at nginx.us (rge3) Date: Fri, 17 Jan 2014 07:40:40 -0500 Subject: A 503 page gets written to my proxy cache, overwriting the 200 In-Reply-To: <20140116220816.GI1835@mdounin.ru> References: <20140116220816.GI1835@mdounin.ru> Message-ID: Maxim Dounin Wrote: > The "proxy_cache_valid" directives are used if there are no > Cache-Control/Expires to allow caching (or they are ignored with > proxy_ignore_headers). That is, with "proxy_cache_valid" you can > cache something which isn't normally cached, but they don't > prevent caching of other responses. That was my missing piece. Thanks so much!! Now it makes sense! -R Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246507,246558#msg-246558 From nginx-list at puzzled.xs4all.nl Fri Jan 17 13:57:09 2014 From: nginx-list at puzzled.xs4all.nl (Patrick Lists) Date: Fri, 17 Jan 2014 14:57:09 +0100 Subject: Nginx configuration needed to dynamically rewrite a subdirectory to a subdomain In-Reply-To: References: Message-ID: <52D936B5.5090200@puzzled.xs4all.nl> On 17-01-14 06:06, jamesandrewyoung wrote: > How to configure NGINX daemon so that blog.xxx.com becomes xxx.com/blog > > Can anyone help me? http://lmgtfy.com/?q=nginx+rewrite Regards, Patrick From lists at ruby-forum.com Fri Jan 17 14:23:07 2014 From: lists at ruby-forum.com (Amin MG) Date: Fri, 17 Jan 2014 15:23:07 +0100 Subject: Can't start nginx In-Reply-To: <808d0ac45d2eeeabbc7dbf3881810c7b@ruby-forum.com> References: <808d0ac45d2eeeabbc7dbf3881810c7b@ruby-forum.com> Message-ID: <9dac6afc6e6010f6cdd23c5fecae68e6@ruby-forum.com> Hello exuce me for posting again in this old question i have problem for starting nginx i tried [root at server libmemcached-0.48]# sh -x /etc/init.d/nginx stop + . /etc/rc.d/init.d/functions ++ TEXTDOMAIN=initscripts ++ umask 022 ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin ++ export PATH ++ '[' -z '' ']' ++ COLUMNS=80 ++ '[' -z '' ']' +++ /sbin/consoletype ++ CONSOLETYPE=pty ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']' ++ . /etc/profile.d/lang.sh ++ unset LANGSH_SOURCED ++ '[' -z '' ']' ++ '[' -f /etc/sysconfig/init ']' ++ . /etc/sysconfig/init +++ BOOTUP=color +++ RES_COL=60 +++ MOVE_TO_COL='echo -en \033[60G' +++ SETCOLOR_SUCCESS='echo -en \033[0;32m' +++ SETCOLOR_FAILURE='echo -en \033[0;31m' +++ SETCOLOR_WARNING='echo -en \033[0;33m' +++ SETCOLOR_NORMAL='echo -en \033[0;39m' +++ PROMPT=yes +++ AUTOSWAP=no +++ ACTIVE_CONSOLES='/dev/tty[1-6]' +++ SINGLE=/sbin/sushell ++ '[' pty = serial ']' ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d' + . /etc/sysconfig/network ++ NETWORKING=yes ++ HOSTNAME=server.ghoghnooschat.ir ++ GATEWAY=144.76.228.1 + '[' yes = no ']' + nginx=/usr/sbin/nginx ++ basename /usr/sbin/nginx + prog=nginx + NGINX_CONF_FILE=/etc/nginx/nginx.conf + lockfile=/var/lock/subsys/nginx + case "$1" in + rh_status_q + rh_status + stop + echo -n 'Stopping nginx: ' Stopping nginx: + killproc nginx -QUIT + local RC killlevel= base pid pid_file= delay try + RC=0 + delay=3 + try=0 + '[' 2 -eq 0 ']' + '[' nginx = -p ']' + '[' nginx = -d ']' + '[' -n -QUIT ']' + killlevel=-QUIT + base=nginx + __pids_var_run nginx '' + local base=nginx + local pid_file=/var/run/nginx.pid + pid= + '[' -f /var/run/nginx.pid ']' + local line p + '[' '!' -r /var/run/nginx.pid ']' + : + read line + '[' -z 25826 ']' + for p in '$line' + '[' -z '' -a -d /proc/25826 ']' + pid=' 25826' + : + read line + '[' -z '' ']' + break + '[' -n ' 25826' ']' + return 0 + RC=0 + '[' -z ' 25826' ']' + '[' -n ' 25826' ']' + '[' color = verbose -a -z '' ']' + '[' -z -QUIT ']' + checkpid 25826 + local i + for i in '$*' + '[' -d /proc/25826 ']' + return 0 + kill -QUIT 25826 + RC=0 + '[' 0 -eq 0 ']' + success 'nginx -QUIT' + '[' color '!=' verbose -a -z '' ']' + echo_success + '[' color = color ']' + echo -en '\033[60G' + echo -n '[' [+ '[' color = color ']' + echo -en '\033[0;32m' + echo -n ' OK ' OK + '[' color = color ']' + echo -en '\033[0;39m' + echo -n ']' ]+ echo -ne '\r' + return 0 + return 0 + '[' -z -QUIT ']' + return 0 + retval=0 + echo + '[' 0 -eq 0 ']' + rm -f /var/lock/subsys/nginx + return 0 and then : [root at server libmemcached-0.48]# sh -x /etc/init.d/nginx start + . /etc/rc.d/init.d/functions ++ TEXTDOMAIN=initscripts ++ umask 022 ++ PATH=/sbin:/usr/sbin:/bin:/usr/bin ++ export PATH ++ '[' -z '' ']' ++ COLUMNS=80 ++ '[' -z '' ']' +++ /sbin/consoletype ++ CONSOLETYPE=pty ++ '[' -f /etc/sysconfig/i18n -a -z '' -a -z '' ']' ++ . /etc/profile.d/lang.sh ++ unset LANGSH_SOURCED ++ '[' -z '' ']' ++ '[' -f /etc/sysconfig/init ']' ++ . /etc/sysconfig/init +++ BOOTUP=color +++ RES_COL=60 +++ MOVE_TO_COL='echo -en \033[60G' +++ SETCOLOR_SUCCESS='echo -en \033[0;32m' +++ SETCOLOR_FAILURE='echo -en \033[0;31m' +++ SETCOLOR_WARNING='echo -en \033[0;33m' +++ SETCOLOR_NORMAL='echo -en \033[0;39m' +++ PROMPT=yes +++ AUTOSWAP=no +++ ACTIVE_CONSOLES='/dev/tty[1-6]' +++ SINGLE=/sbin/sushell ++ '[' pty = serial ']' ++ __sed_discard_ignored_files='/\(~\|\.bak\|\.orig\|\.rpmnew\|\.rpmorig\|\.rpmsave\)$/d' + . /etc/sysconfig/network ++ NETWORKING=yes ++ HOSTNAME=server.ghoghnooschat.ir ++ GATEWAY=144.76.228.1 + '[' yes = no ']' + nginx=/usr/sbin/nginx ++ basename /usr/sbin/nginx + prog=nginx + NGINX_CONF_FILE=/etc/nginx/nginx.conf + lockfile=/var/lock/subsys/nginx + case "$1" in + rh_status_q + rh_status + exit 0 and my nginx don't start! what should i do ? -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Fri Jan 17 17:01:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 17 Jan 2014 21:01:38 +0400 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: References: <20140114225331.GO1835@mdounin.ru> <20140115161523.GU1835@mdounin.ru> <20140116220111.GH1835@mdounin.ru> Message-ID: <20140117170138.GC1835@mdounin.ru> Hello! On Fri, Jan 17, 2014 at 12:15:25PM +0000, Eir?kur Nilsson wrote: > Hey Maxim > > On Thu, Jan 16, 2014 at 10:01 PM, Maxim Dounin wrote: > > > > > > Please try the following patch: > > > Wow, everything seems to work correctly with this patch, session reuse and > everything. I thought this patch would close the new connection, causing a > fresh reconnect, but I don't see that happening in the capture. Is nginx > holding on to - and trying to use - a handle to the old connection, which > needs cleanup? The problem is that read event was removed after ssl handshake, and never added back - so new data on client connection were not reported by the kernel. The ngx_handle_read_event() call ensures that an appropriate event is added again. > Do you think this patch might get applied into the mainline? Yes, after some more testing and review. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Fri Jan 17 17:07:33 2014 From: lists at ruby-forum.com (Gabriel Arrais) Date: Fri, 17 Jan 2014 18:07:33 +0100 Subject: Errors using HttpUseridModule In-Reply-To: <20140116230845.GR19804@craic.sysops.org> References: <20140116213701.GQ19804@craic.sysops.org> <3b39e34909310915a3af6dd0cea80691@ruby-forum.com> <20140116230845.GR19804@craic.sysops.org> Message-ID: <7a6caa94847711570c5fa882a6516f2d@ruby-forum.com> Francis Daly wrote in post #1133402: > On Thu, Jan 16, 2014 at 11:15:55PM +0100, Gabriel Arrais wrote: >> Francis Daly wrote in post #1133390: > > Hi there, > >> > Are they all coming from a particular browser version, for example? >> >> For now, I can't see any pattern and the error logs (how they are >> formatted now) does not help me, they don't pass the user-agent data. > > Error logs usually don't include that information. Access logs > frequently > do, and it is often possible to tie log lines together based on > timestamp > and url. If you have the information, it may be a useful data point to > see whether the problems are associated with only one specific old > version > of a browser, where newer versions of the same thing browser no problem, > for example. > I will try do this. >> > If they are allowed, then nginx should probably be changed to accept >> > them. If they are not, then the client should probably be changed not >> > to send them. >> >> Yes, I think if that is the case, nginx could be more permissive... > > Reading more closely, I still can't tell whether the surrounding quotes > are actively forbidden; but they are certainly advised against unless > the cookie was flagged as new, which I think nginx default ones are not. > > I would tend to invite the client to become fixed, or invite the user > to change clients. But that's because it costs me nothing to do so. > It's very difficult to do this in our product =/ >> Anyway I will leave here another log entry with the same >> error but a different value in the cookie. >> >> 2014/01/15 23:46:50 [error] 1577#0: *18789665 client sent too short >> userid cookie "sid=", client: xx.xx.xx.xx, server: xxxxxxx, request: > > There the cookie named "sid" has no value. I suspect more information > will be needed to know what is happening here -- was it somehow set > blank by something else running through nginx? I have nothing running through nginx that could be corrupting this cookie > Or is it an attempt by > the client to see what happens when they manually change things? > Maybe, but I don't think that it is the case... > Good luck with it, > > f > -- > Francis Daly francis at daoine.org Thank you again Francis! Gabriel Arrais gabriel.arrais at vtex.com.br -- Posted via http://www.ruby-forum.com/. From eirikur at nilsson.is Fri Jan 17 17:23:13 2014 From: eirikur at nilsson.is (=?ISO-8859-1?Q?Eir=EDkur_Nilsson?=) Date: Fri, 17 Jan 2014 17:23:13 +0000 Subject: Websocket tunnel broken with existing SSL session In-Reply-To: <20140117170138.GC1835@mdounin.ru> References: <20140114225331.GO1835@mdounin.ru> <20140115161523.GU1835@mdounin.ru> <20140116220111.GH1835@mdounin.ru> <20140117170138.GC1835@mdounin.ru> Message-ID: On Fri, Jan 17, 2014 at 5:01 PM, Maxim Dounin wrote: > > > The problem is that read event was removed after ssl handshake, > and never added back - so new data on client connection were not > reported by the kernel. The ngx_handle_read_event() call ensures > that an appropriate event is added again. > > > Do you think this patch might get applied into the mainline? > > Yes, after some more testing and review. Ahh, that makes sense. Silly me and my poor C reading skills. I didn't even look at ngx_handle_read_event since I'm not used to seeing side-effects in a conditional statement, but I guess that's a standard pattern in C to handle errors. :) I only wonder why I couldn't see anyone else experiencing this, with me reproducing it on so many versions with a basic configuration. I guess this only happens on Solaris? Thank you so much for your help. -------------- next part -------------- An HTML attachment was scrubbed... URL: From multiformeingegno at gmail.com Fri Jan 17 17:40:37 2014 From: multiformeingegno at gmail.com (Lorenzo Raffio) Date: Fri, 17 Jan 2014 18:40:37 +0100 Subject: Targeting homepage (not sub pages/dirs/) Message-ID: What should I add to this directive to target also the home page (root of the website, e.g. website.com)? I already have 'index.php' in the directive but what if I visit website.com(without /index.php)? if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") I think I need to add another if, this time not with ~* though but with = Something like if ($request_uri = "(/)") ..? Thanks in advance! -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jan 17 18:30:24 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 17 Jan 2014 18:30:24 +0000 Subject: Targeting homepage (not sub pages/dirs/) In-Reply-To: References: Message-ID: <20140117183024.GS19804@craic.sysops.org> On Fri, Jan 17, 2014 at 06:40:37PM +0100, Lorenzo Raffio wrote: Hi there, > What should I add to this directive to target also the home page (root of > the website, e.g. website.com)? I'd suggest location = / {} as being simplest. But if you want it to be part of the already-present regex, then you want to match either "exactly /" or "starts with /?". "exactly" is "start of string, /, end of string". "starts with" is "start of string, /, ?". Check your regex manual for which bits need to be escaped and which bits mean "start" and "end". Good luck with it, f -- Francis Daly francis at daoine.org From reeteshr at outlook.com Fri Jan 17 18:35:49 2014 From: reeteshr at outlook.com (Reetesh Ranjan) Date: Sat, 18 Jan 2014 00:05:49 +0530 Subject: Open Text Summarizer Upstream Module 1.0 Release Message-ID: Hi, I have developed a highly efficient version of OTS - the popular open source text summarizer s/w. For few documents, while OTS takes about 40ms to produce text summary, my version takes around 8ms only. I created a service using my version that listens to summary requests and provide summaries. I have developed an nginx upstream module for this service. You can use it in web sites that involve showing summaries of documents and would be thinking about performance due to scale and other features. Performance note: the service uses select and non-blocking socket I/O for communicating to client. Nginx upstream module for Summarizer:https://github.com/reeteshranjan/summarizer-nginx-module Highly efficient version of OTS:https://github.com/reeteshranjan/summarizer Original OTS:https://github.com/neopunisher/Open-Text-Summarizerhttp://libots.sourceforge.net/ Regards,Reetesh -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 17 21:52:31 2014 From: nginx-forum at nginx.us (PieterVI) Date: Fri, 17 Jan 2014 16:52:31 -0500 Subject: Rate limiting intervals Message-ID: <0e3677ec52d980739ef2803ad0e75ced.NginxMailingListEnglish@forum.nginx.org> Hello, With the rate limiting module you can easily rate limit based on seconds or on minutes. What I would like to do however is rate limit based on a 100 millisecond or 10 millisecond interval. That way you do not have a burst of requests at the beginning of a second. But a more continuous flow of requests. Is this possible with nginx? If not can anyone point me to the right location in the source where the rate limiting is actually done. (I assume it has something to do with ngx_http_limit_req_lookup in ngx_http_limit_req_module.c) Thanks in advance, Pieter Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246575,246575#msg-246575 From vbart at nginx.com Fri Jan 17 22:28:27 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 18 Jan 2014 02:28:27 +0400 Subject: Rate limiting intervals In-Reply-To: <0e3677ec52d980739ef2803ad0e75ced.NginxMailingListEnglish@forum.nginx.org> References: <0e3677ec52d980739ef2803ad0e75ced.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7946384.yB9ap9E98y@vbart-laptop> On Friday 17 January 2014 16:52:31 PieterVI wrote: > Hello, > > With the rate limiting module you can easily rate limit based on seconds or > on minutes. > What I would like to do however is rate limit based on a 100 millisecond or > 10 millisecond interval. [..] 100r/s is an equivalent of 1 request per 10 milliseconds. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Fri Jan 17 22:39:29 2014 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 17 Jan 2014 17:39:29 -0500 Subject: try_files (source) Message-ID: In ngx_http_core_try_files_phase (ngx_http_core_module.c) I can see how $uri $uri/ =404 are handled, but for example in; try_files /test.html $uri $uri/ =404; where(or how) is /test.html handled? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246577,246577#msg-246577 From nginx-forum at nginx.us Fri Jan 17 23:26:09 2014 From: nginx-forum at nginx.us (PieterVI) Date: Fri, 17 Jan 2014 18:26:09 -0500 Subject: Rate limiting intervals In-Reply-To: <7946384.yB9ap9E98y@vbart-laptop> References: <7946384.yB9ap9E98y@vbart-laptop> Message-ID: Hi Valentin, I know that 100r/s is equal to 1 request per 10 milliseconds. If you specifiy 100r/s nginx will send 100 requests within the first milliseconds of a certain second. Once these request are done no request will be handled anymore. When you would be able to specifiy more granular rate limiting interval you whould be able to get a more continuous load. see: http://imagebin.org/287286 Kind regards, Pieter Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246575,246578#msg-246578 From vbart at nginx.com Sat Jan 18 00:00:17 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sat, 18 Jan 2014 04:00:17 +0400 Subject: Rate limiting intervals In-Reply-To: References: <7946384.yB9ap9E98y@vbart-laptop> Message-ID: <6751128.IkDYi5ymLk@vbart-laptop> On Friday 17 January 2014 18:26:09 PieterVI wrote: > Hi Valentin, > > I know that 100r/s is equal to 1 request per 10 milliseconds. > > If you specifiy 100r/s nginx will send 100 requests within the first > milliseconds of a certain second. [..] Yes, but only if you have set burst=100. There is no second/minute granularity in limit_req module. It's just a measure. If you set 100r/s with zero burst, then every request received in less than 10 milliseconds after the previous permitted one will be declined. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sat Jan 18 19:17:29 2014 From: nginx-forum at nginx.us (PieterVI) Date: Sat, 18 Jan 2014 14:17:29 -0500 Subject: Rate limiting intervals In-Reply-To: <6751128.IkDYi5ymLk@vbart-laptop> References: <6751128.IkDYi5ymLk@vbart-laptop> Message-ID: <5a27439d4714b192c1a238756f42d400.NginxMailingListEnglish@forum.nginx.org> Hi Valentin, Thanks for the info. That indeed does seem to work as you mention. I have to figure out what else is going wrong then. Thanks, Pieter Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246575,246583#msg-246583 From nginx-forum at nginx.us Sat Jan 18 20:58:22 2014 From: nginx-forum at nginx.us (sergiks) Date: Sat, 18 Jan 2014 15:58:22 -0500 Subject: nested location, proxy, alias and multiple "/" locations Message-ID: Ciao, I'm setting up several isolated Laravel apps in sub-folders of a single site. Web root folder of Laravel is called "public", and I want to access such installation by URI "/app1/". There are static files, maybe few custom php, and a single entry point `/index.php`. So I came up with a config like this: [code] location ^~ /app1 { root /var/www/apps.mydomain.com/Laravel_app1/public; rewrite ^/app1/?(.*)$ /$1 break; location ~* \.(jpg|gif|png)$ { try_files $uri =404; ... } location ~* !(\.(jpg|gif|png))$ { proxy_pass http://127.0.0.1:8081; ... } } [/code] Two questions: 1. what happens to an "alias" inside a "^~" location like "location ^~ /app1 { ... }" ? seems like $uri is not changed and "/abcdef" part remains in place. 2. how can I write a nested default "/" location after a rewrite and a regexp location? Got [emerg] errors when trying to write it like this: location ^~ /app1 { rewrite ^/app1/?(.*)$ /$1 break; location ~* \.(jpg|gif|png)$ { ...static files instructions... } location / { proxy_pass ...php files and folders go to Laravel... } } Serge. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246584,246584#msg-246584 From francis at daoine.org Sun Jan 19 01:02:57 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Jan 2014 01:02:57 +0000 Subject: nested location, proxy, alias and multiple "/" locations In-Reply-To: References: Message-ID: <20140119010257.GT19804@craic.sysops.org> On Sat, Jan 18, 2014 at 03:58:22PM -0500, sergiks wrote: Hi there, > Web root folder > of Laravel is called "public", and I want to access such installation by URI > "/app1/". There are static files, maybe few custom php, and a single entry > point `/index.php`. I'm not sure what you're trying do to. Maybe your question is clear to other people, in which case, perhaps they'll answer. If not, could you try again giving examples of "this url should result in nginx serving this file", and "this url should result in nginx proxying to this other url", and whatever else you want nginx to do? > 1. what happens to an "alias" inside a "^~" location like "location ^~ /app1 > { ... }" ? seems like $uri is not changed and "/abcdef" part remains in > place. It should do what the docs say -- http://nginx.org/r/alias Can you describe what you see, and how it differs from what you expect? (And show the config you use -- there's no "alias" in the sample provided). (Note that there are some bugs relating to "alias" and "try_files".) > 2. how can I write a nested default "/" location after a rewrite and a > regexp location? Got [emerg] errors when trying to write it like this: > location ^~ /app1 { > rewrite ^/app1/?(.*)$ /$1 break; > location ~* \.(jpg|gif|png)$ { ...static files instructions... } > location / { proxy_pass ...php files and folders go to Laravel... } > } I'm not sure that that combination is possible. Are you trying to do something different from what proxy_pass http://127.0.0.1:8081/; would do? Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Jan 19 08:14:46 2014 From: nginx-forum at nginx.us (mojiz) Date: Sun, 19 Jan 2014 03:14:46 -0500 Subject: understanding proxy_buffering Message-ID: <4f90d0f338a18d387b699d8a83e37dd7.NginxMailingListEnglish@forum.nginx.org> Hi I'm trying to setup a reverse proxy for some private downloads. Here is our setup: 3 Storage servers with High capacity but slow HDDs running nginx 1 loadbalancing server with SSD and high internet uplink. my file sizes are several hundred megabytes (500+ up to 2GB) running nginx downloaders are on slow connections with download managers with up to 16 connections for each file here is what I want to do: a user sends a request to the SSD server, the ssd server requests the file from Slow servers and caches to response to its fast HDD and serving it to the client. But If I use the proxy_cache , the file serving has to wait till the file has been completly transfered and cache on the SSD disk wich (if several files are requested at the same time) results in a slow connection and timeout or other errors on the client side. so this is not an option. However I think proxy_buffering is answer to my problem, I think this means each part of the requested file (defined by ranges header) is cached independently. 1. Am I right? If I'm right then 2. how can I tell the nginx, to buffer like 5mb of requested part in memory (and the excess on the SSD disk) and serve the file to the client until the client has downloaded the part and then request another 5mb? I'm looking for a setting like output_buffers 1 5m; but for the proxied file. 3. Is there a better solution? Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246586,246586#msg-246586 From oscaretu at gmail.com Sun Jan 19 08:32:59 2014 From: oscaretu at gmail.com (oscaretu .) Date: Sun, 19 Jan 2014 09:32:59 +0100 Subject: understanding proxy_buffering In-Reply-To: <4f90d0f338a18d387b699d8a83e37dd7.NginxMailingListEnglish@forum.nginx.org> References: <4f90d0f338a18d387b699d8a83e37dd7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello One side question. Have you calculated a estimation of the expected life of a SSD disk when you are writing on it continously? I suppose that in such a situation it will die "quickly", due to the limited number of writes that the memory can support before getting damaged. Greetings. Oscar On Sun, Jan 19, 2014 at 9:14 AM, mojiz wrote: > Hi > I'm trying to setup a reverse proxy for some private downloads. Here is our > setup: > 3 Storage servers with High capacity but slow HDDs running nginx > 1 loadbalancing server with SSD and high internet uplink. > my file sizes are several hundred megabytes (500+ up to 2GB) running nginx > downloaders are on slow connections with download managers with up to 16 > connections for each file > > here is what I want to do: > a user sends a request to the SSD server, the ssd server requests the file > from Slow servers and caches to response to its fast HDD and serving it to > the client. But If I use the proxy_cache , the file serving has to wait > till > the file has been completly transfered and cache on the SSD disk wich (if > several files are requested at the same time) results in a slow connection > and timeout or other errors on the client side. so this is not an option. > > However I think proxy_buffering is answer to my problem, I think this means > each part of the requested file (defined by ranges header) is cached > independently. > 1. Am I right? > If I'm right then > 2. how can I tell the nginx, to buffer like 5mb of requested part in memory > (and the excess on the SSD disk) and serve the file to the client until the > client has downloaded the part and then request another 5mb? > I'm looking for a setting like output_buffers 1 5m; but for the proxied > file. > 3. Is there a better solution? > > Regards > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246586,246586#msg-246586 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jan 19 09:57:27 2014 From: nginx-forum at nginx.us (sergiks) Date: Sun, 19 Jan 2014 04:57:27 -0500 Subject: nested location, proxy, alias and multiple "/" locations In-Reply-To: <20140119010257.GT19804@craic.sysops.org> References: <20140119010257.GT19804@craic.sysops.org> Message-ID: >> I'm not sure what you're trying do to. My bad, I'll explain it in other way. There's a web root /var/www/site/ that responds to http://www.site.com Then there's a Laravel (front controller php framework) installation in /var/www/Laravel1, and its web root folder is in /var/www/Laravel1/public/ (index.php and static files are there) I want to let that Laravel app to respond to URIs under /app1/: http://www.site.com/app1/ http://www.site.com/app1/index.php (same - but I had problems that the file was downloading as plaintext in my experiments) http://www.site.com/app1/api/method (pretty urls) http://www.site.com/app1/css/bootstram.min.css (static files) There will be more that one such app sitting under various subfolders of a single web site. -------------- >> 1. what happens to an "alias" inside a "^~" location like >> "location ^~ /app1 { ... }" >> ? seems like $uri is not changed >> and "/abcdef" part remains in place. > It should do what the docs say -- http://nginx.org/r/alias The docs only says about a simple "location /i/" case and a regexp case. My q is "location ^~ /i/" which seems to skip the replacement as in the simple case: location ^~ /app1/ { alias /var/www/Laravel/public/; proxy_pass http://127.0.0.1:8081; This example passes unchanged "/app1/api/method" to the proxy, instead of "/api/method" Re. #2 ? figured that out, thanks. Sergei Sokolov. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246584,246589#msg-246589 From francis at daoine.org Sun Jan 19 11:02:33 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 19 Jan 2014 11:02:33 +0000 Subject: nested location, proxy, alias and multiple "/" locations In-Reply-To: References: <20140119010257.GT19804@craic.sysops.org> Message-ID: <20140119110233.GU19804@craic.sysops.org> On Sun, Jan 19, 2014 at 04:57:27AM -0500, sergiks wrote: Hi there, I'll describe what I think you want you want nginx to do. Please correct me where I've guessed wrongly. > There's a web root /var/www/site/ that responds to http://www.site.com > Then there's a Laravel (front controller php framework) installation in > /var/www/Laravel1, and its web root folder is in /var/www/Laravel1/public/ > (index.php and static files are there) > I want to let that Laravel app to respond to URIs under /app1/: A request for /app1/one.png should return the file /var/www/Laravel1/public/one.png or respond 404. A similar mapping applies for every request that end in .png, .jpg, and .gif, case-insensitively. A request for /app1/two.txt should proxy_pass to http://127.0.0.1:8081/two.txt and return whatever it returns. A similar thing happens for every other request. http://127.0.0.1:8081/ is a separate web server which is running "real" Laravel. So, for the following requests, nginx should: > http://www.site.com/app1/ proxy to http://127.0.0.1:8081/ > http://www.site.com/app1/index.php proxy to http://127.0.0.1:8081/index.php > http://www.site.com/app1/api/method (pretty urls) proxy to http://127.0.0.1:8081/api/method > http://www.site.com/app1/css/bootstram.min.css (static files) proxy to http://127.0.0.1:8081/css/bootstram.min.css > The docs only says about a simple "location /i/" case > and a regexp case. "^~" is a prefix location. The "non-regex" documentation applies. > My q is "location ^~ /i/" which seems to skip the replacement as in the > simple case: > location ^~ /app1/ { > alias /var/www/Laravel/public/; > proxy_pass http://127.0.0.1:8081; > This example passes unchanged "/app1/api/method" to the proxy, instead of > "/api/method" That's clear, thanks. That is working as intended. Your expectation is wrong. "alias" (along with "root") does not affect "proxy_pass". Does the following do what you want? === location ^~ /app1/ { alias /var/www/Laravel1/public/; proxy_pass http://127.0.0.1:8081/; location ~* \.(jpg|gif|png)$ {} } === (In general, unless the proxied server is careful, there are likely to be problems trying to change parts of the url as is done above.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Jan 19 11:33:56 2014 From: nginx-forum at nginx.us (mojiz) Date: Sun, 19 Jan 2014 06:33:56 -0500 Subject: understanding proxy_buffering In-Reply-To: References: Message-ID: <0f3cf98d3bd41232f2c62c0217add160.NginxMailingListEnglish@forum.nginx.org> Hadn't thought of that We could still use SAS 15K drives ,anyway even if the ssd thing works for a year or two I think the advantage will cover the cost Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246586,246595#msg-246595 From oscaretu at gmail.com Sun Jan 19 11:48:40 2014 From: oscaretu at gmail.com (oscaretu .) Date: Sun, 19 Jan 2014 12:48:40 +0100 Subject: understanding proxy_buffering In-Reply-To: <0f3cf98d3bd41232f2c62c0217add160.NginxMailingListEnglish@forum.nginx.org> References: <0f3cf98d3bd41232f2c62c0217add160.NginxMailingListEnglish@forum.nginx.org> Message-ID: ?One year or two?. I think that is a very optimist estimation. Extrapolating the information I get from a Windows program I execute in my laptop when I install software (it is a moment where you are writing files to the disk), I suppose you should expect a very much shorter life for your SSD disk. Perhaps anyone in the list have real experience in that subject... On Sun, Jan 19, 2014 at 12:33 PM, mojiz wrote: > Hadn't thought of that > We could still use SAS 15K drives ,anyway even if the ssd thing works for a > year or two I think the advantage will cover the cost > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246586,246595#msg-246595 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Jan 19 16:06:58 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 19 Jan 2014 11:06:58 -0500 Subject: cookie bomb - how to protect? Message-ID: very interesting read: http://homakov.blogspot.de/2014/01/cookie-bomb-or-lets-break-internet.html from thze blogpost: "TL;DR I can craft a page "polluting" CDNs, blogging platforms and other major networks with my cookies. Your browser will keep sending those cookies and servers will reject the requests, because Cookie header will be very long. The entire Internet will look down to you. I have no idea if it's a known trick, but I believe it should be fixed. Severity: depends. I checked only with Chrome. We all know a cookie can only contain 4k of data. How many cookies can I creates? **Many!** What cookies is browser going to send with every request? **All of them!** How do servers usually react if the request is too long? **They don't respond** " i checked it, and it works, i get the following error back: 400 Bad Request Request Header Or Cookie Too Large my question: is there a generic way to check the size of such headers like cookies etc and to cut them off, or should we live with such malicious intent? regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246597,246597#msg-246597 From coderman at gmail.com Sun Jan 19 16:35:08 2014 From: coderman at gmail.com (coderman) Date: Sun, 19 Jan 2014 08:35:08 -0800 Subject: cookie bomb - how to protect? In-Reply-To: References: Message-ID: On Sun, Jan 19, 2014 at 8:06 AM, mex wrote: > very interesting read: > http://homakov.blogspot.de/2014/01/cookie-bomb-or-lets-break-internet.html > > .... > my question: is there a generic way to check the size of such headers like > cookies etc and to cut them off, or should we live with such malicious intent? no good one size fits all solution that i have found. trade off here and you worsen over there... i have worked on an internal system (not public endpoint, internal to DMZ only) where the request URL, or any one of the individual request header values could approach 32KBytes in size, with a full client or server header reaching 64+KB. we use a custom Nginx build to handle this on the internal proxy tier only, not public. the public endpoints respond with a custom empty json response body for all such 4xx/5xx errors instead of default 400 like above. i'd love to know of more elegant ways to handle this, with header specific handling - especially cookies, if possible... best regards, P.S. off-topic, but i have used this "feature" before to check for content middling proxies between me and endpoints. such headers often resulting in proxy errors or timeouts even when implemented in transparent trying to be inconspicuous mode. From coderman at gmail.com Sun Jan 19 16:38:49 2014 From: coderman at gmail.com (coderman) Date: Sun, 19 Jan 2014 08:38:49 -0800 Subject: cookie bomb - how to protect? In-Reply-To: References: Message-ID: On Sun, Jan 19, 2014 at 8:35 AM, coderman wrote: > .... > i'd love to know of more elegant ways to handle this, with header > specific handling - especially cookies, if possible... the less better way to change this is: http://nginx.org/en/docs/http/ngx_http_core_module.html#large_client_header_buffers a blunt aggregate rather than header or cookie specific constraints. best regards, From vbart at nginx.com Sun Jan 19 16:47:47 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 19 Jan 2014 20:47:47 +0400 Subject: cookie bomb - how to protect? In-Reply-To: References: Message-ID: <3513414.UKZmsEXzC1@vbart-laptop> On Sunday 19 January 2014 11:06:58 mex wrote: [..] > i checked it, and it works, i get the following error back: > > 400 Bad Request > > Request Header Or Cookie Too Large > > my question: is there a generic way to check the size of such headers like > cookies etc > and to cut them off, or should we live with such malicious intent? > [..] You can include into this "Request Header Or Cookie Too Large" error page a JS script that will clear cookies. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sun Jan 19 21:42:05 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 19 Jan 2014 16:42:05 -0500 Subject: cookie bomb - how to protect? In-Reply-To: References: Message-ID: <06cc818d3c1477995b683f5cf985e6ea.NginxMailingListEnglish@forum.nginx.org> hi coderman, icreasing the headerr_size is not a solution, since i look for a generic solution to circumvent the outcome of those malicious request. a possible way to handle this is a lighweight WAF-solution, lua comes to my mind :) regards, mex p.s. we're working on a lighweight lua-based waf as addition to naxsi; but this is very early alpha atm, more on this later. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246597,246602#msg-246602 From coderman at gmail.com Mon Jan 20 00:46:17 2014 From: coderman at gmail.com (coderman) Date: Sun, 19 Jan 2014 16:46:17 -0800 Subject: cookie bomb - how to protect? In-Reply-To: <06cc818d3c1477995b683f5cf985e6ea.NginxMailingListEnglish@forum.nginx.org> References: <06cc818d3c1477995b683f5cf985e6ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Sun, Jan 19, 2014 at 1:42 PM, mex wrote: > hi coderman, > > icreasing the headerr_size is not a solution, since i look for a generic > solution to circumvent > the outcome of those malicious request. > > a possible way to handle this is a lighweight WAF-solution, > lua comes to my mind :) > ... > p.s. we're working on a lighweight lua-based waf as addition to naxsi; but > this is very > early alpha atm, more on this later. excellent! i agree this would be quite useful in general and appropriate for this specific situation. i'm fond of Lua for mysql-proxy, nmap, and other situations which share similar technical demands for extending built in behavior. i would love to know more as you make progress. best regards, From makailol7 at gmail.com Mon Jan 20 04:54:03 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 20 Jan 2014 10:24:03 +0530 Subject: duplicate Vary: Accept-Encoding header Message-ID: Hello, I use nginx/1.4.4 with gunzip = on and gzip_vary = on. This leads to a duplicate Vary Header. gzip_vary should do nothing if the header is already present: moki at mysrv:~$ curl -I http://192.168.1.196/home.html HTTP/1.1 200 OK Server: nginx/1.4.4 Date: Sun, 19 Jan 2014 11:30:59 GMT Content-Type: text/html Connection: keep-alive Vary: Accept-Encoding Vary: Accept-Encoding Location: home.html I have no control of the upstream server it may send a Vary header or not, in order to be safe I would like to use gzip_vary = on in order to prevent any problems here. This issue is in standard ngx_http_header_filter_module so can anyone suggest solution? Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Jan 20 07:41:52 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 20 Jan 2014 07:41:52 +0000 Subject: duplicate Vary: Accept-Encoding header In-Reply-To: References: Message-ID: On 20 January 2014 04:54, Makailol Charls wrote: > I use nginx/1.4.4 with gunzip = on and gzip_vary = on. This leads to a > duplicate Vary Header. [snip] > This issue is in standard ngx_http_header_filter_module so can anyone > suggest solution? Quick question: other than looking untidy, what's the actual problem that (does|can) occur when the header is duplicated? Jonathan From makailol7 at gmail.com Mon Jan 20 08:30:44 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 20 Jan 2014 14:00:44 +0530 Subject: duplicate Vary: Accept-Encoding header In-Reply-To: References: Message-ID: I have just noticed this header duplication but I could not notice any issue yet. But I believe, gzip_vary should do nothing if the header is already present. And it seems like a bug in core header_filter_module . Makailol On Mon, Jan 20, 2014 at 1:11 PM, Jonathan Matthews wrote: > On 20 January 2014 04:54, Makailol Charls wrote: > > I use nginx/1.4.4 with gunzip = on and gzip_vary = on. This leads to a > > duplicate Vary Header. > [snip] > > This issue is in standard ngx_http_header_filter_module so can anyone > > suggest solution? > > Quick question: other than looking untidy, what's the actual problem > that (does|can) occur when the header is duplicated? > > Jonathan > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From makailol7 at gmail.com Mon Jan 20 08:44:26 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 20 Jan 2014 14:14:26 +0530 Subject: How to define dynamic proxy_cache_path directory. Message-ID: Hello, I use Nginx/1.4.4 as a reverse proxy caching server for multiple sites. So far I have been using same proxy_cache_path for all sites. Now I want to use separate cache path for all sites. Is there anyway to make proxy_cache_path dynamic i.e. using some variable in proxy_cache_path like $host ? Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... URL: From black.fledermaus at arcor.de Mon Jan 20 11:18:53 2014 From: black.fledermaus at arcor.de (basti) Date: Mon, 20 Jan 2014 12:18:53 +0100 Subject: "ssl_session_cache" not working on windows Version Message-ID: <52DD061D.6060406@arcor.de> Hello, I have an nginx in front of an Windows IIS to delete some headers send my IIS from proprietary application. Now I try to use the following parameter on nginx running on windows: # SSL session cache #ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions #ssl_session_cache builtin:1000 shared:SSL:10m; both of them dosnt run. When I start nginx it will closed indemedaly, without any error on command line. Whats going on here? Can someone confirm this behaviour? I have try Version 1.4.4 and 1.5.8 from http://nginx.org/en/download.html. Regards, Basti From ru at nginx.com Mon Jan 20 11:27:07 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Mon, 20 Jan 2014 15:27:07 +0400 Subject: "ssl_session_cache" not working on windows Version In-Reply-To: <52DD061D.6060406@arcor.de> References: <52DD061D.6060406@arcor.de> Message-ID: <20140120112707.GA25944@lo0.su> On Mon, Jan 20, 2014 at 12:18:53PM +0100, basti wrote: > Hello, > I have an nginx in front of an Windows IIS to delete some headers send > my IIS from proprietary application. > Now I try to use the following parameter on nginx running on windows: > > # SSL session cache > #ssl_session_cache shared:SSL:10m; # a 1mb cache can hold about 4000 > sessions, so we can hold 40000 sessions > #ssl_session_cache builtin:1000 shared:SSL:10m; > > both of them dosnt run. > When I start nginx it will closed indemedaly, without any error on > command line. > Whats going on here? > Can someone confirm this behaviour? > > I have try Version 1.4.4 and 1.5.8 from http://nginx.org/en/download.html. http://nginx.org/en/docs/windows.html#known_issues The cache and other modules which require shared memory support do not work on Windows Vista and later versions due to address space layout randomization being enabled in these Windows versions. From mdounin at mdounin.ru Mon Jan 20 12:56:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Jan 2014 16:56:58 +0400 Subject: How to define dynamic proxy_cache_path directory. In-Reply-To: References: Message-ID: <20140120125657.GM1835@mdounin.ru> Hello! On Mon, Jan 20, 2014 at 02:14:26PM +0530, Makailol Charls wrote: > Hello, > > I use Nginx/1.4.4 as a reverse proxy caching server for multiple sites. So > far I have been using same proxy_cache_path for all sites. Now I want to > use separate cache path for all sites. > > Is there anyway to make proxy_cache_path dynamic i.e. using some variable > in proxy_cache_path like $host ? No, proxy_cache_path must be known without a request for cache loader and cache manager to work. Therefore it's not possible to use variables in proxy_cache_path, you have to explicitly define all caches you want to use. -- Maxim Dounin http://nginx.org/ From rvrv7575 at yahoo.com Mon Jan 20 13:40:30 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Mon, 20 Jan 2014 21:40:30 +0800 (SGT) Subject: Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again Message-ID: <1390225230.357.YahooMailNeo@web193505.mail.sg3.yahoo.com> Hello Is there a way we can achieve the following when nginx is acting as a reverse proxy 1. Client sends HTTP request with Accept-Encoding as gzip 2. Nginx proxy forwards the request with the request header intact 3. Origin server sends a compressed response 4. At the nginx proxy, we *decompress* the response, apply transformations on the response body and then *again* compress it In other words, is there a way to use the functionality of gzip and gunzip modules simultaneously for a processing a response and in a particular order -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 20 13:48:52 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 20 Jan 2014 17:48:52 +0400 Subject: Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again In-Reply-To: <1390225230.357.YahooMailNeo@web193505.mail.sg3.yahoo.com> References: <1390225230.357.YahooMailNeo@web193505.mail.sg3.yahoo.com> Message-ID: <20140120134852.GO1835@mdounin.ru> Hello! On Mon, Jan 20, 2014 at 09:40:30PM +0800, Rv Rv wrote: > Hello > Is there a way we can achieve the following when nginx is acting > as a reverse proxy > 1. Client sends HTTP request with Accept-Encoding as gzip > 2. Nginx proxy forwards the request with the request > header intact > 3. Origin server sends a compressed response > 4. At the nginx proxy, we *decompress* the response, apply > transformations on the response body and then *again* > compress it > In other words, is there a way to use the functionality of gzip > and gunzip modules simultaneously for a processing a response > and in a particular order As of now, it's not possible without code modifications - mostly because there is no way to tell gunzip filter you want it to always decompress a response. It can be achieved with minor code changes though. -- Maxim Dounin http://nginx.org/ From makailol7 at gmail.com Mon Jan 20 14:34:35 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 20 Jan 2014 20:04:35 +0530 Subject: Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again In-Reply-To: <20140120134852.GO1835@mdounin.ru> References: <1390225230.357.YahooMailNeo@web193505.mail.sg3.yahoo.com> <20140120134852.GO1835@mdounin.ru> Message-ID: Hello Maxim, Would you suggest the code change to achieve this? Thanks, Makailol On Mon, Jan 20, 2014 at 7:18 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jan 20, 2014 at 09:40:30PM +0800, Rv Rv wrote: > > > Hello > > Is there a way we can achieve the following when nginx is acting > > as a reverse proxy > > 1. Client sends HTTP request with Accept-Encoding as gzip > > 2. Nginx proxy forwards the request with the request > > header intact > > 3. Origin server sends a compressed response > > 4. At the nginx proxy, we *decompress* the response, apply > > transformations on the response body and then *again* > > compress it > > In other words, is there a way to use the functionality of gzip > > and gunzip modules simultaneously for a processing a response > > and in a particular order > > As of now, it's not possible without code modifications - mostly > because there is no way to tell gunzip filter you want it to > always decompress a response. It can be achieved with minor code > changes though. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Jan 20 14:50:51 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 20 Jan 2014 14:50:51 +0000 Subject: Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again In-Reply-To: References: <1390225230.357.YahooMailNeo@web193505.mail.sg3.yahoo.com> <20140120134852.GO1835@mdounin.ru> Message-ID: On 20 January 2014 14:34, Makailol Charls wrote: > Hello Maxim, > > Would you suggest the code change to achieve this? Instead of forking your own incompatible nginx version, I'd be tempted to test this out: Turn gzip on. Always remove the Accept-Encoding header from the proxied request. Perform the transformations on the uncompressed response. Let nginx gzip the content on the way back to the client. The ordering of the different modules you're using may spoil this idea, of course, but I'd give it a go myself. Cheers, Jonathan From reallfqq-nginx at yahoo.fr Mon Jan 20 17:43:02 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 20 Jan 2014 18:43:02 +0100 Subject: Decompressing a compressed response from upstream, applying transformations and then compressing for downstream again In-Reply-To: References: <1390225230.357.YahooMailNeo@web193505.mail.sg3.yahoo.com> <20140120134852.GO1835@mdounin.ru> Message-ID: Jonathan idea looks like a nice solution, because there is no modification of original nginx (good for updates and maintenance thus good for security). Always avoid breaking the update chain (thus diverting from original source, unless having another repository being reactive to - security - updates which you could pull from). However that means uncompressed traffic between backend and nginx proxy, thus: ++ traffic volume (memory) in backend interface compared to frontend interface -- CPU time on backend to compress data and on proxy to uncompress it Depending on your application, that could create a bottleneck. Maybe caching the compressed result on the proxy would help reducing backend work and traffic and as a result hope to limit the burden to it. My 2 cents, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jan 20 22:31:30 2014 From: nginx-forum at nginx.us (bidwell) Date: Mon, 20 Jan 2014 17:31:30 -0500 Subject: imap proxy limited to about 210 connections Message-ID: <3d5584d3e4eb4ae2f210dea8ebfabeb4.NginxMailingListEnglish@forum.nginx.org> I have nginx proxying imap and pop between 3 different backend servers, but it seems to be limited to about 210 concurrent connections. Requests beyond this get a connection timed out. I tried adding more worker processes but that didn't do anything. I have multi_accept on and have raised the number of worker_connections, but still no luck. I see rate limiting and connection bandwidth limiting, but these appear to apply to the http protocol and not the imap/pop protocol. What parameters to I adjust to increase the number of concurrent sessions to imap/pop? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246642,246642#msg-246642 From makailol7 at gmail.com Tue Jan 21 05:58:19 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 21 Jan 2014 11:28:19 +0530 Subject: Issue with multipart response compression. Message-ID: Hello I use Nginx/1.4.4 as a reverse proxy and my backend webserver generates multipart response with some dynamic boundary. I use nginx gzip module to send compress data to the client but it is unable to compress this multipart response which contains dynamic boundary in content_type. If I use gzip_type as below, it doesn't work. gzip_types 'multipart/mixed'; If I include boundary in gzip_type, it works fine but boundary is dynamic in my case. gzip_types 'multipart/mixed; boundary="Ajm,e3pN"' ; Can someone suggest solution for this? Thanks, Makailol -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jan 21 06:56:42 2014 From: nginx-forum at nginx.us (tilden18@gmail.com) Date: Tue, 21 Jan 2014 01:56:42 -0500 Subject: Help required in Setting up FTP Load Balancer in NGINX- Message-ID: I required help to configure FTP Load balancer in NGINX. Can you all please help me with the necessary steps or any link which explains the same. Note: My incoming FTP request will come in FTP protocol only. We cannot configure FTP ?over HTTP in our application? if NGINX is not supporting can you please suggest any load balancer which will server my purpose. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246645,246645#msg-246645 From makailol7 at gmail.com Tue Jan 21 08:29:19 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 21 Jan 2014 13:59:19 +0530 Subject: How to define dynamic proxy_cache_path directory. In-Reply-To: <20140120125657.GM1835@mdounin.ru> References: <20140120125657.GM1835@mdounin.ru> Message-ID: Hello Maxim, Thanks to reply on this . If we have single cache path (directory) for multiple sites, is there some way to clear(purge) all cached pages for particular site? Regards, Makailol On Mon, Jan 20, 2014 at 6:26 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jan 20, 2014 at 02:14:26PM +0530, Makailol Charls wrote: > > > Hello, > > > > I use Nginx/1.4.4 as a reverse proxy caching server for multiple sites. > So > > far I have been using same proxy_cache_path for all sites. Now I want to > > use separate cache path for all sites. > > > > Is there anyway to make proxy_cache_path dynamic i.e. using some variable > > in proxy_cache_path like $host ? > > No, proxy_cache_path must be known without a request for cache > loader and cache manager to work. Therefore it's not possible to > use variables in proxy_cache_path, you have to explicitly define > all caches you want to use. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrey.krieger.utkin at gmail.com Tue Jan 21 10:26:11 2014 From: andrey.krieger.utkin at gmail.com (Andrey Utkin) Date: Tue, 21 Jan 2014 12:26:11 +0200 Subject: Bounty for #416 In-Reply-To: References: Message-ID: 2014/1/14 Brian Davis : > Like the ticket creator says in the description, always serving cached > versions of pages would be extremely cool, so I wanted to let people know I > just offered a $500 bounty for http://trac.nginx.org/nginx/ticket/416 at > Bountysource. > > https://www.bountysource.com/issues/972735-proxy_cache_use_stale-run-updating-in-new-thread-and-serve-stale-data-to-all Is issue still actual? -- Andrey Utkin From contact at jpluscplusm.com Tue Jan 21 10:45:17 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 21 Jan 2014 10:45:17 +0000 Subject: Help required in Setting up FTP Load Balancer in NGINX- In-Reply-To: References: Message-ID: On 21 January 2014 06:56, tilden18 at gmail.com wrote: > I required help to configure FTP Load balancer in NGINX. > > Can you all please help me with the necessary steps or any link which > explains the same. > > Note: My incoming FTP request will come in FTP protocol only. We cannot > configure FTP ?over HTTP in our application? > > if NGINX is not supporting can you please suggest any load balancer which > will server my purpose. Wow - the dream of the 90s really *is* alive in $WHEREVER_YOU_ARE! ;-) Seriously - it's 2014. We have better alternatives than the insecure and awful mess that is FTP. Any company that thinks otherwise deserves all the pain that comes with FTP ... Anyway, Nginx doesn't talk FTP to the best of my knowledge. Whilst I'd normally suggest a TCP load balancer for this, FTP has certain properties which make it annoying to load balance that you have to take into account. This came up with after moment's googling. It might help: http://ben.timby.com/?page_id=210 Jonathan From mdounin at mdounin.ru Tue Jan 21 11:39:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 21 Jan 2014 15:39:27 +0400 Subject: imap proxy limited to about 210 connections In-Reply-To: <3d5584d3e4eb4ae2f210dea8ebfabeb4.NginxMailingListEnglish@forum.nginx.org> References: <3d5584d3e4eb4ae2f210dea8ebfabeb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140121113927.GT1835@mdounin.ru> Hello! On Mon, Jan 20, 2014 at 05:31:30PM -0500, bidwell wrote: > I have nginx proxying imap and pop between 3 different backend servers, but > it seems to be limited to about 210 concurrent connections. Requests beyond > this get a connection timed out. I tried adding more worker processes but > that didn't do anything. I have multi_accept on and have raised the number > of worker_connections, but still no luck. I see rate limiting and > connection bandwidth limiting, but these appear to apply to the http > protocol and not the imap/pop protocol. What parameters to I adjust to > increase the number of concurrent sessions to imap/pop? In nginx itself, tuning worker_connections should be enough. If it doesn't help, it indicate the problem is elsewhere - i.e. you either have to tune some system limit (try looking into error log to see if there is something there) or your backend servers. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jan 21 13:29:18 2014 From: nginx-forum at nginx.us (WheresWardy) Date: Tue, 21 Jan 2014 08:29:18 -0500 Subject: Nested location block for merging config returns 404? Message-ID: <70eab4d8124b34b7ec4bf829a395c7de.NginxMailingListEnglish@forum.nginx.org> I want a location to proxy to another service, and need to add an extra header for certain file types, but when trying to merge the configuration with a nested location instead of duplicating nginx instead returns a 404. For example, this configuration works: location ~ /(dir1/)?dir2/ { add_header X-My-Header value1; proxy_pass http://myproxy; } location ~ /(dir1/)?dir2/.*\.(txt|css) { add_header X-My-Static value2; add_header X-My-Header value1; proxy_pass http://myproxy; } Passing valid URLs to the above config works perfectly, and I'll get the extra header set if it's a txt or css file (examples for the sake of argument). However, what I want to accomplish is to merge these two blocks into one nested location block to save on the duplication, however when I do that I just get a 404 returned for the previously workings URLs: location ~ /(dir1/)?dir2/ { location \.(txt|css) { add_header X-My-Static value2; } add_header X-My-Header value1; proxy_pass http://myproxy; } Can location blocks actually be nested in this way? I'm wondering if the 404 is because it's only parsing the specific nested block, and doesn't fallback onto the remaining config underneath (and therefore never gets sent to the proxy, and it's nginx returning a 404 which would be expected for that URL). Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246655,246655#msg-246655 From contact at jpluscplusm.com Tue Jan 21 13:35:24 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 21 Jan 2014 13:35:24 +0000 Subject: Nested location block for merging config returns 404? In-Reply-To: <70eab4d8124b34b7ec4bf829a395c7de.NginxMailingListEnglish@forum.nginx.org> References: <70eab4d8124b34b7ec4bf829a395c7de.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 21 January 2014 13:29, WheresWardy wrote: > I just get a 404 returned for the previously workings URLs: > > location ~ /(dir1/)?dir2/ { > location \.(txt|css) { > add_header X-My-Static value2; > } > add_header X-My-Header value1; > proxy_pass http://myproxy; > } > > Can location blocks actually be nested in this way? I believe they can be, staticprefix-inside-regex as you've done. The norm is to have regex-inside-staticprefix, and it's usually done for performance reasons. I suggest you've made an error in not marking your inner location as a regex, however, and this may be causing your 404s. Jonathan From nginx-forum at nginx.us Tue Jan 21 13:40:20 2014 From: nginx-forum at nginx.us (WheresWardy) Date: Tue, 21 Jan 2014 08:40:20 -0500 Subject: Nested location block for merging config returns 404? In-Reply-To: References: Message-ID: I don't think it's a regex issue, because if I add an additional proxy_pass inside the nested location block, I then get a valid request. Additionally, any headers set inside the outer location block don't appear, but then if I duplicate them inside the nested location block, they then appear (just the once). This basically means I'd have to duplicate anything in the outer location block in the nested one, which kind of defeats the purpose, and it's cleaner to have the separate same-level config. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246655,246657#msg-246657 From contact at jpluscplusm.com Tue Jan 21 13:48:22 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 21 Jan 2014 13:48:22 +0000 Subject: Nested location block for merging config returns 404? In-Reply-To: References: Message-ID: On 21 January 2014 13:40, WheresWardy wrote: > I don't think it's a regex issue, because if I add an additional proxy_pass > inside the nested location block, I then get a valid request. Additionally, > any headers set inside the outer location block don't appear, but then if I > duplicate them inside the nested location block, they then appear (just the > once). You may have multiple things to solve here. Your line location \.(txt|css) { looks like a regex, but you're not telling nginx it /is/ a regex. I wouldn't expect anything to work until you fix that. Secondly, the inheritance behaviours you're seeing are as I'd expect: 1) I don't believe proxy_pass statements are inherited, both as you're seeing from outer-location{} to inner-location{} but also from, for example, server{} to location{}. You need to define each proxy_pass where you want it to be used. 2) add_header (again, in my understanding) is a slightly different beast: it *is* inherited, outer to inner, but only if you *don't* re-use it in the inner location. As soon as you use it inside the inner location, all the inherited add_header directives are forgotten. Again - that's what I /think/ is the case - I welcome others to correct me if I'm misremembering ... Cheers, Jonathan From nginx-forum at nginx.us Tue Jan 21 13:59:48 2014 From: nginx-forum at nginx.us (WheresWardy) Date: Tue, 21 Jan 2014 08:59:48 -0500 Subject: Nested location block for merging config returns 404? In-Reply-To: References: Message-ID: > Your line > > location \.(txt|css) { > > looks like a regex, but you're not telling nginx it /is/ a regex. I > wouldn't expect anything to work until you fix that. Apologies, that was a typo during simplification of my config. I do indeed have a ~ in my nested location block, and it's definitely matching correctly. Everything else you've specified seems to fit my observed behaviour, so think it's just that my use case doesn't fit with the inheritance rules as they stand. Thanks for your help! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246655,246660#msg-246660 From contact at jpluscplusm.com Tue Jan 21 14:10:28 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 21 Jan 2014 14:10:28 +0000 Subject: Nested location block for merging config returns 404? In-Reply-To: References: Message-ID: On 21 January 2014 13:59, WheresWardy wrote: >> Your line >> >> location \.(txt|css) { >> >> looks like a regex, but you're not telling nginx it /is/ a regex. I >> wouldn't expect anything to work until you fix that. > > Apologies, that was a typo during simplification of my config. I do indeed > have a ~ in my nested location block, and it's definitely matching > correctly. > > Everything else you've specified seems to fit my observed behaviour, so > think it's just that my use case doesn't fit with the inheritance rules as > they stand. Thanks for your help! No problem. You could look at using a map to achieve the config deduplication you're aiming for. Something like this (typed, but not tested or syntax-checked!) ------------------------------------ http { map $uri $map_output { default ""; ~ \.txt "value2"; ~ \.css "value2"; } server { location ~ /(dir1/)?dir2/ { add_header X-My-Header value1; add_header X-My-Static $map_output; proxy_pass http://myproxy; } } } ------------------------------------ This will also have the effect of wiping out inbound X-My-Static headers, which you could get round by referencing them in the default map output if you really needed them to be passed through ... Jonathan From vbart at nginx.com Tue Jan 21 16:24:23 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 21 Jan 2014 20:24:23 +0400 Subject: Nested location block for merging config returns 404? In-Reply-To: <70eab4d8124b34b7ec4bf829a395c7de.NginxMailingListEnglish@forum.nginx.org> References: <70eab4d8124b34b7ec4bf829a395c7de.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1816126.p8xVIqTJsZ@vbart-laptop> On Tuesday 21 January 2014 08:29:18 WheresWardy wrote: > I want a location to proxy to another service, and need to add an extra > header for certain file types, but when trying to merge the configuration > with a nested location instead of duplicating nginx instead returns a 404. > > For example, this configuration works: > > location ~ /(dir1/)?dir2/ { > add_header X-My-Header value1; > proxy_pass http://myproxy; > } > > location ~ /(dir1/)?dir2/.*\.(txt|css) { > add_header X-My-Static value2; > add_header X-My-Header value1; > proxy_pass http://myproxy; > } > > Passing valid URLs to the above config works perfectly, and I'll get the > extra header set if it's a txt or css file (examples for the sake of > argument). However, what I want to accomplish is to merge these two blocks > into one nested location block to save on the duplication, however when I do > that I just get a 404 returned for the previously workings URLs: > > location ~ /(dir1/)?dir2/ { > location \.(txt|css) { > add_header X-My-Static value2; > } > add_header X-My-Header value1; > proxy_pass http://myproxy; > } > > Can location blocks actually be nested in this way? I'm wondering if the 404 > is because it's only parsing the specific nested block, and doesn't fallback > onto the remaining config underneath (and therefore never gets sent to the > proxy, and it's nginx returning a 404 which would be expected for that URL). [..] Yes, they can. But, please note that the proxy_pass directive is not inherited, and you should duplicate it in your nested location block. There is a good article on the topic: http://blog.martinfjordvald.com/2012/08/understanding-the-nginx-configuration-inheritance-model/ wbr, Valentin V. Bartenev From rvrv7575 at yahoo.com Tue Jan 21 19:07:41 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Wed, 22 Jan 2014 03:07:41 +0800 (SGT) Subject: Rewriting GET request parameters while configured as a reverse proxy Message-ID: <1390331261.58248.YahooMailNeo@web193502.mail.sg3.yahoo.com> Hello Is there a way to make nginx rewrite the GET request parameters while configured as a reverse proxy. e.g. if nginx receives a request GET / foo.html?abc=123 , can nginx rewrite it to GET /foo.html?abc=456 (nginx admin specifies 123 to be changed to 456) and then do a proxy pass to the origin server.? I did a test run with using $args on the lines of? if($args~post=140){ rewrite ^ http://example.com/ permanent; } as explained at http://wiki.nginx.org/HttpRewriteModule. However, this seems to work only for when nginx is the web server. It tries to fetch the content from the local nginx html folder. Please provide inputs. ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rand at sent.com Tue Jan 21 19:30:29 2014 From: rand at sent.com (rand at sent.com) Date: Tue, 21 Jan 2014 11:30:29 -0800 Subject: proxy_redirect can't be def'd in an include file; ok in main config? Message-ID: <1390332629.21274.73604993.762F3A43@webmail.messagingengine.com> On Tue, Jan 21, 2014, at 11:10 AM, Maxim Dounin wrote: ... > And it's certainly a wrong list to write such questions. Thank > you for cooperation. ... > Hello! > > On Tue, Jan 21, 2014 at 10:55:58AM -0800, rand at sent.com wrote: > > > i've nginx 1.5.8 > > > > If I check a config containing > > > > cat sites/test.conf > > ... > > location / { > > proxy_pass http://VARNISH; > > include includes/varnish.conf; > > } > > ... > > cat includes/varnish.conf > > proxy_redirect default; > > proxy_connect_timeout 600s; > > proxy_read_timeout 600s; > > ... > > > > I get an error > > > > nginx: [emerg] "proxy_redirect default" should be placed after > > the "proxy_pass" directive in > > //etc/nginx/includes/varnish.conf:1 > > nginx: configuration file //etc/nginx/nginx.conf test failed > > > > but if I change to, > > > > cat sites/test.conf > > ... > > location / { > > proxy_pass http://VARNISH; > > + proxy_redirect default; > > include includes/varnish.conf; > > } > > ... > > cat includes/varnish.conf > > - proxy_redirect default; > > + #proxy_redirect default; > > proxy_connect_timeout 600s; > > proxy_read_timeout 600s; > > ... > > > > then config check returns > > > > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > > nginx: configuration file /etc/nginx/nginx.conf test is > > successful > > > > Why isn't the proxy_redirect viable in the include file? Intended, or a > > bug? > > Most likely, there are other uses of the "includes/varnish.conf" > include file in your config, and they cause error reported. No, there aren't. There is only one site enabled, and only one instance of includes/varnish.conf, in that site config. From bdavis at lakeshoreint.com Tue Jan 21 19:54:49 2014 From: bdavis at lakeshoreint.com (Brian Davis) Date: Tue, 21 Jan 2014 13:54:49 -0600 Subject: Bounty for #416 In-Reply-To: References: Message-ID: On Tue, Jan 21, 2014 at 4:26 AM, Andrey Utkin < andrey.krieger.utkin at gmail.com> wrote: > 2014/1/14 Brian Davis : > > Like the ticket creator says in the description, always serving cached > > versions of pages would be extremely cool, so I wanted to let people > know I > > just offered a $500 bounty for http://trac.nginx.org/nginx/ticket/416 at > > Bountysource. > > > > > https://www.bountysource.com/issues/972735-proxy_cache_use_stale-run-updating-in-new-thread-and-serve-stale-data-to-all > > Is issue still actual? > > -- > Andrey Utkin > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > As far as I know, it is. If there's a way to achieve the aim of this ticket with the current version of nginx, I would _love_ to know how to do it. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue Jan 21 22:23:51 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 21 Jan 2014 23:23:51 +0100 Subject: Rewriting GET request parameters while configured as a reverse proxy In-Reply-To: <1390331261.58248.YahooMailNeo@web193502.mail.sg3.yahoo.com> References: <1390331261.58248.YahooMailNeo@web193502.mail.sg3.yahoo.com> Message-ID: It seems your syntax is obsolete. Have a look at http://nginx.org/en/docs/http/converting_rewrite_rules.html where it is explicitly written. It is also explicitely wriiten on the wiki page you visited that the resource is obsolete and that you should use http://nginx.org/en/docs/http/ngx_http_rewrite_module.html. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 22 01:04:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 05:04:38 +0400 Subject: proxy_redirect can't be def'd in an include file; ok in main config? In-Reply-To: <1390332629.21274.73604993.762F3A43@webmail.messagingengine.com> References: <1390332629.21274.73604993.762F3A43@webmail.messagingengine.com> Message-ID: <20140122010438.GD1835@mdounin.ru> Hello! On Tue, Jan 21, 2014 at 11:30:29AM -0800, rand at sent.com wrote: > > > On Tue, Jan 21, 2014, at 11:10 AM, Maxim Dounin wrote: > ... > > And it's certainly a wrong list to write such questions. Thank > > you for cooperation. > ... > > > Hello! > > > > On Tue, Jan 21, 2014 at 10:55:58AM -0800, rand at sent.com wrote: > > > > > i've nginx 1.5.8 > > > > > > If I check a config containing > > > > > > cat sites/test.conf > > > ... > > > location / { > > > proxy_pass http://VARNISH; > > > include includes/varnish.conf; > > > } > > > ... > > > cat includes/varnish.conf > > > proxy_redirect default; > > > proxy_connect_timeout 600s; > > > proxy_read_timeout 600s; > > > ... > > > > > > I get an error > > > > > > nginx: [emerg] "proxy_redirect default" should be placed after > > > the "proxy_pass" directive in > > > //etc/nginx/includes/varnish.conf:1 > > > nginx: configuration file //etc/nginx/nginx.conf test failed > > > > > > but if I change to, > > > > > > cat sites/test.conf > > > ... > > > location / { > > > proxy_pass http://VARNISH; > > > + proxy_redirect default; > > > include includes/varnish.conf; > > > } > > > ... > > > cat includes/varnish.conf > > > - proxy_redirect default; > > > + #proxy_redirect default; > > > proxy_connect_timeout 600s; > > > proxy_read_timeout 600s; > > > ... > > > > > > then config check returns > > > > > > nginx: the configuration file /etc/nginx/nginx.conf syntax is ok > > > nginx: configuration file /etc/nginx/nginx.conf test is > > > successful > > > > > > Why isn't the proxy_redirect viable in the include file? Intended, or a > > > bug? > > > > Most likely, there are other uses of the "includes/varnish.conf" > > include file in your config, and they cause error reported. > > No, there aren't. There is only one site enabled, and only one instance > of includes/varnish.conf, in that site config. Sorry, but there are no reasons to trust your words more than nginx code, at least till you provide full configuration to reproduce what you claim see. Please try checking again (an easy test is to just comment out this particular "include" directive, while preserving proxy_redirect line in the include file). If it doesn't help, please provide full configuration. -- Maxim Dounin http://nginx.org/ From emailbluedust at gmail.com Wed Jan 22 09:18:49 2014 From: emailbluedust at gmail.com (blue dust) Date: Wed, 22 Jan 2014 14:48:49 +0530 Subject: Trying to set-up a local development environment Message-ID: I am a complete newbie to the server side of things. My background has been installing wamp (http://www.wampserver.com/en/), dumping files into the www folder, and testing via localhost. It all just worked. Recently I switched to linux (archlinux) and am trying to set-up a local development environment. This is my project structure: www ??? process.php ??? css ? ??? registration.css ??? images ? ??? bg.png ? ??? bg_content.jpg ? ??? in_process.png ? ??? in_use.png ? ??? okay.png ? ??? status.gif ??? index.html ??? registration.html ??? thanks.html This is my nginx.conf: user http; worker_processes 1; events { worker_connections 1024; } http { server { listen 80; server_name localhost; location / { root /srv/www; index index.html; } } } Now here is where I am stuck and what I think nginx is doing according to my nginx.conf. 1. The root directive maps request to localhost/ to the local directory /srv/www in the filesystem and serves up index.html 2. But when I access localhost/registration.html, the html file loads OK, but the corresponding registration.css fails to take effect. When I checked it in chrome inspector, the GET request is getting a 304 Not Modified (even in incognito mode) or a 200 OK (from cache) What am I doing wrong? From mdounin at mdounin.ru Wed Jan 22 14:02:13 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 22 Jan 2014 18:02:13 +0400 Subject: nginx-1.5.9 Message-ID: <20140122140213.GK1835@mdounin.ru> Changes with nginx 1.5.9 22 Jan 2014 *) Change: now nginx expects escaped URIs in "X-Accel-Redirect" headers. *) Feature: the "ssl_buffer_size" directive. *) Feature: the "limit_rate" directive can now be used to rate limit responses sent in SPDY connections. *) Feature: the "spdy_chunk_size" directive. *) Feature: the "ssl_session_tickets" directive. Thanks to Dirkjan Bussink. *) Bugfix: the $ssl_session_id variable contained full session serialized instead of just a session id. Thanks to Ivan Risti?. *) Bugfix: nginx incorrectly handled escaped "?" character in the "include" SSI command. *) Bugfix: the ngx_http_dav_module did not unescape destination URI of the COPY and MOVE methods. *) Bugfix: resolver did not understand domain names with a trailing dot. Thanks to Yichun Zhang. *) Bugfix: alerts "zero size buf in output" might appear in logs while proxying; the bug had appeared in 1.3.9. *) Bugfix: a segmentation fault might occur in a worker process if the ngx_http_spdy_module was used. *) Bugfix: proxied WebSocket connections might hang right after handshake if the select, poll, or /dev/poll methods were used. *) Bugfix: the "xclient" directive of the mail proxy module incorrectly handled IPv6 client addresses. -- Maxim Dounin http://nginx.org/en/donation.html From thomas at glanzmann.de Wed Jan 22 15:14:40 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 22 Jan 2014 16:14:40 +0100 Subject: Implementing CONNECT in nginx Message-ID: <20140122151440.GB28425@glanzmann.de> Hello everyone, I would like to extend nginx with a CONNECT statement which connects to a TCP socket. Could someone walk me through which source files I need to modify and which fucntions I should have a look at? Or if there is anything else that can give me a quickstart? My use case is that I would like to share one tcp port between a webserver that I already have and a SSL VPN. The SSL VPN does the following: CONNECT /CSCOSSLC/tunnel HTTP/1.1 Host: lync.gmvl.de User-Agent: Cisco AnyConnect VPN Agent for Windows 3.0.07059 Cookie: webvpn=02F9D1 at 12288@188C at D7B405A4A46480CF364F1A6FD51998A0025DC727 X-CSTP-Version: 1 X-CSTP-Hostname: lenovo X-CSTP-MTU: 1306 X-CSTP-Address-Type: IPv6,IPv4 X-DTLS-Master-Secret: D40F07275F15A18F5872905B79FDAC4FD8C33EA13503DF29878C10FE6DA1D025B1128C66AB06E3EB1CEBBBFFF00CBC08 X-DTLS-CipherSuite: AES256-SHA:AES128-SHA:DES-CBC3-SHA:DES-CBC-SHA X-DTLS-Accept-Encoding: lzs X-CSTP-Accept-Encoding: lzs,deflate X-CSTP-Protocol: Copyright (c) 2004 Cisco Systems, Inc. References: http://www.infradead.org/ocserv/ http://article.gmane.org/gmane.network.vpn.openconnect.devel/1040 Cheers, Thomas From kworthington at gmail.com Wed Jan 22 16:53:18 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 22 Jan 2014 11:53:18 -0500 Subject: [nginx-announce] nginx-1.5.9 In-Reply-To: <20140122140218.GL1835@mdounin.ru> References: <20140122140218.GL1835@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.5.9 for Windows http://goo.gl/awceRm (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via my Twitter stream ( http://twitter.com/kworthington), if you prefer to receive updates that way. Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington On Wed, Jan 22, 2014 at 9:02 AM, Maxim Dounin wrote: > Changes with nginx 1.5.9 22 Jan > 2014 > > *) Change: now nginx expects escaped URIs in "X-Accel-Redirect" > headers. > > *) Feature: the "ssl_buffer_size" directive. > > *) Feature: the "limit_rate" directive can now be used to rate limit > responses sent in SPDY connections. > > *) Feature: the "spdy_chunk_size" directive. > > *) Feature: the "ssl_session_tickets" directive. > Thanks to Dirkjan Bussink. > > *) Bugfix: the $ssl_session_id variable contained full session > serialized instead of just a session id. > Thanks to Ivan Risti?. > > *) Bugfix: nginx incorrectly handled escaped "?" character in the > "include" SSI command. > > *) Bugfix: the ngx_http_dav_module did not unescape destination URI of > the COPY and MOVE methods. > > *) Bugfix: resolver did not understand domain names with a trailing > dot. > Thanks to Yichun Zhang. > > *) Bugfix: alerts "zero size buf in output" might appear in logs while > proxying; the bug had appeared in 1.3.9. > > *) Bugfix: a segmentation fault might occur in a worker process if the > ngx_http_spdy_module was used. > > *) Bugfix: proxied WebSocket connections might hang right after > handshake if the select, poll, or /dev/poll methods were used. > > *) Bugfix: the "xclient" directive of the mail proxy module incorrectly > handled IPv6 client addresses. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 22 18:31:50 2014 From: nginx-forum at nginx.us (AD7six) Date: Wed, 22 Jan 2014 13:31:50 -0500 Subject: Understanding location blocks and try files Message-ID: <8ecdee48197a43c1801d2ac44fb4b6f3.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to understand a problem I'm facing in a typical frontend-controller application. I've setup a test config with a single simple server [1], and ran a test script with debugging enabled to show what happens [2]. What confuses me is why this example is a 404: > curl -i http://nginx.dev/apples.json > HTTP/1.1 404 Not Found > Server: nginx/1.4.4 As can be seen in the log [3] there is an invalid response from /index.php. If I disable the location block adding cache headers for json files [4] though, the response is fine. Can someone shed some light as to why this happens? Is there a way to define location blocks for static files - without that causing problems for dynamic requests for the same url pattern? Any help appreciated, Cheers, AD [1] https://github.com/AD7six/server-configs-nginx [2] https://gist.github.com/AD7six/eafe7cc6fc655c3195c4 [3] https://gist.github.com/AD7six/eafe7cc6fc655c3195c4#file-error-log-L424 [4] https://github.com/AD7six/server-configs-nginx/blob/location-debug/h5bp/location/expires.conf#L12 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246713,246713#msg-246713 From contact at jpluscplusm.com Wed Jan 22 19:02:39 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 22 Jan 2014 19:02:39 +0000 Subject: Understanding location blocks and try files In-Reply-To: <8ecdee48197a43c1801d2ac44fb4b6f3.NginxMailingListEnglish@forum.nginx.org> References: <8ecdee48197a43c1801d2ac44fb4b6f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 22 January 2014 18:31, AD7six wrote: > What confuses me is why this example is a 404: > >> curl -i http://nginx.dev/apples.json >> HTTP/1.1 404 Not Found >> Server: nginx/1.4.4 > > As can be seen in the log [3] there is an invalid response from /index.php. > If I disable the location block adding cache headers for json files [4] > though, the response is fine. > > Can someone shed some light as to why this happens? I was half-way through writing a response when I noticed I'd slightly misunderstood part of what you wrote. I thought about going back and checking what I'd missed but to be brutally honest, the horrible maze of includes I'd had to work my way through once already put me /right/ off! Suggestions: 1) Have a very careful read through http://nginx.org/r/location and nginx.org/r/try_files. 2) Simplify your problem so you can present it inline to the list, without external links and without *all* those includes! You'll get significantly more interest in your problem that way, and may actually discover the solution yourself ;-) Cheers, Jonathan From nginx-forum at nginx.us Wed Jan 22 19:02:51 2014 From: nginx-forum at nginx.us (mnordhoff) Date: Wed, 22 Jan 2014 14:02:51 -0500 Subject: Logging $ssl_session_id can crash Nginx 1.5.9 worker Message-ID: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> I run the nginx.org mainline packages on Ubuntu 12.04, 32- and 64-bit. I use a wacky custom log format, and after 1.5.9 was released today, I enabled logging the $ssl_session_id variable. I later ran an SSL Labs SSL Server Test, [0] which makes numerous HTTPS requests of various sorts, and lo and behold, one of my worker processes core dumped. I fooled around with my configuration and determined that the problem was logging $ssl_session_id. If I don't log it, it's fine. If I enable a log that contains $ssl_session_id -- even only $ssl_session_id -- it crashes. Normal HTTPS requests -- well, I just tried curl and Firefox -- work fine. I notice that curl does log a session ID, but for Firefox that field is just a "-". I have no idea if that's an(other) Nginx bug or just a difference between the two clients, but it smells funny to me. I briefly enabled $ssl_session_id logging on 2013-12-17 with 1.5.7 or 1.5.8 and whatever Firefox version was current at the time and it did log he session data. (I never tried curl or SSL Labs then.) nginx.conf: (my logging nonsense is at the end) Good and bad log snippets at and pasted below. Bad: ==> /var/log/nginx/access13.log <== 173.203.79.216 36823 "www.mn9.us" 192.155.93.101 443 "mn9.us" 586 1 on TLSv1.2 - "-" [2014-01-22T18:05:54+00:00] "GET / HTTP/1.0" 200 975 612 - "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" "ECDHE-RSA-AES128-SHA" "-" 173.203.79.216 36876 "-" 192.155.93.101 443 "mn9.us" 594 1 on TLSv1 - "-" [2014-01-22T18:06:06+00:00] "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 400 0 0 - "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" "DHE-RSA-AES128-SHA" "-" ==> /var/log/nginx/error.log <== 2014/01/22 18:06:06 [notice] 24848#0: *595 SSL renegotiation disabled while reading client request headers, client: 173.203.79.216, server: mn9.us, request: "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 2014/01/22 18:06:07 [notice] 24681#0: signal 17 (SIGCHLD) received 2014/01/22 18:06:07 [alert] 24681#0: worker process 24848 exited on signal 11 (core dumped) 2014/01/22 18:06:07 [notice] 24681#0: start worker process 26865 2014/01/22 18:06:07 [notice] 24681#0: signal 29 (SIGIO) received Good: ==> /var/log/nginx/access11.log <== 173.203.79.216 60618 "www.mn9.us" 192.155.93.101 443 "mn9.us" 1003 1 on TLSv1.2 - "-" [2014-01-22T18:27:09+00:00] "GET / HTTP/1.0" 200 975 612 - "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" "ECDHE-RSA-AES128-SHA" 173.203.79.216 60701 "-" 192.155.93.101 443 "mn9.us" 1008 1 on TLSv1 - "-" [2014-01-22T18:27:22+00:00] "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 400 0 0 - "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" "DHE-RSA-AES128-SHA" ==> /var/log/nginx/error.log <== 2014/01/22 18:27:22 [notice] 27156#0: *1009 SSL renegotiation disabled while reading client request headers, client: 173.203.79.216, server: mn9.us, request: "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" ==> /var/log/nginx/access11.log <== 173.203.79.216 60948 "-" 192.155.93.101 443 "mn9.us" 1009 1 on TLSv1.2 - "-" [2014-01-22T18:27:22+00:00] "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 400 0 0 - "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" "(NONE)" [0] Cheers -- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246716,246716#msg-246716 From nginx-forum at nginx.us Wed Jan 22 19:54:35 2014 From: nginx-forum at nginx.us (AD7six) Date: Wed, 22 Jan 2014 14:54:35 -0500 Subject: Understanding location blocks and try files In-Reply-To: References: Message-ID: <83e6652f852a6ce67b069073f37cf9dc.NginxMailingListEnglish@forum.nginx.org> Thanks for the reply, I've read through those sections again - if I'm missing something obvious I'm afraid I need someone to point it out to me :| Sorry about that I thought pointing at a working example would allow close scrutiny - didn't think to remove the files/config that weren't in use. Only the mentioned location block is relevant, inlining that and the fastcgi config becomes [1]:
server {
    listen 80;
    server_name nginx.dev *.nginx.dev;

    access_log /tmp/access.log;
    error_log  /tmp/error.log debug;

    error_page 404 /404.html;

    root /etc/nginx/www;
    index index.php index.html index.htm;

    try_files $uri $uri.html /index.php?$args;

    send_timeout 600s;

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_param   QUERY_STRING        $query_string;
        fastcgi_param   REQUEST_METHOD      $request_method;
        fastcgi_param   CONTENT_TYPE        $content_type;
        fastcgi_param   CONTENT_LENGTH      $content_length;

        fastcgi_param   SCRIPT_FILENAME     $request_filename;
        fastcgi_param   SCRIPT_NAME     $fastcgi_script_name;
        fastcgi_param   REQUEST_URI     $request_uri;
        fastcgi_param   DOCUMENT_URI        $document_uri;
        fastcgi_param   DOCUMENT_ROOT       $document_root;
        fastcgi_param   SERVER_PROTOCOL     $server_protocol;

        fastcgi_param   GATEWAY_INTERFACE   CGI/1.1;
        fastcgi_param   SERVER_SOFTWARE     nginx/$nginx_version;

        fastcgi_param   REMOTE_ADDR     $remote_addr;
        fastcgi_param   REMOTE_PORT     $remote_port;
        fastcgi_param   SERVER_ADDR     $server_addr;
        fastcgi_param   SERVER_PORT     $server_port;
        fastcgi_param   SERVER_NAME     $server_name;

        fastcgi_param   HTTPS           $https if_not_empty;

        # PHP only, required if PHP was built with
--enable-force-cgi-redirect
        fastcgi_param   REDIRECT_STATUS     200;
        fastcgi_pass unix:/tmp/php-fpm.sock;
        fastcgi_index index.php;
        fastcgi_intercept_errors on; # to support 404s for PHP files not
found
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_read_timeout 600s;
    }

    location ~* \.(?:manifest|appcache|html?|xml|json)$ {
      add_header section "expires.conf:13";
      expires -1;
    }
}
The behavior is identical to as originally described. A valid response where the url is a file: $ curl -i http://nginx.dev/foo.json HTTP/1.1 200 OK Server: nginx/1.4.4 Date: Wed, 22 Jan 2014 19:28:34 GMT Content-Type: application/json Content-Length: 20 Last-Modified: Wed, 22 Jan 2014 18:01:51 GMT Connection: keep-alive ETag: "52e0078f-14" Expires: Wed, 22 Jan 2014 19:28:33 GMT Cache-Control: no-cache section: expires.conf:13 Accept-Ranges: bytes An invalid response when passed to php: $ curl -i http://nginx.dev/apples.json HTTP/1.1 404 Not Found Server: nginx/1.4.4 Date: Wed, 22 Jan 2014 19:28:40 GMT Content-Type: text/html Content-Length: 8 Connection: keep-alive ETag: "52dffeed-8" OH DEAR Am I missing something obvious or falling for a common misunderstanding? Is there a way to define location blocks for static files - without that causing problems for dynamic requests for the same url pattern? Cheers, AD [1] https://github.com/AD7six/server-configs-nginx/blob/location-debug/sites-available/nginx.dev Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246713,246718#msg-246718 From ru at nginx.com Wed Jan 22 20:06:55 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 23 Jan 2014 00:06:55 +0400 Subject: Logging $ssl_session_id can crash Nginx 1.5.9 worker In-Reply-To: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> References: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140122200655.GF70529@lo0.su> On Wed, Jan 22, 2014 at 02:02:51PM -0500, mnordhoff wrote: > I run the nginx.org mainline packages on Ubuntu 12.04, 32- and 64-bit. I use > a wacky custom log format, and after 1.5.9 was released today, I enabled > logging the $ssl_session_id variable. I later ran an SSL Labs SSL Server > Test, [0] which makes numerous HTTPS requests of various sorts, and lo and > behold, one of my worker processes core dumped. I fooled around with my > configuration and determined that the problem was logging $ssl_session_id. > If I don't log it, it's fine. If I enable a log that contains > $ssl_session_id -- even only $ssl_session_id -- it crashes. The following patch fixes this: diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2509,6 +2509,10 @@ ngx_ssl_get_session_id(ngx_connection_t sess = SSL_get0_session(c->ssl->connection); + if (sess == NULL) { + return NGX_ERROR; + } + buf = sess->session_id; len = sess->session_id_length; > Normal HTTPS requests -- well, I just tried curl and Firefox -- work fine. I > notice that curl does log a session ID, but for Firefox that field is just a > "-". I have no idea if that's an(other) Nginx bug or just a difference > between the two clients, but it smells funny to me. I briefly enabled > $ssl_session_id logging on 2013-12-17 with 1.5.7 or 1.5.8 and whatever > Firefox version was current at the time and it did log he session data. (I > never tried curl or SSL Labs then.) > > nginx.conf: (my logging nonsense is at the > end) > > Good and bad log snippets at and pasted > below. > > Bad: > > ==> /var/log/nginx/access13.log <== > 173.203.79.216 36823 "www.mn9.us" 192.155.93.101 443 "mn9.us" 586 1 on > TLSv1.2 - "-" [2014-01-22T18:05:54+00:00] "GET / HTTP/1.0" 200 975 612 - "-" > "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" > "ECDHE-RSA-AES128-SHA" "-" > 173.203.79.216 36876 "-" 192.155.93.101 443 "mn9.us" 594 1 on TLSv1 - "-" > [2014-01-22T18:06:06+00:00] "HEAD > /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 400 0 0 - > "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" > "DHE-RSA-AES128-SHA" "-" > > ==> /var/log/nginx/error.log <== > 2014/01/22 18:06:06 [notice] 24848#0: *595 SSL renegotiation disabled while > reading client request headers, client: 173.203.79.216, server: mn9.us, > request: "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show > HTTP/1.0" > 2014/01/22 18:06:07 [notice] 24681#0: signal 17 (SIGCHLD) received > 2014/01/22 18:06:07 [alert] 24681#0: worker process 24848 exited on signal > 11 (core dumped) > 2014/01/22 18:06:07 [notice] 24681#0: start worker process 26865 > 2014/01/22 18:06:07 [notice] 24681#0: signal 29 (SIGIO) received > > Good: > > ==> /var/log/nginx/access11.log <== > 173.203.79.216 60618 "www.mn9.us" 192.155.93.101 443 "mn9.us" 1003 1 on > TLSv1.2 - "-" [2014-01-22T18:27:09+00:00] "GET / HTTP/1.0" 200 975 612 - "-" > "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" > "ECDHE-RSA-AES128-SHA" > 173.203.79.216 60701 "-" 192.155.93.101 443 "mn9.us" 1008 1 on TLSv1 - "-" > [2014-01-22T18:27:22+00:00] "HEAD > /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 400 0 0 - > "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" > "DHE-RSA-AES128-SHA" > > ==> /var/log/nginx/error.log <== > 2014/01/22 18:27:22 [notice] 27156#0: *1009 SSL renegotiation disabled while > reading client request headers, client: 173.203.79.216, server: mn9.us, > request: "HEAD /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show > HTTP/1.0" > > ==> /var/log/nginx/access11.log <== > 173.203.79.216 60948 "-" 192.155.93.101 443 "mn9.us" 1009 1 on TLSv1.2 - "-" > [2014-01-22T18:27:22+00:00] "HEAD > /?SSL_Labs_Renegotiation_Test=User_Agent_May_Not_Show HTTP/1.0" 400 0 0 - > "-" "SSL Labs (https://www.ssllabs.com/about/assessment.html)" "-" "(NONE)" > > [0] From francis at daoine.org Wed Jan 22 20:36:36 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Jan 2014 20:36:36 +0000 Subject: Understanding location blocks and try files In-Reply-To: <83e6652f852a6ce67b069073f37cf9dc.NginxMailingListEnglish@forum.nginx.org> References: <83e6652f852a6ce67b069073f37cf9dc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140122203636.GY19804@craic.sysops.org> On Wed, Jan 22, 2014 at 02:54:35PM -0500, AD7six wrote: Hi there, > location ~ \.php$ { > location ~* \.(?:manifest|appcache|html?|xml|json)$ { > A valid response where the url is a file: > > $ curl -i http://nginx.dev/foo.json > HTTP/1.1 200 OK > An invalid response when passed to php: > > $ curl -i http://nginx.dev/apples.json > HTTP/1.1 404 Not Found Why do you think that this request is passed to php? > Am I missing something obvious or falling for a common misunderstanding? One request is handled in one location. The request "/apples.json" is handled in the second location above. Which says (by omission of any other directive) "serve it from the file system or return 404". The error_page for 404 is also handled in that location, and the appropriate file is found and returned. > Is there a way to define location blocks for static files - without that > causing problems for dynamic requests for the same url pattern? "location" matches the uri. "static file" or "dynamic request" is irrelevant at that point. So "no"; but also "probably", depending on what exactly you want to do. (The general suggestion in nginx is to use prefix-match locations at the top level, not regex-match ones.) f -- Francis Daly francis at daoine.org From thomas at glanzmann.de Wed Jan 22 20:48:00 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Wed, 22 Jan 2014 21:48:00 +0100 Subject: Implementing CONNECT in nginx In-Reply-To: <20140122151440.GB28425@glanzmann.de> References: <20140122151440.GB28425@glanzmann.de> Message-ID: <20140122204800.GC3978@glanzmann.de> Hello, * Thomas Glanzmann [2014-01-22 16:15]: > I would like to extend nginx with a CONNECT statement which connects to > a TCP socket. Could someone walk me through which source files I need to > modify and which fucntions I should have a look at? to answer my own question. The websocket implementation. Diff between 1.3.12 and 1.3.13 comes very close to what I'm looking for. Cheers, Thomas From contact at jpluscplusm.com Wed Jan 22 21:07:13 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Wed, 22 Jan 2014 21:07:13 +0000 Subject: Understanding location blocks and try files In-Reply-To: <20140122203636.GY19804@craic.sysops.org> References: <83e6652f852a6ce67b069073f37cf9dc.NginxMailingListEnglish@forum.nginx.org> <20140122203636.GY19804@craic.sysops.org> Message-ID: On 22 January 2014 20:36, Francis Daly wrote: > On Wed, Jan 22, 2014 at 02:54:35PM -0500, AD7six wrote: > > Hi there, > >> location ~ \.php$ { >> location ~* \.(?:manifest|appcache|html?|xml|json)$ { > >> A valid response where the url is a file: >> >> $ curl -i http://nginx.dev/foo.json >> HTTP/1.1 200 OK > >> An invalid response when passed to php: >> >> $ curl -i http://nginx.dev/apples.json >> HTTP/1.1 404 Not Found > > Why do you think that this request is passed to php? I /believe/ AD is thinking along these lines: * I have a server{} level "try_files", which goes $uri, $uri/, /index.php?$args; * When file.json is present in the server{}-level root, it should be served (and is) due to try_files trying the "$uri" setting first; * When file.json is /missing/, the try_files setting should then result in nginx falling back to the php location, which AD then expects to do something meaningful with this request ... ... and its this last step which isn't working as expected. I don't quite have the explanation or docs to hand to say why this won't work, but this SO page seems to have an interestingly un-up-voted answer at the bottom of the page: http://stackoverflow.com/questions/13138318/nginx-try-files-outside-location "You are probably under the delusion that try_files on server level must work for every request. Not at all. Quite the contrary, it works only for requests that match no location blocks." I'd be really interested to get confirmation that that statement is unequivocally true! Jonathan From vbart at nginx.com Wed Jan 22 21:30:02 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 23 Jan 2014 01:30:02 +0400 Subject: Understanding location blocks and try files In-Reply-To: References: <20140122203636.GY19804@craic.sysops.org> Message-ID: <3427575.TAMzRVmlay@vbart-laptop> On Wednesday 22 January 2014 21:07:13 Jonathan Matthews wrote: [..] > ... and its this last step which isn't working as expected. I don't > quite have the explanation or docs to hand to say why this won't work, > but this SO page seems to have an interestingly un-up-voted answer at > the bottom of the page: > http://stackoverflow.com/questions/13138318/nginx-try-files-outside-location > > "You are probably under the delusion that try_files on server level > must work for every request. Not at all. Quite the contrary, it works > only for requests that match no location blocks." > > I'd be really interested to get confirmation that that statement is > unequivocally true! > Well, actually I wrote this answer. So, I can confirm. =) You may also find confirmations by Igor and Maxim by searching through the mailing list. There is a bit of history: http://mailman.nginx.org/pipermail/nginx/2009-March/010749.html http://mailman.nginx.org/pipermail/nginx/2011-June/027502.html http://mailman.nginx.org/pipermail/nginx/2012-June/034389.html Changes with nginx 0.7.44 23 Mar 2009 *) Feature: the "try_files" directive is now allowed on the server block level. but, personally I think it was a bad decision that eventually resulted to a lot of confusion around. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Wed Jan 22 21:34:51 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Jan 2014 01:34:51 +0400 Subject: Logging $ssl_session_id can crash Nginx 1.5.9 worker In-Reply-To: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> References: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140122213451.GT1835@mdounin.ru> Hello! On Wed, Jan 22, 2014 at 02:02:51PM -0500, mnordhoff wrote: > I run the nginx.org mainline packages on Ubuntu 12.04, 32- and 64-bit. I use > a wacky custom log format, and after 1.5.9 was released today, I enabled > logging the $ssl_session_id variable. I later ran an SSL Labs SSL Server > Test, [0] which makes numerous HTTPS requests of various sorts, and lo and > behold, one of my worker processes core dumped. I fooled around with my > configuration and determined that the problem was logging $ssl_session_id. > If I don't log it, it's fine. If I enable a log that contains > $ssl_session_id -- even only $ssl_session_id -- it crashes. Thanks for the report, the following patch should fix it: --- a/src/event/ngx_event_openssl.c +++ b/src/event/ngx_event_openssl.c @@ -2508,6 +2508,10 @@ ngx_ssl_get_session_id(ngx_connection_t SSL_SESSION *sess; sess = SSL_get0_session(c->ssl->connection); + if (sess == NULL) { + s->len = 0; + return NGX_OK; + } buf = sess->session_id; len = sess->session_id_length; > Normal HTTPS requests -- well, I just tried curl and Firefox -- work fine. I > notice that curl does log a session ID, but for Firefox that field is just a > "-". I have no idea if that's an(other) Nginx bug or just a difference > between the two clients, but it smells funny to me. I briefly enabled That's normal, session id is expected to be empty, e.g., if session tickets are used. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Jan 22 21:48:58 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Jan 2014 01:48:58 +0400 Subject: Logging $ssl_session_id can crash Nginx 1.5.9 worker In-Reply-To: <20140122200655.GF70529@lo0.su> References: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> <20140122200655.GF70529@lo0.su> Message-ID: <20140122214857.GU1835@mdounin.ru> Hello! On Thu, Jan 23, 2014 at 12:06:55AM +0400, Ruslan Ermilov wrote: > On Wed, Jan 22, 2014 at 02:02:51PM -0500, mnordhoff wrote: > > I run the nginx.org mainline packages on Ubuntu 12.04, 32- and 64-bit. I use > > a wacky custom log format, and after 1.5.9 was released today, I enabled > > logging the $ssl_session_id variable. I later ran an SSL Labs SSL Server > > Test, [0] which makes numerous HTTPS requests of various sorts, and lo and > > behold, one of my worker processes core dumped. I fooled around with my > > configuration and determined that the problem was logging $ssl_session_id. > > If I don't log it, it's fine. If I enable a log that contains > > $ssl_session_id -- even only $ssl_session_id -- it crashes. > > The following patch fixes this: > > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > --- a/src/event/ngx_event_openssl.c > +++ b/src/event/ngx_event_openssl.c > @@ -2509,6 +2509,10 @@ ngx_ssl_get_session_id(ngx_connection_t > > sess = SSL_get0_session(c->ssl->connection); > > + if (sess == NULL) { > + return NGX_ERROR; > + } > + > buf = sess->session_id; > len = sess->session_id_length; You were faster. :) I think that len = 0 + NGX_OK is better than NGX_ERROR here though, and also in line with other similar functions like ngx_ssl_get_[raw_]certificate(). -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed Jan 22 22:18:48 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 22 Jan 2014 22:18:48 +0000 Subject: Understanding location blocks and try files In-Reply-To: References: <83e6652f852a6ce67b069073f37cf9dc.NginxMailingListEnglish@forum.nginx.org> <20140122203636.GY19804@craic.sysops.org> Message-ID: <20140122221848.GZ19804@craic.sysops.org> On Wed, Jan 22, 2014 at 09:07:13PM +0000, Jonathan Matthews wrote: > On 22 January 2014 20:36, Francis Daly wrote: > > On Wed, Jan 22, 2014 at 02:54:35PM -0500, AD7six wrote: Hi there, > >> location ~ \.php$ { > >> location ~* \.(?:manifest|appcache|html?|xml|json)$ { > >> An invalid response when passed to php: > >> > >> $ curl -i http://nginx.dev/apples.json > >> HTTP/1.1 404 Not Found > > > > Why do you think that this request is passed to php? > > I /believe/ AD is thinking along these lines: > > * I have a server{} level "try_files", which goes $uri, $uri/, /index.php?$args; That is correct. > * When file.json is present in the server{}-level root, it should be > served (and is) due to try_files trying the "$uri" setting first; That is incorrect. Either check the debug log, or add something like return 200 "location#2\n"; to the second location{} to get a better idea of what really happens. > * When file.json is /missing/, the try_files setting should then > result in nginx falling back to the php location, That is incorrect. Check the debug log; or make a request for /file.nomatch to see that kind of behaviour. > "You are probably under the delusion that try_files on server level > must work for every request. Not at all. Quite the contrary, it works > only for requests that match no location blocks." > > I'd be really interested to get confirmation that that statement is > unequivocally true! The true documentation is in the directory called "src". All else is interpretations :-) But if you don't want to accept someone else's word on it, the debug log is usually reliable; and adding different "return" messages to each location may help you figure out what nginx is doing. (Especially when you realise that rewrite-module directives are special.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Jan 23 06:57:28 2014 From: nginx-forum at nginx.us (DenisM) Date: Thu, 23 Jan 2014 01:57:28 -0500 Subject: What is the right place for sub_filter? Message-ID: <2165acb0a5eaeb4e24be4799127d9833.NginxMailingListEnglish@forum.nginx.org> Hi! My site runs php-cgi (via fastcgi) & nginx 1.4.4. I need to replace www.domain refs to www1.domain ones. Where to place sub_filter - in server{}, or "location /", or "location ~ .php$",..? sub_filter www.domain www1.domain; sub_filter_types *; I tried all these places w/o success. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246736,246736#msg-246736 From smainklh at free.fr Thu Jan 23 09:15:21 2014 From: smainklh at free.fr (smainklh at free.fr) Date: Thu, 23 Jan 2014 10:15:21 +0100 (CET) Subject: Count errors by http code In-Reply-To: <355207777.636113113.1390468349461.JavaMail.root@zimbra23-e3.priv.proxad.net> Message-ID: <323018766.636122168.1390468521041.JavaMail.root@zimbra23-e3.priv.proxad.net> Hello all, I guess this is not the first time you hear this question?: I would like to know if there's a way to setup counters on http codes. I know this is feasable with some kind log analyser like splunk or logstash/kibana but what i want is something running on a standalone server. A module ? a script ? It would be wonderfull if i could use it with collectd :) Thank you, Smana From ru at nginx.com Thu Jan 23 10:54:01 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 23 Jan 2014 14:54:01 +0400 Subject: Logging $ssl_session_id can crash Nginx 1.5.9 worker In-Reply-To: <20140122214857.GU1835@mdounin.ru> References: <955759c41ce3ad89b48b7b447804d988.NginxMailingListEnglish@forum.nginx.org> <20140122200655.GF70529@lo0.su> <20140122214857.GU1835@mdounin.ru> Message-ID: <20140123105401.GA86030@lo0.su> On Thu, Jan 23, 2014 at 01:48:58AM +0400, Maxim Dounin wrote: > On Thu, Jan 23, 2014 at 12:06:55AM +0400, Ruslan Ermilov wrote: > > > On Wed, Jan 22, 2014 at 02:02:51PM -0500, mnordhoff wrote: > > > I run the nginx.org mainline packages on Ubuntu 12.04, 32- and 64-bit. I use > > > a wacky custom log format, and after 1.5.9 was released today, I enabled > > > logging the $ssl_session_id variable. I later ran an SSL Labs SSL Server > > > Test, [0] which makes numerous HTTPS requests of various sorts, and lo and > > > behold, one of my worker processes core dumped. I fooled around with my > > > configuration and determined that the problem was logging $ssl_session_id. > > > If I don't log it, it's fine. If I enable a log that contains > > > $ssl_session_id -- even only $ssl_session_id -- it crashes. > > > > The following patch fixes this: > > > > diff --git a/src/event/ngx_event_openssl.c b/src/event/ngx_event_openssl.c > > --- a/src/event/ngx_event_openssl.c > > +++ b/src/event/ngx_event_openssl.c > > @@ -2509,6 +2509,10 @@ ngx_ssl_get_session_id(ngx_connection_t > > > > sess = SSL_get0_session(c->ssl->connection); > > > > + if (sess == NULL) { > > + return NGX_ERROR; > > + } > > + > > buf = sess->session_id; > > len = sess->session_id_length; > > You were faster. :) > I think that len = 0 + NGX_OK is better than NGX_ERROR here > though, and also in line with other similar functions like > ngx_ssl_get_[raw_]certificate(). It's also consistent with the previous behavior, so I agree. From mehta.pankaj at gmail.com Thu Jan 23 11:17:42 2014 From: mehta.pankaj at gmail.com (Pankaj Mehta) Date: Thu, 23 Jan 2014 11:17:42 +0000 Subject: SSL behaviour with multiple server blocks for same port Message-ID: Hi, I am struggling to get any documented reference for my problem in nginx docs. Hope someone can help before I delve into nginx code: I want to have multiple server blocks for the https port 443, they will serve different hostnames. Each block will have it's own ssl configuration. For example: server { listen 443 ssl server_name blah.xyz.com ssl protocols TLSv1 ssl_ciphers AES256-SHA:RC4-SHA; ssl_certificate /test/site1.cer; ssl_certificate_key /test/site1.key; ... } server { listen 443 ssl server_name blah.xyz.com ssl protocols TLSv1 ssl_ciphers AES256-SHA:RC4-SHA; ssl_certificate /test/site2.cer; ssl_certificate_key /test/site2.key; ... } These blocks have different ssl certificates. I understand that if I enable SNI in nginx and the client supports it, then we have a predictable behaviour where nginx will use the correct ssl parameters from the server block corresponding to that hostname. But I have no idea which ssl config will be picked up when the client does not support SNI. Is it the one that comes first? Also is the behaviour when SNI is disabled in nginx similar to when SNI is enabled in nginx but client doesn't support it? Is there a way in nginx to dump the active configs for a port? Thanks Pankaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Thu Jan 23 11:19:13 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 23 Jan 2014 15:19:13 +0400 Subject: Count errors by http code In-Reply-To: <323018766.636122168.1390468521041.JavaMail.root@zimbra23-e3.priv.proxad.net> References: <323018766.636122168.1390468521041.JavaMail.root@zimbra23-e3.priv.proxad.net> Message-ID: <2158649.8ftiIrxWdK@vbart-laptop> On Thursday 23 January 2014 10:15:21 smainklh at free.fr wrote: > Hello all, > > I guess this is not the first time you hear this question : > I would like to know if there's a way to setup counters on http codes. > > I know this is feasable with some kind log analyser like splunk or logstash/kibana > but what i want is something running on a standalone server. > A module ? a script ? > > It would be wonderfull if i could use it with collectd :) > [..] This functionality is available in Nginx Plus. See: http://nginx.org/en/docs/http/ngx_http_status_module.html#status_zone (server_zones -> responses) wbr, Valentin V. Bartenev From mdounin at mdounin.ru Thu Jan 23 11:49:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 23 Jan 2014 15:49:04 +0400 Subject: SSL behaviour with multiple server blocks for same port In-Reply-To: References: Message-ID: <20140123114904.GW1835@mdounin.ru> Hello! On Thu, Jan 23, 2014 at 11:17:42AM +0000, Pankaj Mehta wrote: > Hi, > > I am struggling to get any documented reference for my problem in nginx > docs. Hope someone can help before I delve into nginx code: > > I want to have multiple server blocks for the https port 443, they will > serve different hostnames. Each block will have it's own ssl configuration. > For example: > > server { > listen 443 ssl > server_name blah.xyz.com > > ssl protocols TLSv1 > ssl_ciphers AES256-SHA:RC4-SHA; > ssl_certificate /test/site1.cer; > ssl_certificate_key /test/site1.key; > ... > } > > server { > listen 443 ssl > server_name blah.xyz.com > > ssl protocols TLSv1 > ssl_ciphers AES256-SHA:RC4-SHA; > ssl_certificate /test/site2.cer; > ssl_certificate_key /test/site2.key; > ... > } > > These blocks have different ssl certificates. I understand that if I enable > SNI in nginx and the client supports it, then we have a predictable > behaviour where nginx will use the correct ssl parameters from the server > block corresponding to that hostname. But I have no idea which ssl config > will be picked up when the client does not support SNI. Is it the one that > comes first? http://nginx.org/r/listen Quote: The default_server parameter, if present, will cause the server to become the default server for the specified address:port pair. If none of the directives have the default_server parameter then the first server with the address:port pair will be the default server for this pair. > Also is the behaviour when SNI is disabled in nginx similar to > when SNI is enabled in nginx but client doesn't support it? Yes. > Is there a way in nginx to dump the active configs for a port? No. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jan 23 11:55:04 2014 From: nginx-forum at nginx.us (AD7six) Date: Thu, 23 Jan 2014 06:55:04 -0500 Subject: Understanding location blocks and try files In-Reply-To: <20140122221848.GZ19804@craic.sysops.org> References: <20140122221848.GZ19804@craic.sysops.org> Message-ID: Hey, thanks for the responses! Indeed your input has been very insightful, some follow ups: @Jonathan/@Valentin Your analysis is about right, the linked prior posts on the mailing list were very informative. @Francis > > An invalid response when passed to php: > > >$ curl -i http://nginx.dev/apples.json > > HTTP/1.1 404 Not Found > Why do you think that this request is passed to php? Because I (mis)read the debug log =) (linked in the first post). Re-reading it I was looking at the log for a different response. > >Is there a way to define location blocks for static files - without that > >causing problems for dynamic requests for the same url pattern? > > "location" matches the uri. "static file" or "dynamic request" is irrelevant at that point. Allow me to rephrase: Is there a way to define rules for static files - without that causing problems for dynamic requests matching the same url pattern? Nginx docs are full of warnings about If is evil and advocates using try_files instead - yet doing that is not functionally/logically equivalent, which leaves me kind of stuck. > Either check the debug log, or add something like > return 200 "location#2\n"; I've been using `add_header section "file:line";` for this purpose - a useful technique imo. My conclusion at this time is that it's not possible to define rules for static files without adding `try_files` to all location blocks; or by using `if (!-f $request_filename) {` or by adding a nested location block e.g.: location ~ ^/(?:apple-touch-icon-precomposed[^/]*.png$|crossdomain.xml$|favicon.ico$|css/|files/|fonts/|img/|js/) { require static-files.conf; } I did also try using the front controller as a 404 handler which worked exactly how I am trying to get things to work _except_ of course everything from the front controller is a http 404. Is there a way to prevent the http response code being set..? Cheers, AD ---- Further info =) I feel I should give some context as to what I'm trying to do in general so here goes: I use nginx for everything, as do many of my peers and colleagues - we're typically backend developers who simply appreciate performance. However, I also use and contribute to http://html5boilerplate.com / https://github.com/h5bp/html5-boilerplate - which many frontend developers use as the basis for the projects. Most frontend developers are familiar with and use apache - their combined knowledge distilled into https://github.com/h5bp/html5-boilerplate/blob/master/.htaccess - knowledge which I'd like to apply by default to the use of nginx. There is an nginx version of the apache rules here https://github.com/h5bp/server-configs-nginx which I'd really like to get into a position to point at - right now it's kind of dis-functional in part because of the problem I'm trying to solve in this thread. One of the goals of asking here was to eventually achieve a "just include this config file" solution to optimize frontend performance. This post so far leads to the conclusion that's not possible and an application-specific config file is required for all apps. An example of the kind of apache rule I'd like to emulate with nginx is: Header set Access-Control-Allow-Origin "*" Here's a similar example as a further reference: http://www.servermom.org/setup-nginx-virtual-host-for-wordpress-with-wp-super-cache/262/ (I don't use wordpress but it serves the purpose as an example) The suggestion there is to include the following: # Cache static files for as long as possible location ~* .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { expires max; log_not_found off; access_log off; } Which would mean any request for those file types which is intended to be handled by wordpress - fails. it also means you can't optimize static requests _and_ do "funky caching" or similar by which I mean handling the first request for something that doesn't exist, and caching the result in a path such that the next and all subsequent requests are served by the webserver: References: <20140123114904.GW1835@mdounin.ru> Message-ID: <7A091C14-06D1-4FFE-847B-E3710B1B1C4F@postfach.slogh.com> > On Thu, Jan 23, 2014 at 11:17:42AM +0000, Pankaj Mehta wrote: Hi, > These blocks have different ssl certificates. I understand that if I enable > SNI in nginx and the client supports it, then we have a predictable > behaviour where nginx will use the correct ssl parameters from the server > block corresponding to that hostname. But I have no idea which ssl config One thing I became painfully aware of last time is that when you use SSL-enabled server blocks with SNI, a listen directive from one block may overwrite the listen directive from another one. For example, when I have: server { listen 443 ssl; server_name www.host1.com; ... } and server { listen 443 ssl spdy; server_name www.host2.com; ... } Even though the listen directive for server block of www.host1.com does not define SPDY, it accepts SPDY connections as well. In other words, if you want to disable SPDY, you'd have to make sure that it doesn't appear in any server block listen directive (assuming you're using SNI rather than dedicated IPs). I am not sure if this behavior can be avoided. nginx advertises spdy/2 via the NPN TLS extension. During the TLS handshake, would it be possible to first parse the hostname the client is attempting to connect (SNI), and only then decide whether to advertise SPDY via NPN or not depending on the hostname's listen directive? Alex From francis at daoine.org Fri Jan 24 01:41:44 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 24 Jan 2014 01:41:44 +0000 Subject: Understanding location blocks and try files In-Reply-To: References: <20140122221848.GZ19804@craic.sysops.org> Message-ID: <20140124014144.GC19804@craic.sysops.org> On Thu, Jan 23, 2014 at 06:55:04AM -0500, AD7six wrote: Hi there, > Allow me to rephrase: Is there a way to define rules for static files - > without that causing problems for dynamic requests matching the same url > pattern? Asking the question that way, the answer is probably "no". Whatever rules you write are based on url patterns (prefix or regex). nginx does not know whether files are involved until you tell it to look. If you want a completely generic (= arbitrary) uri/file layout, you probably won't find an nginx config you can drop in and have Just Work. > Nginx docs are full of warnings about If is evil and advocates using > try_files instead - yet doing that is not functionally/logically equivalent, > which leaves me kind of stuck. "If is evil" inside a location. You tested try_files outside a location. That might make a difference. > I did also try using the front controller as a 404 handler which worked > exactly how I am trying to get things to work _except_ of course everything > from the front controller is a http 404. Why "of course"? http://nginx.org/r/error_page Might the suggestions in and around http://forum.nginx.org/read.php?2,244818,244886 offer a hint to what might work for you? > One of the goals > of asking here was to eventually achieve a "just include this config file" > solution to optimize frontend performance. This post so far leads to the > conclusion that's not possible and an application-specific config file is > required for all apps. I suspect that that's pretty much required, based on the idea that generic = slow, specific = fast; and nginx being built to be fast. Also "optimize" can mean different things, depending on what specifically is to be improved at the expense of what else. > An example of the kind of apache rule I'd like to emulate with nginx is: > > > Header set Access-Control-Allow-Origin "*" > nginx does not have any equivalent to apache FilesMatch, which is approximately "when any filename which matches this pattern is served, apply this configuration as well". You may be able to get close, using nginx configuration. But it may not be close enough for you to be happy with. > The suggestion there is to include the following: > > # Cache static files for as long as possible > location ~* > .(ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ > { > expires max; log_not_found off; access_log off; > } > > Which would mean any request for those file types which is intended to be > handled by wordpress - fails. That configuration says "all requests that match these patterns (apart from those that match a different location) are files to be served from the filesystem". If you have a request that matches this location that is *not* a file to be served from the filesystem, that configuration is not the one you want to use. (Use, for example, "location ^~ /wordpress/" and a request for /wordpress/one.ogg will not match the regex location above.) (Or use "error_page 404" to decide what to do about "file" requests which are missing.) > it also means you can't optimize static requests _and_ do "funky caching" or > similar by which I mean handling the first request for something that > doesn't exist, and caching the result in a path such that the next and all > subsequent requests are served by the webserver: fastcgi_cache? Or try_files? I suspect I'm missing something obvious, but why can't you do this? If you want the effect of "expires max", why do you care whether you are serving from file, from cache, or from php directly? Only if it matters: could you describe how you want nginx to handle different requests, and how it should know to do what you want? Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri Jan 24 04:33:56 2014 From: nginx-forum at nginx.us (MasterMind) Date: Thu, 23 Jan 2014 23:33:56 -0500 Subject: Images Aren't Displaying When Perl Interpreter Is Enabled In-Reply-To: References: Message-ID: <9fed241125ffa6b8bd9f0d843051cb77.NginxMailingListEnglish@forum.nginx.org> That's not really possible since the awstats is password protected. But find below an image of what im seeing and an image of what it should look like: Broken: https://dl.dropboxusercontent.com/u/18722727/awstats.png What it should look like: https://dl.dropboxusercontent.com/u/18722727/awstats2.png Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246500,246769#msg-246769 From nginx-forum at nginx.us Fri Jan 24 04:35:44 2014 From: nginx-forum at nginx.us (humank) Date: Thu, 23 Jan 2014 23:35:44 -0500 Subject: How can i compile nginx with specific lib linkage Message-ID: <4b40f3a9a8da23f6187360c686275b51.NginxMailingListEnglish@forum.nginx.org> I have customized a new nginx add on module,and the module use an external lib ( mytest.so). The method i called in mytest.so is kimtest method. How can i configure the ./configure file and Makefile to prevent the error " undefined reference to `kimtest' " Here is the configure file.. auto/configure --with-debug --with-ld-opt="-L/usr/local/lib64" --add-module=./src/my_nginx_module while configuring .. everything seems ok, no error occurs. but while making ... " undefined reference to `kimtest' " Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246770,246770#msg-246770 From nginx-forum at nginx.us Fri Jan 24 04:45:11 2014 From: nginx-forum at nginx.us (MasterMind) Date: Thu, 23 Jan 2014 23:45:11 -0500 Subject: Images Aren't Displaying When Perl Interpreter Is Enabled In-Reply-To: <9fed241125ffa6b8bd9f0d843051cb77.NginxMailingListEnglish@forum.nginx.org> References: <9fed241125ffa6b8bd9f0d843051cb77.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3871d7fec2148d85723f1c5018546319.NginxMailingListEnglish@forum.nginx.org> I removed the block referrer images section and index line from the config because i thought that might be causing it and it didnt fix it, but it did change the error message logs are showing (i have no idea how to fix this): 2014/01/23 23:39:07 [error] 5372#0: *31 rewrite or internal redirection cycle while internally redirecting to "/index.html", client: x.x.x.x, server: xx.firedaemon.com, request: "GET /icon/other/vv.png HTTP/1.1", host: "xx.firedaemon.com", referrer: "https://xx.firedaemon.com/awstats.pl?config=forums.firedaemon.com&framename=mainright" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246500,246771#msg-246771 From francis at daoine.org Fri Jan 24 08:58:41 2014 From: francis at daoine.org (Francis Daly) Date: Fri, 24 Jan 2014 08:58:41 +0000 Subject: Images Aren't Displaying When Perl Interpreter Is Enabled In-Reply-To: <9fed241125ffa6b8bd9f0d843051cb77.NginxMailingListEnglish@forum.nginx.org> References: <9fed241125ffa6b8bd9f0d843051cb77.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140124085841.GD19804@craic.sysops.org> On Thu, Jan 23, 2014 at 11:33:56PM -0500, MasterMind wrote: Hi there, > That's not really possible since the awstats is password protected. I think the question wasn't intended to be "can I have access to your private site", but was rather "what is the url of an image request which does not respond the way you want it to". Your config has three location blocks: location / { location /icon/ { location ~ \.pl$ { Which one is used for the image request? Which one do you want to be used for the image request? And, which file on the filesystem corresponds to the image you are requesting? f -- Francis Daly francis at daoine.org From mehta.pankaj at gmail.com Fri Jan 24 11:22:49 2014 From: mehta.pankaj at gmail.com (Pankaj Mehta) Date: Fri, 24 Jan 2014 11:22:49 +0000 Subject: SSL behaviour with multiple server blocks for same port In-Reply-To: <20140123114904.GW1835@mdounin.ru> References: <20140123114904.GW1835@mdounin.ru> Message-ID: Thanks Maxim, very helpful. Pankaj On 23 January 2014 11:49, Maxim Dounin wrote: > Hello! > > On Thu, Jan 23, 2014 at 11:17:42AM +0000, Pankaj Mehta wrote: > > > Hi, > > > > I am struggling to get any documented reference for my problem in nginx > > docs. Hope someone can help before I delve into nginx code: > > > > I want to have multiple server blocks for the https port 443, they will > > serve different hostnames. Each block will have it's own ssl > configuration. > > For example: > > > > server { > > listen 443 ssl > > server_name blah.xyz.com > > > > ssl protocols TLSv1 > > ssl_ciphers AES256-SHA:RC4-SHA; > > ssl_certificate /test/site1.cer; > > ssl_certificate_key /test/site1.key; > > ... > > } > > > > server { > > listen 443 ssl > > server_name blah.xyz.com > > > > ssl protocols TLSv1 > > ssl_ciphers AES256-SHA:RC4-SHA; > > ssl_certificate /test/site2.cer; > > ssl_certificate_key /test/site2.key; > > ... > > } > > > > These blocks have different ssl certificates. I understand that if I > enable > > SNI in nginx and the client supports it, then we have a predictable > > behaviour where nginx will use the correct ssl parameters from the server > > block corresponding to that hostname. But I have no idea which ssl config > > will be picked up when the client does not support SNI. Is it the one > that > > comes first? > > http://nginx.org/r/listen > > Quote: > > The default_server parameter, if present, will cause the server to > become the default server for the specified address:port pair. If > none of the directives have the default_server parameter then the > first server with the address:port pair will be the default server > for this pair. > > > Also is the behaviour when SNI is disabled in nginx similar to > > when SNI is enabled in nginx but client doesn't support it? > > Yes. > > > Is there a way in nginx to dump the active configs for a port? > > No. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Jan 24 14:43:55 2014 From: nginx-forum at nginx.us (Shohreh) Date: Fri, 24 Jan 2014 09:43:55 -0500 Subject: [OpenResty] How to start Nginx? Message-ID: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> Hello Using OpenResty, I compiled and installed a Lua-capable Nginx in /tmp so I could experiment with it before replacing the current Nginx that was installed through apt-get. However, since files are located in non-standard locations, Nginx can't find them: ==================================== /tmp/ngx_openresty-1.4.3.6/install/usr/local/openresty/nginx/sbin# ./nginx ./nginx: error while loading shared libraries: libluajit-5.1.so.2: cannot open shared object file: No such file or directory ==================================== FYI, here's what tmp/ngx_openresty-1.4.3.6/install/usr/local/openresty/ contains: ==================================== drwxr-xr-x 6 root root 4096 Jan 21 16:01 luajit/ drwxr-xr-x 5 root root 4096 Jan 21 16:03 lualib/ drwxr-xr-x 6 root root 4096 Jan 21 16:01 nginx/ ==================================== How should configure Debian so that Nginx find the files it needs? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246792,246792#msg-246792 From r at roze.lv Fri Jan 24 14:55:23 2014 From: r at roze.lv (Reinis Rozitis) Date: Fri, 24 Jan 2014 16:55:23 +0200 Subject: [OpenResty] How to start Nginx? In-Reply-To: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> References: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> Message-ID: <54D5C71BD8414A82822B87CAF55162DB@MasterPC> > ./nginx: error while loading shared libraries: libluajit-5.1.so.2: cannot > open shared object file: No such file or directory > How should configure Debian so that Nginx find the files it needs? You can always write 'ldd nginx' (also for every other executable on linux) to see where the librariers (*.so) should be located (it will indicate 'not found' and display path). rr From mdounin at mdounin.ru Fri Jan 24 15:48:42 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 24 Jan 2014 19:48:42 +0400 Subject: How can i compile nginx with specific lib linkage In-Reply-To: <4b40f3a9a8da23f6187360c686275b51.NginxMailingListEnglish@forum.nginx.org> References: <4b40f3a9a8da23f6187360c686275b51.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140124154842.GP1835@mdounin.ru> Hello! On Thu, Jan 23, 2014 at 11:35:44PM -0500, humank wrote: > I have customized a new nginx add on module,and the module use an external > lib ( mytest.so). > The method i called in mytest.so is kimtest method. > How can i configure the ./configure file and Makefile to prevent the error " > undefined reference to `kimtest' " > > Here is the configure file.. > > auto/configure --with-debug --with-ld-opt="-L/usr/local/lib64" > --add-module=./src/my_nginx_module > > while configuring .. everything seems ok, no error occurs. > but while making ... " undefined reference to `kimtest' " Something like CORE_LIBS="$CORE_LIBS -lmytest" in your module config file should do the trick. -- Maxim Dounin http://nginx.org/ From jim at ohlste.in Fri Jan 24 15:52:05 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Fri, 24 Jan 2014 10:52:05 -0500 Subject: [OpenResty] How to start Nginx? In-Reply-To: <54D5C71BD8414A82822B87CAF55162DB@MasterPC> References: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> <54D5C71BD8414A82822B87CAF55162DB@MasterPC> Message-ID: <52E28C25.9050000@ohlste.in> On 1/24/14, 9:55 AM, Reinis Rozitis wrote: >> ./nginx: error while loading shared libraries: libluajit-5.1.so.2: >> cannot open shared object file: No such file or directory >> How should configure Debian so that Nginx find the files it needs? > > You can always write 'ldd nginx' (also for every other executable on > linux) to see where the librariers (*.so) should be located (it will > indicate 'not found' and display path). > Perhaps Yichun Zhang (agentzh) will have a simpler approach, but you can also consider setting up a virtual machine using VirtualBox or some other program, and testing it that way. Once you are convinced that it behaves as expected for your use case, you can install it and try it in production. -- Jim Ohlstein From nginx-forum at nginx.us Fri Jan 24 16:18:07 2014 From: nginx-forum at nginx.us (AD7six) Date: Fri, 24 Jan 2014 11:18:07 -0500 Subject: Understanding location blocks and try files In-Reply-To: <20140124014144.GC19804@craic.sysops.org> References: <20140124014144.GC19804@craic.sysops.org> Message-ID: <6575837190be9f1669eab6089f553dd8.NginxMailingListEnglish@forum.nginx.org> > > I did also try using the front controller as a 404 handler which worked > > exactly how I am trying to get things to work _except_ of course everything > > from the front controller is a http 404. > Why "of course"? > http://nginx.org/r/error_page I said "of course" as when processed as a 404 - it didn't seem possible to modify the http response code. So all content was served as desired BUT as a 404, however I'd missed this in the docs: > If an error response is processed by a proxied server or a FastCGI server, and the server may return different response codes (e.g., 200, 302, 401 or 404), it is possible to respond with the code it returns: > error_page 404 = /404.php; Therefore using the following config: server { listen 80; server_name nginx.dev *.nginx.dev; try_files $uri $uri.html; error_page 404 = @app; <- root /etc/nginx/www; index index.php index.html index.htm; send_timeout 600s; location @app { fastcgi_param QUERY_STRING $query_string; ... add_header section "app"; <- } location ~* \.(?:manifest|appcache|html?|xml|json)$ { expires -1; add_header section "static json files"; <- } } Gives the folllowing results: $ curl -I http://nginx.dev/doesnotexist.json HTTP/1.1 200 OK Server: nginx/1.4.4 Date: Fri, 24 Jan 2014 15:39:19 GMT Content-Type: text/html Connection: keep-alive Vary: Accept-Encoding section: app <- $ curl -I http://nginx.dev/static.json HTTP/1.1 200 OK Server: nginx/1.4.4 Date: Fri, 24 Jan 2014 15:40:25 GMT Content-Type: application/json Content-Length: 20 Last-Modified: Wed, 22 Jan 2014 18:01:51 GMT Connection: keep-alive ETag: "52e0078f-14" Expires: Fri, 24 Jan 2014 15:40:24 GMT Cache-Control: no-cache section: static json files <- Accept-Ranges: bytes Which is _exactly_ what I'm aiming for, it means it's possible to define rules for static files without the risk of dynamically served content with the same url pattern being inaccessible. > I suspect that that's pretty much required, based on the idea that generic = slow, specific = fast; and nginx being built to be fast. Optimizing frontend performance is pretty easy, doesn't require detailed analysis and has signifiant benefits (especially for mobile devices). To a certain extent it doesn't matter if the webserver is fast if it is having to serve more work because of unoptimized frontend performance (or though the absense of other headers meaning the application simply doesn't work). It should be an obvious and easy step to optimize the serving of an application's static files rather than (as it had seemed right up until your last post) an arduous cumbersome fragile process which can lead to applications breaking. It doesn't matter if an image is /here.png /over/here.png or /way/way/over/here.png - if it's a static file it should be served with appropriate cache headers. needing to wrap rules for static files in all locations where they may occur is in many cases not viable - since even with a structured application the files may be in sufficiently varied locations to always overlap with dynamic requests. > I suspect I'm missing something obvious, but why can't you do this? If you want the effect of "expires max", why do you care whether you are serving from file, from cache, or from php directly? Serving files with php is slower, depending on exactly what it's doing possibly a lot slower, than letting the webserver do it. At the very least it means reimplementing cache header handling and partial responses in the application layer. This is not a point to be taken lightly - I'd go as far as to say doing that when you don't need to is a flat out bad idea. Here is an example where the same url may or may not be a static file http://book.cakephp.org/2.0/en/views/themes.html#increasing-performance-of-plugin-and-theme-assets That's a pattern which is used by many (most?) web frameworks - whereby static files are used if they exist but handled transparently if they do not. If it could only be static OR dynamic that would mean one of: 1) The file must always be served by php with specific headers. 2) it is impossible to generate the file dynamically, but if it exists it has specific headers. 3) It is possible to generate the file dynamically, but will have no specific headers once generated. 4) Additional server config is required to cache the response from php (hadn't really thought of that - but that's quite cumbersome) None of which are attractive. But now, that's all moot =). Thanks very much for your help identifying that a dynamic response can be used as a 404 handler _and_ define the response code. I may propose a change to make that more obvious to future readers/users. Regards, AD Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246713,246797#msg-246797 From semenukha at gmail.com Fri Jan 24 18:59:12 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Fri, 24 Jan 2014 13:59:12 -0500 Subject: [OpenResty] How to start Nginx? In-Reply-To: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> References: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7440330.k7X8TbizWA@tornado> Also, in Debian you can use apt-file to find the necessary lib: aptitude install apt-file && apt-file update apt-file search libluajit or apt-file search -x 'libluajit.*\.so$' On Friday, January 24, 2014 09:43:55 AM Shohreh wrote: > Hello > > Using OpenResty, I compiled and installed a Lua-capable Nginx in /tmp so I > could experiment with it before replacing the current Nginx that was > installed through apt-get. > > However, since files are located in non-standard locations, Nginx can't find > them: > ==================================== > /tmp/ngx_openresty-1.4.3.6/install/usr/local/openresty/nginx/sbin# ./nginx > ./nginx: error while loading shared libraries: libluajit-5.1.so.2: cannot > open shared object file: No such file or directory > ==================================== > > FYI, here's what tmp/ngx_openresty-1.4.3.6/install/usr/local/openresty/ > contains: > ==================================== > drwxr-xr-x 6 root root 4096 Jan 21 16:01 luajit/ > drwxr-xr-x 5 root root 4096 Jan 21 16:03 lualib/ > drwxr-xr-x 6 root root 4096 Jan 21 16:01 nginx/ > ==================================== > > How should configure Debian so that Nginx find the files it needs? > > Thank you. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246792,246792#msg-246792 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Best regards, Styopa Semenukha. From agentzh at gmail.com Fri Jan 24 19:48:10 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Fri, 24 Jan 2014 11:48:10 -0800 Subject: [OpenResty] How to start Nginx? In-Reply-To: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> References: <0ba222b9b40586f755b591e4785b1610.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Fri, Jan 24, 2014 at 6:43 AM, Shohreh wrote: > Using OpenResty, I compiled and installed a Lua-capable Nginx in /tmp so I > could experiment with it before replacing the current Nginx that was > installed through apt-get. > > However, since files are located in non-standard locations, Nginx can't find > them: You should use the command ./configure --prefix=/tmp/openresty --with-luajit to build your openresty. You cannot move already installed openresty directory tree to other places because we use RPATH in the "nginx" executable file's header to locate dynamic libraries (like the libluajit-5.1.so.2 file). Regards, -agentzh From jeroen.ooms at stat.ucla.edu Fri Jan 24 20:45:35 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Fri, 24 Jan 2014 12:45:35 -0800 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) Message-ID: I use nginx to cache both GET and POST requests. I want to use proxy_cache_bypass to allow users to bypass the cache, but ONLY for GET requests. POST requests should always be cached. I tried this: map $request_method $is_get { default: ""; GET "true"; } proxy_cache_methods POST; proxy_cache_bypass $http_cache_control $is_get; However, this bypasses the cache when either $http_cache_control OR $is_get is set. How can I achieve to set proxy_cache_bypass when both http_cache_control AND $is_get are set? From reallfqq-nginx at yahoo.fr Fri Jan 24 21:04:45 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 24 Jan 2014 22:04:45 +0100 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) In-Reply-To: References: Message-ID: Hello, On Fri, Jan 24, 2014 at 9:45 PM, Jeroen Ooms wrote: > However, this bypasses the cache when either $http_cache_control OR > $is_get is set. How can I achieve to set proxy_cache_bypass when both > http_cache_control AND $is_get are set? > ?The logic you wish imply using a single variable in proxy_cache_bypass which is set if and only if both $http_cache_control and $_is_get are set. Does the following ?work? map $request_method $is_get { default: ""; GET "true"; } set $bypass "true"; # Defaults to true # Each map then deactivates the $bypass variable if one # of the variables linked with the AND logic is empty map $http_cache_control $bypass { default: ""; "": $bypass; } map $is_get $bypass { default: ""; "": $bypass; } proxy_cache_methods POST; ?????proxy_cache_bypass $bypass; --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeroen.ooms at stat.ucla.edu Fri Jan 24 23:33:39 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Fri, 24 Jan 2014 15:33:39 -0800 Subject: proxy_cache_methods OPTIONS; Message-ID: Is it possible to cache the OPTIONS method? This pages gives exactly that example: http://www.packtpub.com/article/nginx-proxy proxy_cache_methods OPTIONS; However, when I try this, nginx writes in the error log: [warn] 7243#0: invalid value "OPTIONS" in ... From jeroen.ooms at stat.ucla.edu Sat Jan 25 02:40:27 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Fri, 24 Jan 2014 18:40:27 -0800 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) In-Reply-To: References: Message-ID: On Fri, Jan 24, 2014 at 1:04 PM, B.R. wrote: > Does the following work? This looks like a fragile solution. You're basically simulating an "if", but I don't think we should assume that nginx will resolve all maps in the defined order, as would be using "if". The nginx documentation for HttpMapModule says: "The map directive creates the variable, but only performs the mapping operation when (and if) the variable is accessed." In your solution, $bypass is already set to "true" a-priori, and also defined in two maps. I doubt nginx will resolve those maps, in the right order, to arrive at the desired value of $bypass. Maybe someone from the nginx team can comment if this is a viable solution? From appa at perusio.net Sat Jan 25 02:57:41 2014 From: appa at perusio.net (=?ISO-8859-1?Q?Ant=F3nio_P=2E_P=2E_Almeida?=) Date: Sat, 25 Jan 2014 03:57:41 +0100 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) In-Reply-To: References: Message-ID: map $http_cache_control$request_method $no_cache { default 0; ~^.+GET$ 1; } proxy_cache_methods POST; proxy_cache_bypass $no_cache; proxy_no_cache $no_cache; ----appa ---------- Forwarded message ---------- From: Jeroen Ooms Date: Fri, Jan 24, 2014 at 9:45 PM Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) To: nginx at nginx.org I use nginx to cache both GET and POST requests. I want to use proxy_cache_bypass to allow users to bypass the cache, but ONLY for GET requests. POST requests should always be cached. I tried this: map $request_method $is_get { default: ""; GET "true"; } proxy_cache_methods POST; proxy_cache_bypass $http_cache_control $is_get; However, this bypasses the cache when either $http_cache_control OR $is_get is set. How can I achieve to set proxy_cache_bypass when both http_cache_control AND $is_get are set? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Sat Jan 25 03:03:32 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sat, 25 Jan 2014 04:03:32 +0100 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) In-Reply-To: References: Message-ID: Hello, On Sat, Jan 25, 2014 at 3:40 AM, Jeroen Ooms wrote: > This looks like a fragile solution. You're basically simulating an > "if", but I don't think we should assume that nginx will resolve all > maps in the defined order, as would be using "if". > > ?*snip* > > Maybe someone from the nginx team can comment if this is a viable solution? > ?You explicited clearly you wanted to avoid if/and logic in your message subject (one could wonder why since there appears to be no other trivial solution)... In the end, since proxy_cache_bypass doc clearly state that it works based on an OR logic, what you wish won't happen magically. With those ?conditions set, I hardly see something that won't look edgy... ?Maybe someone else could help you better.? Good luck, --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From someukdeveloper at gmail.com Sat Jan 25 04:03:51 2014 From: someukdeveloper at gmail.com (Some Developer) Date: Sat, 25 Jan 2014 04:03:51 +0000 Subject: X-Frame-Options: Nginx includes header twice Message-ID: <52E337A7.7000605@googlemail.com> I'm running Nginx 1.4.4 on Ubuntu 12.04 and have added the X-Frame-Options header for one of my sites but in testing it appears that Nginx includes this itself in addition to user configured headers. Basically I want X-Frame-Options to be DENY but when I set that header Nginx also sends an X-Frame-Options SAMEORIGIN header so that there are two X-Frame-Options headers in every request. Is there some way to disable the extra header? I can't find anything in my configuration that would add the second header. -------------- next part -------------- An HTML attachment was scrubbed... URL: From artemrts at ukr.net Sat Jan 25 07:42:09 2014 From: artemrts at ukr.net (wishmaster) Date: Sat, 25 Jan 2014 09:42:09 +0200 Subject: proxy_cache_methods OPTIONS; In-Reply-To: References: Message-ID: <1390635644.224025557.yfo6sojq@frv34.ukr.net> I think you should use official site. http://nginx.org/en/docs/http/ngx_http_proxy_module.html What is your proxy_cache_methods value? --- Original message --- From: "Jeroen Ooms" Date: 25 January 2014, 01:34:11 > Is it possible to cache the OPTIONS method? This pages gives exactly > that example: http://www.packtpub.com/article/nginx-proxy > > proxy_cache_methods OPTIONS; > > However, when I try this, nginx writes in the error log: > > [warn] 7243#0: invalid value "OPTIONS" in ... > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From artemrts at ukr.net Sat Jan 25 07:51:25 2014 From: artemrts at ukr.net (wishmaster) Date: Sat, 25 Jan 2014 09:51:25 +0200 Subject: X-Frame-Options: Nginx includes header twice In-Reply-To: <52E337A7.7000605@googlemail.com> References: <52E337A7.7000605@googlemail.com> Message-ID: <1390635840.744859415.wjikci0j@frv34.ukr.net> --- Original message --- From: "Some Developer" Date: 25 January 2014, 06:04:10 > I'm running Nginx 1.4.4 on Ubuntu 12.04 and have added the X-Frame-Options header for one of my sites but in testing it appears that Nginx includes this itself in addition to user configured headers. Basically I want X-Frame-Options to be DENY but when I set that header Nginx also sends an X-Frame-Options SAMEORIGIN header so that there are two X-Frame-Options headers in every request. > > Is there some way to disable the extra header? I can't find anything in my configuration that would add the second header. May by this is the header, has been set by your php-application? You can remove this with help of module http://wiki.nginx.org/HttpHeadersMoreModule From kolbyjack at gmail.com Sat Jan 25 13:24:25 2014 From: kolbyjack at gmail.com (Jonathan Kolb) Date: Sat, 25 Jan 2014 08:24:25 -0500 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) In-Reply-To: References: Message-ID: You can chain two maps to get a logical and: map $request_method $is_get { default 0; GET 1; } map $http_cache_bypass $bypass_cache { default $is_get; "" 0; } proxy_cache_methods POST; proxy_cache_bypass $bypass_cache; # note the lack of : after default in the maps, it's incorrect to have it there like your original map did On Fri, Jan 24, 2014 at 10:03 PM, B.R. wrote: > Hello, > > On Sat, Jan 25, 2014 at 3:40 AM, Jeroen Ooms wrote: > >> This looks like a fragile solution. You're basically simulating an >> "if", but I don't think we should assume that nginx will resolve all >> maps in the defined order, as would be using "if". >> >> *snip* >> >> Maybe someone from the nginx team can comment if this is a viable >> solution? >> > > You explicited clearly you wanted to avoid if/and logic in your message > subject (one could wonder why since there appears to be no other trivial > solution)... > In the end, since proxy_cache_bypass doc clearly state that it works based > on an OR logic, what you wish won't happen magically. > > With those conditions set, I hardly see something that won't look edgy... > > Maybe someone else could help you better. > Good luck, > --- > *B. R.* > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From snafu at live.de Sat Jan 25 22:22:19 2014 From: snafu at live.de (Lars) Date: Sat, 25 Jan 2014 23:22:19 +0100 Subject: Nginx and cgit - upstream prematurely closed FastCGI stdout Message-ID: I'm trying to setup cgit 0.10 with nginx 1.2.1-2.2 and fastcgi 1.0.3-3. Unfortunately the reponse is a 502. The following message is written in the error.log: [error] 30956#0: *1 upstream prematurely closed FastCGI stdout while reading response header from upstream, client: **, server: **, request: "GET / HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: "**" My nginx site is configured as follows: server { ... root /var/www/cgit/; proxy_redirect off; location ~* ^.+\.(css|png|ico)$ { expires 30d; } location / { include fastcgi_params; fastcgi_param SCRIPT_FILENAME /var/www/cgit; fastcgi_pass unix:/var/run/fcgiwrap.socket; fastcgi_param PATH_INFO $uri; fastcgi_param QUERY_STRING $args; } } Does anybody have an idea, what is going wrong? I also tried to raise the timeout limit, but I have no success. Thanks! snafu -------------- next part -------------- An HTML attachment was scrubbed... URL: From jeroen.ooms at stat.ucla.edu Sun Jan 26 03:25:55 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Sat, 25 Jan 2014 19:25:55 -0800 Subject: Using 2 intersection of conditions for proxy_cache_bypass (avoiding logical if/and) In-Reply-To: References: Message-ID: On Sat, Jan 25, 2014 at 5:24 AM, Jonathan Kolb wrote: > You can chain two maps to get a logical and: Thank you, this is precisely what I needed. > # note the lack of : after default in the maps, it's incorrect to have it > there like your original map did Good catch, thanks. Appreciate it. From jeroen.ooms at stat.ucla.edu Sun Jan 26 03:27:17 2014 From: jeroen.ooms at stat.ucla.edu (Jeroen Ooms) Date: Sat, 25 Jan 2014 19:27:17 -0800 Subject: proxy_cache_methods OPTIONS; In-Reply-To: <1390635644.224025557.yfo6sojq@frv34.ukr.net> References: <1390635644.224025557.yfo6sojq@frv34.ukr.net> Message-ID: On Fri, Jan 24, 2014 at 11:42 PM, wishmaster wrote: > What is your proxy_cache_methods value? I tried both proxy_cache_methods OPTIONS; as well as proxy_cache_methods GET HEAD OPTIONS; but both gave the error. From artemrts at ukr.net Sun Jan 26 10:07:23 2014 From: artemrts at ukr.net (wishmaster) Date: Sun, 26 Jan 2014 12:07:23 +0200 Subject: proxy_cache_methods OPTIONS; In-Reply-To: References: <1390635644.224025557.yfo6sojq@frv34.ukr.net> Message-ID: <1390730713.591089067.lz8yc0hv@frv34.ukr.net> --- Original message --- From: "Jeroen Ooms" Date: 26 January 2014, 05:27:46 > On Fri, Jan 24, 2014 at 11:42 PM, wishmaster wrote: > > What is your proxy_cache_methods value? > > I tried both > > proxy_cache_methods OPTIONS; > Because "OPTIONS" means any of this values: GET, HEAD,POST, etc. The HTTP method like OPTIONS is absent. From artemrts at ukr.net Sun Jan 26 10:10:17 2014 From: artemrts at ukr.net (wishmaster) Date: Sun, 26 Jan 2014 12:10:17 +0200 Subject: proxy_cache_methods OPTIONS; In-Reply-To: <1390730713.591089067.lz8yc0hv@frv34.ukr.net> References: <1390635644.224025557.yfo6sojq@frv34.ukr.net> <1390730713.591089067.lz8yc0hv@frv34.ukr.net> Message-ID: <1390731005.146319765.yfaewj81@frv34.ukr.net> --- Original message --- From: "wishmaster" Date: 26 January 2014, 12:07:24 > > > > --- Original message --- > From: "Jeroen Ooms" > Date: 26 January 2014, 05:27:46 > > > > > On Fri, Jan 24, 2014 at 11:42 PM, wishmaster wrote: > > > What is your proxy_cache_methods value? > > > > I tried both > > > > proxy_cache_methods OPTIONS; > > > Because "OPTIONS" means any of this values: GET, HEAD,POST, etc. The HTTP method like OPTIONS is absent. Oops, sorry, my mistake after hard night :). Try GET, HEAD or only From rvrv7575 at yahoo.com Sun Jan 26 12:55:18 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Sun, 26 Jan 2014 20:55:18 +0800 (SGT) Subject: Modifying the request body Message-ID: <1390740918.85637.YahooMailNeo@web193502.mail.sg3.yahoo.com> I have nginx deployed as a proxy server. The client sends a POST request which needs to be modified before forwarding it to the origin server. Is there any module / filter available in nginx. The best I could find was a patch by Maxim as outlined at?http://mailman.nginx.org/pipermail/nginx-devel/2013-March/003492.html. However, this appears to be incomplete. Please advise if there is anything that can be used to achieve the modification Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From manole at amazon.com Sun Jan 26 16:54:02 2014 From: manole at amazon.com (Manole, Sorin) Date: Sun, 26 Jan 2014 16:54:02 +0000 Subject: Nginx not starting with named pipe (fifo) for access_log In-Reply-To: <52C59AFC.3050404@trabia.net> References: <52C59AFC.3050404@trabia.net> Message-ID: Do you have a program set up to read from the other end of those pipes? -----Original Message----- From: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] On Behalf Of Sven Wiese Sent: 2 ianuarie 2014 19:00 To: nginx at nginx.org Subject: Nginx not starting with named pipe (fifo) for access_log Heya, there seems to be a issue with Nginx and named pipes (fifo). Tested nginx versions: - 1.1.19 (Ubuntu 12.04.3 LTS amd64) - 1.4.4 (Ubuntu 12.04.3 LTS amd64 with PPA https://launchpad.net/~nginx/+archive/stable ) - 1.4.4 (CentOS 6.5 amd64 with repo http://nginx.org/packages/centos/$releasever/$basearch/ ) Issue description: As soon as a named pipe is defined as access_log, nginx refuses to start. It just stales during the start and that's it. The only you can do is kill the process. The named pipe has been created with: mkfifo -m 0666 /var/log/test.log I have tested 2 versions of Nginx (1.1.19, 1.4.4) using Ubuntus repository. Different locations and different permissions of the named pipe have been tried, didn't help. Other programs work just fine with the named pipe, only Nginx seems to refuse it. Configuration: --snip-- http { [...] access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; access_log /var/log/test.log; [...] } --snap-- strace output: --snip-- [...] open("/var/log/nginx/access.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 5 fcntl(5, F_SETFD, FD_CLOEXEC) = 0 open("/var/log/nginx/error.log", O_WRONLY|O_CREAT|O_APPEND, 0644) = 6 fcntl(6, F_SETFD, FD_CLOEXEC) = 0 open("/var/log/test.log", O_WRONLY|O_CREAT|O_APPEND, 0644 [ CTRL+C ] --snap-- Did anyone else experience such behavior? I tried searching for it but couldn't find anything, only people seeming to successfully use named pipes (eg. in conjunction with syslog-ng). Cheers, Sven Amazon Development Center (Romania) S.R.L. registered office: 3E Palat Street, floor 2, Iasi, Iasi County, Iasi 700032, Romania. Registered in Romania. Registration number J22/2621/2005. From r1ch+nginx at teamliquid.net Sun Jan 26 20:00:21 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Sun, 26 Jan 2014 21:00:21 +0100 Subject: Nginx and cgit - upstream prematurely closed FastCGI stdout In-Reply-To: References: Message-ID: Hello, I recently had a lot of trouble similar to this, and discovered that the fastcgi_param directive is additive - eg a later declaration of SCRIPT_FILENAME simply adds a second SCRIPT_FILENAME to the fastcgi parameters. You most likely have SCRIPT_FILENAME set in your "include fastcgi_params" which means the second one later on is being ignored. On Sat, Jan 25, 2014 at 11:22 PM, Lars wrote: > I'm trying to setup cgit 0.10 with nginx 1.2.1-2.2 and fastcgi 1.0.3-3. > Unfortunately the reponse is a 502. The following message is written in the > error.log: > > [error] 30956#0: *1 upstream prematurely closed FastCGI stdout while reading > response header from upstream, client: **, server: **, request: "GET / > HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: > "**" > > > My nginx site is configured as follows: > > server { > ... > root /var/www/cgit/; > proxy_redirect off; > > location ~* ^.+\.(css|png|ico)$ { > expires 30d; > } > > location / { > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME /var/www/cgit; > fastcgi_pass unix:/var/run/fcgiwrap.socket; > fastcgi_param PATH_INFO $uri; > fastcgi_param QUERY_STRING $args; > } > } > > Does anybody have an idea, what is going wrong? I also tried to raise the > timeout limit, but I have no success. > > Thanks! > snafu > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From snafu at live.de Sun Jan 26 21:22:17 2014 From: snafu at live.de (Lars) Date: Sun, 26 Jan 2014 22:22:17 +0100 Subject: Nginx and cgit - upstream prematurely closed FastCGI stdout In-Reply-To: References: Message-ID: Hello Richard, thanks for your help. I tried it and a lot of other things without any success. But now it works. I just have install the latest fastcgi version from sources with the same setup as mentioned. Unfortunately without the advantages of the package management. Best regards, Lars On 26.01.2014 21:00, Richard Stanway wrote: > Hello, > I recently had a lot of trouble similar to this, and discovered that > the fastcgi_param directive is additive - eg a later declaration of > SCRIPT_FILENAME simply adds a second SCRIPT_FILENAME to the fastcgi > parameters. You most likely have SCRIPT_FILENAME set in your "include > fastcgi_params" which means the second one later on is being ignored. > > On Sat, Jan 25, 2014 at 11:22 PM, Lars wrote: >> I'm trying to setup cgit 0.10 with nginx 1.2.1-2.2 and fastcgi 1.0.3-3. >> Unfortunately the reponse is a 502. The following message is written in the >> error.log: >> >> [error] 30956#0: *1 upstream prematurely closed FastCGI stdout while reading >> response header from upstream, client: **, server: **, request: "GET / >> HTTP/1.1", upstream: "fastcgi://unix:/var/run/fcgiwrap.socket:", host: >> "**" >> >> >> My nginx site is configured as follows: >> >> server { >> ... >> root /var/www/cgit/; >> proxy_redirect off; >> >> location ~* ^.+\.(css|png|ico)$ { >> expires 30d; >> } >> >> location / { >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME /var/www/cgit; >> fastcgi_pass unix:/var/run/fcgiwrap.socket; >> fastcgi_param PATH_INFO $uri; >> fastcgi_param QUERY_STRING $args; >> } >> } >> >> Does anybody have an idea, what is going wrong? I also tried to raise the >> timeout limit, but I have no success. >> >> Thanks! >> snafu >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Jan 27 00:11:07 2014 From: nginx-forum at nginx.us (mevans336) Date: Sun, 26 Jan 2014 19:11:07 -0500 Subject: Nginx Tweaks for JBoss Message-ID: Hello Gurus, It's been several years since I've revisited anything but the most basic changes to our Nginx reverse-proxy front-end. I'm wondering if there have been any new tweaks or security related configuration changes that should be implemented on Nginx when acting as a reverse-proxy for JBoss? We use SPDY and of course tweak SSL to stay up-to-date, but otherwise our configs have remained static. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246840,246840#msg-246840 From weiyue at taobao.com Mon Jan 27 02:46:36 2014 From: weiyue at taobao.com (=?gb2312?B?zsDUvQ==?=) Date: Mon, 27 Jan 2014 10:46:36 +0800 Subject: =?UTF-8?Q?=E7=AD=94=E5=A4=8D=3A_Implementing_CONNECT_in_nginx?= In-Reply-To: <20140122204800.GC3978@glanzmann.de> References: <20140122151440.GB28425@glanzmann.de> <20140122204800.GC3978@glanzmann.de> Message-ID: <005701cf1b09$ff1c3cd0$fd54b670$@com> I think this meets your requirement https://github.com/alibaba/tengine/pull/335/files -----????----- ???: nginx-bounces at nginx.org [mailto:nginx-bounces at nginx.org] ?? Thomas Glanzmann ????: 2014?1?23? 4:48 ???: nginx at nginx.org ??: openconnect-devel at lists.infradead.org ??: Re: Implementing CONNECT in nginx Hello, * Thomas Glanzmann [2014-01-22 16:15]: > I would like to extend nginx with a CONNECT statement which connects to > a TCP socket. Could someone walk me through which source files I need to > modify and which fucntions I should have a look at? to answer my own question. The websocket implementation. Diff between 1.3.12 and 1.3.13 comes very close to what I'm looking for. Cheers, Thomas _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From someukdeveloper at gmail.com Mon Jan 27 02:49:25 2014 From: someukdeveloper at gmail.com (Some Developer) Date: Mon, 27 Jan 2014 02:49:25 +0000 Subject: X-Frame-Options: Nginx includes header twice In-Reply-To: <1390635840.744859415.wjikci0j@frv34.ukr.net> References: <52E337A7.7000605@googlemail.com> <1390635840.744859415.wjikci0j@frv34.ukr.net> Message-ID: <52E5C935.1080006@googlemail.com> On 25/01/2014 07:51, wishmaster wrote: > --- Original message --- > From: "Some Developer" > Date: 25 January 2014, 06:04:10 > >> I'm running Nginx 1.4.4 on Ubuntu 12.04 and have added the X-Frame-Options header for one of my sites but in testing it appears that Nginx includes this itself in addition to user configured headers. Basically I want X-Frame-Options to be DENY but when I set that header Nginx also sends an X-Frame-Options SAMEORIGIN header so that there are two X-Frame-Options headers in every request. >> >> Is there some way to disable the extra header? I can't find anything in my configuration that would add the second header. > May by this is the header, has been set by your php-application? > You can remove this with help of module http://wiki.nginx.org/HttpHeadersMoreModule > I don't actually use PHP but your response lead me to an answer. Apparently Django sets some headers so it looks like I need to disable it there. Thanks! Seems a bit strange to me that an application framework sets HTTP headers. Surely this should be left to the HTTP server? What are other peoples opinions on this? From makailol7 at gmail.com Mon Jan 27 04:30:21 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Mon, 27 Jan 2014 10:00:21 +0530 Subject: Issue with multipart response compression. In-Reply-To: References: Message-ID: Hello Could someone check this and help me for compressing the multipart response with dynamic boundary? Thanks, Makailol On Tue, Jan 21, 2014 at 11:28 AM, Makailol Charls wrote: > Hello > > I use Nginx/1.4.4 as a reverse proxy and my backend webserver generates > multipart response with some dynamic boundary. > > I use nginx gzip module to send compress data to the client but it is > unable to compress this multipart response which contains dynamic boundary > in content_type. > > If I use gzip_type as below, it doesn't work. > gzip_types 'multipart/mixed'; > > If I include boundary in gzip_type, it works fine but boundary is dynamic > in my case. > gzip_types 'multipart/mixed; boundary="Ajm,e3pN"' ; > > Can someone suggest solution for this? > > Thanks, > Makailol > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Mon Jan 27 05:50:08 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Mon, 27 Jan 2014 09:50:08 +0400 Subject: Issue with multipart response compression. In-Reply-To: References: Message-ID: <5BB73CC1-BDF2-4BF0-94DA-D29DCF5B0C78@sysoev.ru> On Jan 21, 2014, at 9:58 , Makailol Charls wrote: > Hello > > I use Nginx/1.4.4 as a reverse proxy and my backend webserver generates multipart response with some dynamic boundary. > > I use nginx gzip module to send compress data to the client but it is unable to compress this multipart response which contains dynamic boundary in content_type. > > If I use gzip_type as below, it doesn't work. > gzip_types 'multipart/mixed'; > > If I include boundary in gzip_type, it works fine but boundary is dynamic in my case. > gzip_types 'multipart/mixed; boundary="Ajm,e3pN"' ; > > Can someone suggest solution for this? If you can limit these responses in location then location /uri { gzip_types *; .. } -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From contact at jpluscplusm.com Mon Jan 27 09:50:14 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 27 Jan 2014 09:50:14 +0000 Subject: X-Frame-Options: Nginx includes header twice In-Reply-To: <52E5C935.1080006@googlemail.com> References: <52E337A7.7000605@googlemail.com> <1390635840.744859415.wjikci0j@frv34.ukr.net> <52E5C935.1080006@googlemail.com> Message-ID: On 27 January 2014 02:49, Some Developer wrote: > Seems a bit strange to me that an application framework sets HTTP headers. > Surely this should be left to the HTTP server? What are other peoples > opinions on this? There are many instances where the application is the most knowledgable layer regarding which HTTP headers to send: think caching; think keep-alive. In general, the absolute /least/ you can do in the reverse-proxy layer, the better. IMHO. J From nginx-forum at nginx.us Mon Jan 27 10:58:58 2014 From: nginx-forum at nginx.us (humank) Date: Mon, 27 Jan 2014 05:58:58 -0500 Subject: How can i compile nginx with specific lib linkage In-Reply-To: <20140124154842.GP1835@mdounin.ru> References: <20140124154842.GP1835@mdounin.ru> Message-ID: <73cc6b8fec63ad3e326a5e99eb3f7614.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, Jan 23, 2014 at 11:35:44PM -0500, humank wrote: > > > I have customized a new nginx add on module,and the module use an > external > > lib ( mytest.so). > > The method i called in mytest.so is kimtest method. > > How can i configure the ./configure file and Makefile to prevent the > error " > > undefined reference to `kimtest' " > > > > Here is the configure file.. > > > > auto/configure --with-debug --with-ld-opt="-L/usr/local/lib64" > > --add-module=./src/my_nginx_module > > > > while configuring .. everything seems ok, no error occurs. > > but while making ... " undefined reference to `kimtest' " > > Something like > > CORE_LIBS="$CORE_LIBS -lmytest" > > in your module config file should do the trick. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi Maxim Dounin, i solved the problem as you provided solution. But i face another problem, that is while starting the nginx server, still lack of the library at runtime. I should set the LIBRARY_PATH in order to start the nginx correctly. ( export LD_LIBRARY_PATH) Is there any suggestion or setting to set the install config ? BRs, Kim Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246770,246858#msg-246858 From mdounin at mdounin.ru Mon Jan 27 12:36:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Jan 2014 16:36:46 +0400 Subject: Modifying the request body In-Reply-To: <1390740918.85637.YahooMailNeo@web193502.mail.sg3.yahoo.com> References: <1390740918.85637.YahooMailNeo@web193502.mail.sg3.yahoo.com> Message-ID: <20140127123645.GR1835@mdounin.ru> Hello! On Sun, Jan 26, 2014 at 08:55:18PM +0800, Rv Rv wrote: > I have nginx deployed as a proxy server. The client sends a POST > request which needs to be modified before forwarding it to the > origin server. Is there any module / filter available in nginx. > The best I could find was a patch by Maxim as outlined > at?http://mailman.nginx.org/pipermail/nginx-devel/2013-March/003492.html. > However, this appears to be incomplete. Please advise if there > is anything that can be used to achieve the modification > Thanks You may want to take a look at the proxy_set_body directive, see http://nginx.org/r/proxy_set_body. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jan 27 12:53:21 2014 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 27 Jan 2014 07:53:21 -0500 Subject: [OpenResty] How to start Nginx? In-Reply-To: References: Message-ID: <42e61f15b28ae1d2d72321da02f5daa9.NginxMailingListEnglish@forum.nginx.org> Thanks Yichun. This is what I ended up doing, because "make install DESTDIR=blah" triggered other errors. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246792,246860#msg-246860 From nginx-forum at nginx.us Mon Jan 27 12:54:28 2014 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 27 Jan 2014 07:54:28 -0500 Subject: [OpenResty] How to start Nginx? In-Reply-To: <7440330.k7X8TbizWA@tornado> References: <7440330.k7X8TbizWA@tornado> Message-ID: <0ddf011baa3d3f55756e74ff0ce8d790.NginxMailingListEnglish@forum.nginx.org> Thanks for the idea, but I had to compile Nginx because the one available in the depot didn't have Lua compiled, and I'd rather compile both Nginx and Lua at the same time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246792,246861#msg-246861 From nginx-forum at nginx.us Mon Jan 27 12:55:07 2014 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 27 Jan 2014 07:55:07 -0500 Subject: [OpenResty] How to start Nginx? In-Reply-To: <52E28C25.9050000@ohlste.in> References: <52E28C25.9050000@ohlste.in> Message-ID: <477705864ef17cb50a3a277b8ded6a2e.NginxMailingListEnglish@forum.nginx.org> Thanks, but this is for an appliance, so a virtual machine is too big. Recompiling with "--prefix" solved the problem. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246792,246862#msg-246862 From nginx-forum at nginx.us Mon Jan 27 13:14:03 2014 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 27 Jan 2014 08:14:03 -0500 Subject: [Lua] "Hello, world!" from Lua file? Message-ID: <08ddcd9810597a7f1b3a7565ab651565.NginxMailingListEnglish@forum.nginx.org> Hello Now that I have a working Nginx with the ngx_lua module, I'd like to start learning how to write web scripts. The following page doesn't have a basic sample: http://wiki.nginx.org/HttpLuaModule So I used the following... http://yichunzhang.wordpress.com/2010/05/18/a-simple-ngx_lua-example-for-the-future/ ... to edit nginx.conf and write a basic hello.lua file: ================ nginx.conf ... server { listen 12345; server_name localhost; location / { root html; index index.html index.htm; content_by_lua_file hello.lua; } ... ================ html/hello.lua print("Hello, world!") ================ But when I call http://192.168.0.10:12345/hello.lua, Chrome downloads the file instead of displaying the output. What should I do? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246864,246864#msg-246864 From jaderhs5 at gmail.com Mon Jan 27 14:00:54 2014 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Mon, 27 Jan 2014 12:00:54 -0200 Subject: [Lua] "Hello, world!" from Lua file? In-Reply-To: <08ddcd9810597a7f1b3a7565ab651565.NginxMailingListEnglish@forum.nginx.org> References: <08ddcd9810597a7f1b3a7565ab651565.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello. You need to set the content-type to this location. e.g.: add_header Content-Type text/plain; 2014-01-27 Shohreh > Hello > > Now that I have a working Nginx with the ngx_lua module, I'd like to start > learning how to write web scripts. > > The following page doesn't have a basic sample: > http://wiki.nginx.org/HttpLuaModule > > So I used the following... > > http://yichunzhang.wordpress.com/2010/05/18/a-simple-ngx_lua-example-for-the-future/ > ... to edit nginx.conf and write a basic hello.lua file: > > ================ nginx.conf > ... > server { > listen 12345; > server_name localhost; > > location / { > root html; > index index.html index.htm; > content_by_lua_file hello.lua; > } > ... > ================ html/hello.lua > print("Hello, world!") > ================ > > But when I call http://192.168.0.10:12345/hello.lua, Chrome downloads the > file instead of displaying the output. > > What should I do? > > Thank you. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246864,246864#msg-246864 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- att. Jader H. Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 27 15:19:36 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 27 Jan 2014 19:19:36 +0400 Subject: proxy_cache_methods OPTIONS; In-Reply-To: References: Message-ID: <20140127151936.GW1835@mdounin.ru> Hello! On Fri, Jan 24, 2014 at 03:33:39PM -0800, Jeroen Ooms wrote: > Is it possible to cache the OPTIONS method? This pages gives exactly > that example: http://www.packtpub.com/article/nginx-proxy > > proxy_cache_methods OPTIONS; > > However, when I try this, nginx writes in the error log: > > [warn] 7243#0: invalid value "OPTIONS" in ... As of now, only GET, HEAD and POST methods can be used in proxy_cache_methods. Allowed values are listed in syntax at http://nginx.org/r/proxy_cache_methods. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Jan 27 15:28:02 2014 From: nginx-forum at nginx.us (Shohreh) Date: Mon, 27 Jan 2014 10:28:02 -0500 Subject: [Lua] "Hello, world!" from Lua file? In-Reply-To: References: Message-ID: <43c214738fae4ccd75b657287c0d551e.NginxMailingListEnglish@forum.nginx.org> Thanks for the tip. After editing nginx.conf thusly... ============== ... location / { root html; index index.html index.htm; content_by_lua_file html/hello.lua; add_header Content-Type text/plain; } ... ============== ... the script simply displays an empty page, with no error in logs/error.log. To avoid bothering you guys with newbie questions, is there a tutorial to get started with ngx_lua? Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246864,246870#msg-246870 From jaderhs5 at gmail.com Mon Jan 27 16:49:05 2014 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Mon, 27 Jan 2014 14:49:05 -0200 Subject: [Lua] "Hello, world!" from Lua file? In-Reply-To: <43c214738fae4ccd75b657287c0d551e.NginxMailingListEnglish@forum.nginx.org> References: <43c214738fae4ccd75b657287c0d551e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello. You must use ngx.print instead of print to output into the response body. 2014-01-27 Shohreh > Thanks for the tip. > > After editing nginx.conf thusly... > ============== > ... > location / { > root html; > index index.html index.htm; > content_by_lua_file html/hello.lua; > add_header Content-Type text/plain; > } > ... > ============== > > ... the script simply displays an empty page, with no error in > logs/error.log. > > To avoid bothering you guys with newbie questions, is there a tutorial to > get started with ngx_lua? > > Thank you. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246864,246870#msg-246870 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- att. Jader H. Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: From matthias.richter at identpro.de Mon Jan 27 17:24:40 2014 From: matthias.richter at identpro.de (Matthias Richter) Date: Mon, 27 Jan 2014 17:24:40 +0000 Subject: nginx seems to proxy only http GET Message-ID: <52E69654.8070904@identpro.de> Hi, I'm using nginx to offload https to http. That works well but in my backend only GET Requests seem to get through. For all POST, PUT and DELETE I only receive a Status 400 with conten "HTTP method POST is not supported by this URL". Has anybody experienced this before? Thanks Matthias From contact at jpluscplusm.com Mon Jan 27 17:36:44 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Mon, 27 Jan 2014 17:36:44 +0000 Subject: nginx seems to proxy only http GET In-Reply-To: <52E69654.8070904@identpro.de> References: <52E69654.8070904@identpro.de> Message-ID: On 27 January 2014 17:24, Matthias Richter wrote: > Hi, > > I'm using nginx to offload https to http. That works well but in my > backend only GET Requests seem to get through. For all POST, PUT and > DELETE I only receive a Status 400 with conten "HTTP method POST is not > supported by this URL". That's your backend telling you that, not nginx. J From nginx-forum at nginx.us Mon Jan 27 17:51:01 2014 From: nginx-forum at nginx.us (brih) Date: Mon, 27 Jan 2014 12:51:01 -0500 Subject: Applying NGINX location based on unlisted extension of default file? Message-ID: <0b9e34ad13d15a23744b8ac7ff088b20.NginxMailingListEnglish@forum.nginx.org> I'm trying to create a second location in NGINX that will only fire for a specific file type. Specifically, I have NGINX acting as a proxy for a server that primarily serves PHP files. There are, however, a bunch of folders that also have ASPX files (more than 120), and I need to use a different configuration when serving them (different caching rules, different Modsecurity/NAXSI rules, etc). NGINX is successfully detecting the file type and applying the alternate location when the file name is specifically listed, but it's breaking when the ASPX file is the default file in the folder and the URL simply ends in a slash. When that happens, it's just applying the root location configuration. Is there a way to detect the extension of an index file and apply an alternate location, even when the name of the index file isn't specifically entered? server { listen 80; server_name mysite.com; location / { #general settings applicable to most files go here proxy_pass http://@backend; } location ~* \.(aspx|asmx) { #slightly different settings applicable to .Net files go here proxy_pass http://@backend; } } If a folder has an index file called "default.aspx", the above configuration works perfectly if I enter the url as mysite.com/folder/default.aspx, but it fails to apply the second location and applies the base location if I enter it as mysite.com/folder, even though it is serving the exact same default.aspx file. The only solution I've found is to alter the location directive to identify by the folder name instead of the file extension, but this doesn't scale well as there are more than 120 affected folders on the server and I'd end up with a huge conf file. Is there any way to specify a location by file extension, when the file isn't specifically named in the URL? Can I test a folders index file to determine its extension before a location is applied? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246883,246883#msg-246883 From matthias.richter at identpro.de Mon Jan 27 19:07:02 2014 From: matthias.richter at identpro.de (Matthias Richter) Date: Mon, 27 Jan 2014 19:07:02 +0000 Subject: AW: nginx seems to proxy only http GET In-Reply-To: References: <52E69654.8070904@identpro.de>, Message-ID: <1E7893C57D4D1F40B1288765EAC2747B33A2E229@SBS-IDENTPRO.intranet.identpro.net> that is strange because I cannot see any requests in the backend log file. curl requests from command line pop up there though. so curl -X(DELETE|PUT|POST) localhost:8080/api does show requests in the logs, nginx passing them through does not? Thanks, Matthias ________________________________________ Von: nginx-bounces at nginx.org [nginx-bounces at nginx.org]" im Auftrag von "Jonathan Matthews [contact at jpluscplusm.com] Gesendet: Montag, 27. Januar 2014 18:36 Bis: nginx at nginx.org Betreff: Re: nginx seems to proxy only http GET On 27 January 2014 17:24, Matthias Richter wrote: > Hi, > > I'm using nginx to offload https to http. That works well but in my > backend only GET Requests seem to get through. For all POST, PUT and > DELETE I only receive a Status 400 with conten "HTTP method POST is not > supported by this URL". That's your backend telling you that, not nginx. J _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From agentzh at gmail.com Mon Jan 27 20:34:43 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Mon, 27 Jan 2014 12:34:43 -0800 Subject: [Lua] "Hello, world!" from Lua file? In-Reply-To: References: <08ddcd9810597a7f1b3a7565ab651565.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello! On Mon, Jan 27, 2014 at 6:00 AM, Jader H. Silva wrote: > > You need to set the content-type to this location. e.g.: > > add_header Content-Type text/plain; > Alternatively one can set the Content-Type response header directly in Lua (which is more flexible): content_by_lua ' ngx.header["Content-Type"] = "text/plain" ngx.say("hello world") '; or use the default_type directive: default_type text/plain; content_by_lua ' ngx.say("hello world") '; Regards, -agentzh From makailol7 at gmail.com Tue Jan 28 06:56:04 2014 From: makailol7 at gmail.com (Makailol Charls) Date: Tue, 28 Jan 2014 12:26:04 +0530 Subject: Issue with multipart response compression. In-Reply-To: <5BB73CC1-BDF2-4BF0-94DA-D29DCF5B0C78@sysoev.ru> References: <5BB73CC1-BDF2-4BF0-94DA-D29DCF5B0C78@sysoev.ru> Message-ID: Thanks for your suggestion . It is possible for me to implement. Regards, Makailol On Mon, Jan 27, 2014 at 11:20 AM, Igor Sysoev wrote: > On Jan 21, 2014, at 9:58 , Makailol Charls wrote: > > Hello > > I use Nginx/1.4.4 as a reverse proxy and my backend webserver generates > multipart response with some dynamic boundary. > > I use nginx gzip module to send compress data to the client but it is > unable to compress this multipart response which contains dynamic boundary > in content_type. > > If I use gzip_type as below, it doesn't work. > gzip_types 'multipart/mixed'; > > If I include boundary in gzip_type, it works fine but boundary is dynamic > in my case. > gzip_types 'multipart/mixed; boundary="Ajm,e3pN"' ; > > Can someone suggest solution for this? > > > If you can limit these responses in location then > > location /uri { > gzip_types *; > .. > } > > > -- > Igor Sysoev > http://nginx.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas at glanzmann.de Tue Jan 28 08:54:56 2014 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Tue, 28 Jan 2014 09:54:56 +0100 Subject: Implementing CONNECT in nginx In-Reply-To: <20140122204800.GC3978@glanzmann.de> References: <20140122151440.GB28425@glanzmann.de> <20140122204800.GC3978@glanzmann.de> Message-ID: <20140128085456.GA18291@glanzmann.de> Hello Nickos, antoher way would be the SNI to distinguish. It would be nice to have SNI proxy support in NGINX. However there is a third party proxy which probably already does the job: https://github.com/dlundquist/sniproxy I'll test it later after I bisected the problem with anyconnect. Cheers, Thomas From matthias.richter at identpro.de Tue Jan 28 09:08:30 2014 From: matthias.richter at identpro.de (Matthias Richter) Date: Tue, 28 Jan 2014 09:08:30 +0000 Subject: AW: nginx seems to proxy only http GET In-Reply-To: <1E7893C57D4D1F40B1288765EAC2747B33A2E229@SBS-IDENTPRO.intranet.identpro.net> References: <52E69654.8070904@identpro.de>, <1E7893C57D4D1F40B1288765EAC2747B33A2E229@SBS-IDENTPRO.intranet.identpro.net> Message-ID: <52E77388.2090404@identpro.de> I was able to sort it out. My config was missing proxy_http_version 1.1; . On 27.01.2014 20:07, Matthias Richter wrote: > that is strange because I cannot see any requests in the backend log file. curl requests from command line pop up there though. > > so curl -X(DELETE|PUT|POST) localhost:8080/api does show requests in the logs, > nginx passing them through does not? > > Thanks, > > Matthias > > > ________________________________________ > Von: nginx-bounces at nginx.org [nginx-bounces at nginx.org]" im Auftrag von "Jonathan Matthews [contact at jpluscplusm.com] > Gesendet: Montag, 27. Januar 2014 18:36 > Bis: nginx at nginx.org > Betreff: Re: nginx seems to proxy only http GET > > On 27 January 2014 17:24, Matthias Richter wrote: >> Hi, >> >> I'm using nginx to offload https to http. That works well but in my >> backend only GET Requests seem to get through. For all POST, PUT and >> DELETE I only receive a Status 400 with conten "HTTP method POST is not >> supported by this URL". > That's your backend telling you that, not nginx. > > J > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > From someukdeveloper at gmail.com Tue Jan 28 11:04:05 2014 From: someukdeveloper at gmail.com (Some Developer) Date: Tue, 28 Jan 2014 11:04:05 +0000 Subject: X-Frame-Options: Nginx includes header twice In-Reply-To: References: <52E337A7.7000605@googlemail.com> <1390635840.744859415.wjikci0j@frv34.ukr.net> <52E5C935.1080006@googlemail.com> Message-ID: <52E78EA5.9050601@googlemail.com> On 27/01/2014 09:50, Jonathan Matthews wrote: > On 27 January 2014 02:49, Some Developer wrote: >> Seems a bit strange to me that an application framework sets HTTP headers. >> Surely this should be left to the HTTP server? What are other peoples >> opinions on this? > There are many instances where the application is the most > knowledgable layer regarding which HTTP headers to send: think > caching; think keep-alive. In general, the absolute /least/ you can do > in the reverse-proxy layer, the better. IMHO. > > J > Fair enough. It would be somewhat easier to manage if all headers were implemented in one place or the other. If I could set arbitary headers in Django then I could do it all there but at the moment I have some headers set in my Nginx configuration and some headers that appear to be set in Django and that just makes it confusing. From mdounin at mdounin.ru Tue Jan 28 11:16:53 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 28 Jan 2014 15:16:53 +0400 Subject: AW: nginx seems to proxy only http GET In-Reply-To: <52E77388.2090404@identpro.de> References: <52E69654.8070904@identpro.de> <1E7893C57D4D1F40B1288765EAC2747B33A2E229@SBS-IDENTPRO.intranet.identpro.net> <52E77388.2090404@identpro.de> Message-ID: <20140128111653.GX1835@mdounin.ru> Hello! On Tue, Jan 28, 2014 at 09:08:30AM +0000, Matthias Richter wrote: > I was able to sort it out. My config was missing proxy_http_version 1.1; . The HTTP/1.1 isn't required for nginx to proxy POST and other methods. On the other hand, it's likely that your backend requires HTTP/1.1 to handle non-GET requests for some reason, and with "proxy_http_version 1.1" your backend is now happy. -- Maxim Dounin http://nginx.org/ From contact at jpluscplusm.com Tue Jan 28 11:42:53 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 28 Jan 2014 11:42:53 +0000 Subject: X-Frame-Options: Nginx includes header twice In-Reply-To: <52E78EA5.9050601@googlemail.com> References: <52E337A7.7000605@googlemail.com> <1390635840.744859415.wjikci0j@frv34.ukr.net> <52E5C935.1080006@googlemail.com> <52E78EA5.9050601@googlemail.com> Message-ID: On 28 January 2014 11:04, Some Developer wrote: > If I could set arbitary headers in > Django then I could do it all there I know nothing about Django, but this would seem to be what you're asking for: https://docs.djangoproject.com/en/1.6/ref/request-response/#setting-header-fields J From nginx-forum at nginx.us Tue Jan 28 11:51:58 2014 From: nginx-forum at nginx.us (Shohreh) Date: Tue, 28 Jan 2014 06:51:58 -0500 Subject: [Lua] "Hello, world!" from Lua file? In-Reply-To: References: Message-ID: Yichun Zhang (agentzh) Wrote: ------------------------------------------------------- > Alternatively one can set the Content-Type response header directly in Lua (which is more flexible): > > content_by_lua ' > ngx.header["Content-Type"] = "text/plain" > ngx.say("hello world") > '; Thanks for the tip, it works. Is there a tutorial besides the following page about how to write Lua scripts through ngx_lua? http://wiki.nginx.org/HttpLuaModule One other thing: I'd like / to map to index.html which contains a form, and the Lua script should be called as the action to handle the form. How can I do this? This calls the script any time I hit the root directory: location / { root html; index index.html index.htm; content_by_lua_file html/hello.lua; #add_header Content-Type text/plain; } Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246864,246907#msg-246907 From maxmilhas at yandex.com Tue Jan 28 12:24:30 2014 From: maxmilhas at yandex.com (Maxmilhas) Date: Tue, 28 Jan 2014 10:24:30 -0200 Subject: Configuration not working Message-ID: <52E7A17E.3000305@yandex.com> Hello, I am running Nginx 1.3.0 on CentOS. It serves several domains. Yesterday we tried to change the allowed URIs to access one folder specific to one domain. After the config file change we tried the "nginx -s reload", without apparent success or errors. After this I rebooted the server, but the restrictions are still not effective. I still can access "www.secret.com/_abc/def/ghi/" contents from anywhere. *nginx -V* output: nginx version: nginx/1.3.0 TLS SNI support enabled configure arguments: --prefix=/usr/share --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --user=nginx --group=nginx --with-ipv6 --with-file-aio --with-http_ssl_module --with-http_realip_module --with-http_sub_module --with-http_dav_module --with-http_gzip_static_module --with-http_stub_status_module The complete */etc/nginx/nginx.conf* file: worker_processes 1; events { worker_connections 1024;} http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 65; gzip on; gzip_min_length 1000; # bytes gzip_proxied expired no-cache no-store private auth; gzip_types text/plain application/xml text/css text/javascript application/json text/xml application/javascript; server_tokens off; server { # ?? mainly for domain www.blabla.com, usefull for others location /client/data/up/ { expires 720h; } # all domains location /css/ { expires 720h; } location /img/ { expires 720h; } location /js/ { expires 720h; } location /lib/ { expires 720h; } location /piwik/ { expires 720h; } } server { server_name www.secret.com; #location /_abc/def/ { ... } location /_abc/def/ghi/ { allow 123.456.654.321; deny all; } } include /etc/nginx/conf.d/*.conf; } #vim: sw=4: sts=4: ts=8 I quickly read the error logs, didn't see anything meaningful. What may be wrong? -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jan 28 12:33:41 2014 From: nginx-forum at nginx.us (JackB) Date: Tue, 28 Jan 2014 07:33:41 -0500 Subject: It's believed that SPDY is a huge DDoS vector by itself Message-ID: The subject is a quote of Maxim Dounin in a discussion found here: http://forum.nginx.org/read.php?29,246885,246902#msg-246902 It would be nice to have a detailed list of SPDY functionality that could be used as a DDoS vector. And it would be even better, to have an nginx configuration example to workaround each problem without simply disabling features. Last, should there be a default configuration in nginx/spdy which prevents the abuse for DDoS attacks? Any thoughts? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246911,246911#msg-246911 From agentzh at gmail.com Tue Jan 28 20:18:30 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 28 Jan 2014 12:18:30 -0800 Subject: [Lua] "Hello, world!" from Lua file? In-Reply-To: References: Message-ID: Hello! On Tue, Jan 28, 2014 at 3:51 AM, Shohreh wrote: > > Thanks for the tip, it works. Is there a tutorial besides the following page > about how to write Lua scripts through ngx_lua? > http://wiki.nginx.org/HttpLuaModule > You can find a lot of resources on the openresty.org website: http://openresty.org > One other thing: I'd like / to map to index.html which contains a form, and > the Lua script should be called as the action to handle the form. How can I > do this? > There're various ways to do this. The simplest way is to define a dedicated location (say, location = /post) for the POST/GET target of your HTML form. Regards, -agentzh From mat999 at gmail.com Tue Jan 28 20:59:27 2014 From: mat999 at gmail.com (SplitIce) Date: Wed, 29 Jan 2014 07:29:27 +1030 Subject: It's believed that SPDY is a huge DDoS vector by itself In-Reply-To: References: Message-ID: I would like to second this. On Tue, Jan 28, 2014 at 11:03 PM, JackB wrote: > The subject is a quote of Maxim Dounin in a discussion found here: > http://forum.nginx.org/read.php?29,246885,246902#msg-246902 > > It would be nice to have a detailed list of SPDY functionality that could > be > used as a DDoS vector. And it would be even better, to have an nginx > configuration example to workaround each problem without simply disabling > features. > > Last, should there be a default configuration in nginx/spdy which prevents > the abuse for DDoS attacks? > > Any thoughts? > > Thanks. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,246911,246911#msg-246911 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Tue Jan 28 23:01:30 2014 From: lists at ruby-forum.com (Anth Anth) Date: Wed, 29 Jan 2014 00:01:30 +0100 Subject: Launching Excel in a production web server on a Mac Message-ID: I have written some code that will open Excel, edit a spreadsheet, then save and print the sheet. This code needs to run on a production web server in a Rails app. When I run it using Webrick (running under my user) everything is great. When I try it on the production server (nginx running as root), not so great. I knew heading into this that this scenario was a longshot to work. My question is is this even remotely possible? If not, does anyone have any alternate suggestions? -- Posted via http://www.ruby-forum.com/. From david at styleflare.com Tue Jan 28 23:07:12 2014 From: david at styleflare.com (david) Date: Tue, 28 Jan 2014 18:07:12 -0500 Subject: PEM_read_bio_X509 - Installing Cert. Message-ID: <52E83820.5020309@styleflare.com> I am running into a strange problem. If I cat domain.crt >> bundle.crt ssl_certificate bundle.crt; ssl_certificate_key domain.key; and I try to start nginx I get the following error message. nginx: [emerg] PEM_read_bio_X509("bundle.crt") failed (SSL:) If I dont use the bundle I just use the domain.crt with adding it to the bundle ssl_certificate domain.crt; ssl_certificate_key domain.key; nginx starts without trouble. Curious what I am doing wrong. Thanks. From contact at jpluscplusm.com Tue Jan 28 23:08:44 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 28 Jan 2014 23:08:44 +0000 Subject: Launching Excel in a production web server on a Mac In-Reply-To: References: Message-ID: To say that you've not given us sufficient information to help you debug your problem would perhaps be biggest understatement I've yet seen in 2014. Seriously, chap - "I can't open my spreadsheet in nginx" ... are you /really/ expecting anyone to be able to help from that minimal (if not totally nonsensical) problem description? Read this. End to end. And then try again: http://www.catb.org/~esr/faqs/smart-questions.html From david at styleflare.com Tue Jan 28 23:12:33 2014 From: david at styleflare.com (david) Date: Tue, 28 Jan 2014 18:12:33 -0500 Subject: PEM_read_bio_X509 - Installing Cert. In-Reply-To: <52E83820.5020309@styleflare.com> References: <52E83820.5020309@styleflare.com> Message-ID: <52E83961.6010003@styleflare.com> I forgot to mention that when I try to access the site via a browser if I just to the domain.crt then it pops up a dialog because its just an unsigned cert. Thanks in advance for any pointers. On 1/28/14 6:07 PM, david wrote: > I am running into a strange problem. > > If I cat domain.crt >> bundle.crt > > ssl_certificate bundle.crt; > ssl_certificate_key domain.key; > > and I try to start nginx I get the following error message. > > nginx: [emerg] PEM_read_bio_X509("bundle.crt") failed (SSL:) > > If I dont use the bundle I just use the domain.crt with adding it to > the bundle > > ssl_certificate domain.crt; > ssl_certificate_key domain.key; > > nginx starts without trouble. > > Curious what I am doing wrong. > > Thanks. > > > > From lists at ruby-forum.com Tue Jan 28 23:18:37 2014 From: lists at ruby-forum.com (Anth Anth) Date: Wed, 29 Jan 2014 00:18:37 +0100 Subject: Launching Excel in a production web server on a Mac In-Reply-To: References: Message-ID: <27b6c5af831c1cebe314b79d9070e82b@ruby-forum.com> Fair enough, but I was looking for more of a high level guidance rather than solving my problem specifically. I guess my question is in general is it possible for Rails code running in nginx (running as root) to spawn an application like Excel? I'm not familiar enough with OS X's userspace model to even know where to start with that... The specifics of my problem: 0) nginx 1.2.2 running as root on OS X 10.8.5. Rails is at 3.2.15 1) A standard user request comes in 2) Handled by Rails controller 3) Controller launches Excel through rb-appscript (ruby-to-applescript bridge) Normally at this point Excel actually launches in my dev environment, and specific changes to an Excel document are made through rb-appscript code. On the production side the Excel process never starts, and the request eventually just times out... nothing in the logs. -- Posted via http://www.ruby-forum.com/. From david at styleflare.com Tue Jan 28 23:19:38 2014 From: david at styleflare.com (david) Date: Tue, 28 Jan 2014 18:19:38 -0500 Subject: PEM_read_bio_X509 - Installing Cert. In-Reply-To: <52E83820.5020309@styleflare.com> References: <52E83820.5020309@styleflare.com> Message-ID: <52E83B0A.9090305@styleflare.com> Fixed it. Whitespace - Arrgh. Thanks sorry for the noise. On 1/28/14 6:07 PM, david wrote: > I am running into a strange problem. > > If I cat domain.crt >> bundle.crt > > ssl_certificate bundle.crt; > ssl_certificate_key domain.key; > > and I try to start nginx I get the following error message. > > nginx: [emerg] PEM_read_bio_X509("bundle.crt") failed (SSL:) > > If I dont use the bundle I just use the domain.crt with adding it to > the bundle > > ssl_certificate domain.crt; > ssl_certificate_key domain.key; > > nginx starts without trouble. > > Curious what I am doing wrong. > > Thanks. > > > > From scott_ribe at elevated-dev.com Tue Jan 28 23:21:50 2014 From: scott_ribe at elevated-dev.com (Scott Ribe) Date: Tue, 28 Jan 2014 16:21:50 -0700 Subject: Launching Excel in a production web server on a Mac In-Reply-To: References: Message-ID: <86882F0F-3107-4F96-90B6-871BE5CB3B3F@elevated-dev.com> On Jan 28, 2014, at 4:01 PM, Anth Anth wrote: > I knew heading into this that this scenario was a longshot to work. My > question is is this even remotely possible? If not, does anyone have > any alternate suggestions? I don't see any reason why it wouldn't work, but you have to get your users & permissions straight. - Are you really running nginx as root? That's an odd, and insecure, thing to do. Remember, just because the process that launches it may have root privileges, does not necessarily mean the effective user is root. - Your server will have to be logged into a user account. Unlike nginx and other servers, Xcel requires an active logged-in user to run. So basically, you want a user, you want to have the server auto-login to that user account on boot, and you want nginx to run under that user's account as well. That's the basics. Now, to make this actually reliable on a server is another issue. You see, I do the same thing with MS Word on a Mac server, and here's what I have to cope with, and I assume it will be similar for Xcel: - Of course if preferences are set to display the document gallery at startup, it will not open the document you tell it to. So you must set up the preferences to disable the document gallery. - It will occasionally spontaneously reset that preference, so you're really best off setting up the preferences the way you want them, copying the file into your project somewhere, and replacing the application preferences with your cached copy before each launch. - Every once in a while, launching the application will take extra long, up to about a minute. So if you're monitoring for a time-out, take that into account. - As you repeatedly open and print documents, it will slow down. I keep a document count, and quit & re-launch Word every 100 documents--of course launching more often means that launch problems happen more often. - Sometimes it just freezes. Sometimes it decides on quit to display a dialog about not being able to save the normal template or some other similar bullshit. So whenever you ask it to do something, you have to be prepared to time out and kill -9 it. The good news is that all of the above problems have gotten much less frequent with Word 2011, *and* Word 2011 completely eliminated the nastiness where Word 2004 would corrupt the old emulation layer so completely that it would not run at all until you rebooted the server (even killing and restarting the "blued" stuff would not do it). Enjoy yourself; I sure as hell did not ;-) -- Scott Ribe scott_ribe at elevated-dev.com http://www.elevated-dev.com/ (303) 722-0567 voice From reallfqq-nginx at yahoo.fr Wed Jan 29 00:30:30 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 29 Jan 2014 01:30:30 +0100 Subject: It's believed that SPDY is a huge DDoS vector by itself In-Reply-To: References: Message-ID: I think what you both request is interesting. However, I would like to push the analysis further. Is seems SPDY design is flawed because it enables flexibility and offer new features compared to HTTP without taking into account the very basis of a protocol: being efficient by allowing quick and inexpensive routing of its packets. Some other projects drafted towards HTTP/2.0 are made with efficiency in mind. One of them is called *HTTPbis* and has been first drated mid-2012 by 4 interesting guys: Willy Tarreau (HAProxy), Poul?Henning Kamp (Varnish), Adrien de Croy (WinGate) and Amos Jeffries (Squid). Look at that: 1 load-balancing guy, 1 cache one and 2 proxy ones... Those guys definitely want to avoid leveraging (D)Dos attacks! They coopareta with other teams (SPDY one being one of them), but I like the approach they took at the very beginning. Is the nginx team aware of that project? Does it seems interesting enough so nginx could support it in the near future? Or do you have any plans around HTTPbis? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From piotr at cloudflare.com Wed Jan 29 00:46:42 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 28 Jan 2014 16:46:42 -0800 Subject: It's believed that SPDY is a huge DDoS vector by itself In-Reply-To: References: Message-ID: Hey, > Some other projects drafted towards HTTP/2.0 are made with efficiency in > mind. > One of them is called HTTPbis and has been first drated mid-2012 by 4 > interesting guys: Willy Tarreau (HAProxy), Poul?Henning Kamp (Varnish), > Adrien de Croy (WinGate) and Amos Jeffries (Squid). > Look at that: 1 load-balancing guy, 1 cache one and 2 proxy ones... Those > guys definitely want to avoid leveraging (D)Dos attacks! HTTPbis isn't a protocol, it's the name of an IETF working group responsible developing and maintaining HTTP. What you're referring to is called "Network-Friendly HTTP Upgrade": http://tools.ietf.org/html/draft-tarreau-httpbis-network-friendly-00 But HTTPbis chose SPDY as a base for HTTP/2.0, so there is no point in adding support for all the proposed alternatives (even if they are indeed better). Best regards, Piotr Sikora From reallfqq-nginx at yahoo.fr Wed Jan 29 00:53:36 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 29 Jan 2014 01:53:36 +0100 Subject: It's believed that SPDY is a huge DDoS vector by itself In-Reply-To: References: Message-ID: OK, thanks for your lights on this. They chose to work with SPDY, right, but are their ideas being followed-up to SPDY? Or will their protocol stay on a parallel path? The problem would then be that SPDY is backed by a major networking actor which name start with a G... SPDY simply can't be the best protocol without being 'Network-friendly' (and could even be dangerous as a dormant bomb is). --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Jan 29 19:26:53 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 29 Jan 2014 14:26:53 -0500 Subject: warning C4701 src\http\ngx_http_request.c(728) Message-ID: <2a55a249cd83a1e92199eab160053297.NginxMailingListEnglish@forum.nginx.org> src\http\ngx_http_request.c(728) : warning C4701: potentially uninitialized local variable 'data' used VC 2010. >From full source nginx-01e2a5bcdd8f Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246969,246969#msg-246969 From igor.sverkos at googlemail.com Wed Jan 29 19:52:21 2014 From: igor.sverkos at googlemail.com (Igor Sverkos) Date: Wed, 29 Jan 2014 20:52:21 +0100 Subject: Performance penalty when using regular expressions in server_name? Message-ID: Hi, let assume that we run the following domains: - agoodexample.org - agoodexmaple.com - agoodexample.net - a-good-example.org - a-good-example.com - a-good-example.net Our main site should be www.a-good-example.com -- people coming from any other mentioned domain/combination should be redirected to the main site. I could now write a single server block like server { // ... server_name ~^(?:www\.)?a[-]?good-example\.(?:org|net)$ ~^ a[-]?good-example\.com $; rewrite ^ $scheme://www.a-good-example.com $request_uri permanent; } I am using regular expressions as server_name values, I think you get the point. I could also avoid regular expressions: server { // ... server_name agoodexample.org a-good-example.org www.agoodexample.org www.a-good-example.org agoodexample.net a-good-example.net www.agoodexample.net www.a-good-example.net agoodexample.com www.agoodexample.com a-good-example.com; rewrite ^ $scheme://www.a-good-example.com $request_uri permanent; } Q1: Are both configurations equal or is there any performance difference? E.g. would you recommend one configuration more than the other? Q2: When using regular expressions in server_name, the regular expression will also appear in the error_log. Is it possible to see the actual name instead the expression in the log? I am using nginx-1.4.x with pcre-jit enabled. Thanks. -- Regards, Igor From sarah at nginx.com Wed Jan 29 20:08:25 2014 From: sarah at nginx.com (Sarah Novotny) Date: Wed, 29 Jan 2014 12:08:25 -0800 Subject: a NGINX Summit 2/25 in San Francisco Message-ID: <4B6A427B-3D33-480A-BF91-59D13C5A7280@nginx.com> Hello all! I?d like to invite you to join the Nginx, Inc. team for our first User Summit February 25th at Dogpatch Studios in San Francisco. The highlights include 2 formal presentations by the NGINX FOSS project and Nginx, Inc. founder, Igor Sysoev, and well known module developer in the NGINX ecosystem, Yichun Zhang (@agentzh). And, we?re soliciting lightning talks from the community! If you have an interesting use case; a funny war story; a resounding success; or performance tricks you?d like to share please send a note to community-events at nginx.com Once we finish the scheduled program there will be time for socializing and connecting with the broader community including the Nginx Inc team. For those of you who are newer to NGINX we?ll additionally be road testing our brand new course ?NGINX Fundamentals? the morning of 2/25 at a friends-and-family rate. Logistics and specifics can be found here - http://bit.ly/NGINXsummit2014. But, space is limited (particularly for the training) so please sign up soon. Sarah P.S The phenomenal rise of NGINX is because of our supportive community. For us, the user summit is part of our commitment to connect with you, and if you are passionate about NGINX we?d love to have you join us. P.S. If you can?t join us this time, well, that?s sad. We?ll miss you. But, one of the things I?m doing in my new position with NGINX is making sure we have more community events. So, I?ll hope to meet you soon. From nginx-forum at nginx.us Wed Jan 29 20:15:46 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 29 Jan 2014 15:15:46 -0500 Subject: Performance penalty when using regular expressions in server_name? In-Reply-To: References: Message-ID: A different way would to use Lua and an alias table, for a dynamic site I use this to allow a site multiple aliases all directing to the same root, its a simple array with 2 lines for each main site, 1 as server_name and line 2 endless aliases. Its fast and no need to edit nginx.conf just edit a plain ascii file. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246970,246973#msg-246973 From rvrv7575 at yahoo.com Thu Jan 30 05:05:34 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Thu, 30 Jan 2014 13:05:34 +0800 (SGT) Subject: How to control the order of execution modules in nginx Message-ID: <1391058334.67669.YahooMailNeo@web193503.mail.sg3.yahoo.com> From?http://www.evanmiller.org/nginx-modules-guide.html, "The order of their execution is determined at compile-time". Is there a way to control this. ngx_modules.c has the list and order of execution. How do I change the order of execution of modules within a particular phase e.g. if I have three modules within the access phase, is there a way to control the order of execution with the access phase.? -------------- next part -------------- An HTML attachment was scrubbed... URL: From rvrv7575 at yahoo.com Thu Jan 30 05:24:37 2014 From: rvrv7575 at yahoo.com (Rv Rv) Date: Thu, 30 Jan 2014 13:24:37 +0800 (SGT) Subject: Will nginx decompress a compressed request Message-ID: <1391059477.95666.YahooMailNeo@web193501.mail.sg3.yahoo.com> >From this thread on the mailing list http://forum.nginx.org/read.php?11,96472,214266 , it appears that nginx does not support decompressing HTTP request from a client. The thread however is 2 years old and am wondering if there have been any changes. I have not found any though in the documentation. Thanks for any inputs -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Thu Jan 30 06:52:56 2014 From: igor at sysoev.ru (Igor Sysoev) Date: Thu, 30 Jan 2014 10:52:56 +0400 Subject: Performance penalty when using regular expressions in server_name? In-Reply-To: References: Message-ID: On Jan 29, 2014, at 23:52 , Igor Sverkos wrote: > Hi, > > let assume that we run the following domains: > > - agoodexample.org > - agoodexmaple.com > - agoodexample.net > - a-good-example.org > - a-good-example.com > - a-good-example.net > > Our main site should be www.a-good-example.com -- people coming from any > other mentioned domain/combination should be redirected to the main site. > > I could now write a single server block like > > server { > // ... > server_name ~^(?:www\.)?a[-]?good-example\.(?:org|net)$ > ~^ a[-]?good-example\.com $; > > rewrite ^ $scheme://www.a-good-example.com $request_uri permanent; > } > > I am using regular expressions as server_name values, I think you get the > point. > > I could also avoid regular expressions: > > server { > // ... > server_name agoodexample.org > a-good-example.org > www.agoodexample.org > www.a-good-example.org > agoodexample.net > a-good-example.net > www.agoodexample.net > www.a-good-example.net > agoodexample.com > www.agoodexample.com > a-good-example.com; > > rewrite ^ $scheme://www.a-good-example.com $request_uri permanent; > } > > > Q1: Are both configurations equal or is there any performance difference? > E.g. would you recommend one configuration more than the other? The second is faster but I believe performance is negligible. And it's better to use return 301 $scheme://www.a-good-example.com/$request_uri; > Q2: When using regular expressions in server_name, the regular expression > will also appear in the error_log. Is it possible to see the actual > name instead the expression in the log? > > > I am using nginx-1.4.x with pcre-jit enabled. $host. -- Igor Sysoev http://nginx.com From luky-37 at hotmail.com Thu Jan 30 11:34:56 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 30 Jan 2014 12:34:56 +0100 Subject: Will nginx decompress a compressed request In-Reply-To: <1391059477.95666.YahooMailNeo@web193501.mail.sg3.yahoo.com> References: <1391059477.95666.YahooMailNeo@web193501.mail.sg3.yahoo.com> Message-ID: Hi, > From this thread on the mailing list? > http://forum.nginx.org/read.php?11,96472,214266 , it appears that nginx? > does not support decompressing HTTP request from a client. The thread? > however is 2 years old and am wondering if there have been any changes.? > I have not found any though in the documentation.? > Thanks for any inputs? http://nginx.org/en/docs/http/ngx_http_gunzip_module.html From luky-37 at hotmail.com Thu Jan 30 11:37:59 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 30 Jan 2014 12:37:59 +0100 Subject: Will nginx decompress a compressed request In-Reply-To: References: <1391059477.95666.YahooMailNeo@web193501.mail.sg3.yahoo.com>, Message-ID: > > >> From this thread on the mailing list >> http://forum.nginx.org/read.php?11,96472,214266 , it appears that nginx >> does not support decompressing HTTP request from a client. The thread >> however is 2 years old and am wondering if there have been any changes. >> I have not found any though in the documentation. >> Thanks for any inputs > > http://nginx.org/en/docs/http/ngx_http_gunzip_module.html Sorry, this is about decompressing a compressed response, while you are asking about decompressing a compressed *request* body. I don't think this is possible currently. But why not let your application handle the decompression? Regards, Lukas From nginx-forum at nginx.us Thu Jan 30 11:50:24 2014 From: nginx-forum at nginx.us (nk86) Date: Thu, 30 Jan 2014 06:50:24 -0500 Subject: Nginx proxy with websocket not returning data Message-ID: I am trying to configure nginx proxy with websockets I am using jetty server with cometd framework(sends connect request to server every minute). I am able to do the websocket handshake and send my login request to the server but I am not getting any response back. Can you let me know what is wrong with my config file. worker_processes 1; events { worker_connections 1024; } http { include mime.types; server { listen 80; server_name test.abc.com; location / { proxy_pass http://110.90.22.202:8080; proxy_redirect off; #root /var/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; proxy_set_header X-Nginx-Proxy true; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246996,246996#msg-246996 From lists at ruby-forum.com Thu Jan 30 11:56:42 2014 From: lists at ruby-forum.com (Ne Ka) Date: Thu, 30 Jan 2014 12:56:42 +0100 Subject: Nginx proxy with websocket not returning data Message-ID: <2f428bd7afb308ed734d213343494eae@ruby-forum.com> Hi, I am trying to configure nginx proxy with websockets I am using jetty server with cometd framework. I am able to do the websocket handshake and send my login request to the server but I am not getting any response back. Can you let me know what could be wrong with my config file. Here's my config file: worker_processes 1; events { worker_connections 1024; } http { include mime.types; server { listen 80; server_name test.abc.com; location / { proxy_pass http://110.90.22.202:8080; proxy_redirect off; #root /var/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; proxy_set_header X-Nginx-Proxy true; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } -- Posted via http://www.ruby-forum.com/. From mdounin at mdounin.ru Thu Jan 30 12:08:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Jan 2014 16:08:39 +0400 Subject: How to control the order of execution modules in nginx In-Reply-To: <1391058334.67669.YahooMailNeo@web193503.mail.sg3.yahoo.com> References: <1391058334.67669.YahooMailNeo@web193503.mail.sg3.yahoo.com> Message-ID: <20140130120839.GV1835@mdounin.ru> Hello! On Thu, Jan 30, 2014 at 01:05:34PM +0800, Rv Rv wrote: > From?http://www.evanmiller.org/nginx-modules-guide.html, "The > order of their execution is determined at compile-time". Is > there a way to control this. ngx_modules.c has the list and > order of execution. How do I change the order of execution of > modules within a particular phase e.g. if I have three modules > within the access phase, is there a way to control the order of > execution with the access phase.? Order of execution of modules in a particular phase is set during a configure, modules which are last in HTTP_MODULES are executed first. That is, if you want to add "module1", "module2", and "module3", all of them use the access phase, and each of them does something like HTTP_MODULES="$HTTP_MODULES moduleN" in it's "config" file, you should do something like ./configure --add-module=module3 \ --add-module=module2 \ --add-module=module1 to get the "module1 -> module2 -> module3" order. Note that it may non-trivial to, e.g., make sure your module is called after standard modules in the access phase. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 30 12:12:33 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 30 Jan 2014 16:12:33 +0400 Subject: Nginx proxy with websocket not returning data In-Reply-To: <2f428bd7afb308ed734d213343494eae@ruby-forum.com> References: <2f428bd7afb308ed734d213343494eae@ruby-forum.com> Message-ID: <20140130121233.GW1835@mdounin.ru> Hello! On Thu, Jan 30, 2014 at 12:56:42PM +0100, Ne Ka wrote: > Hi, > > I am trying to configure nginx proxy with websockets I am using jetty > server with cometd framework. I am able to do the websocket handshake > and send my login request to the server but I am not getting any > response back. Can you let me know what could be wrong with my config > file. It would be interesting to know at least nginx version used, as weel as OS and "nginx -V" output. See http://wiki.nginx.org/Debugging for some more hints. Config looks fine and should work. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Jan 30 12:25:28 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 30 Jan 2014 07:25:28 -0500 Subject: warning C4701 src\http\ngx_http_request.c(728) In-Reply-To: <2a55a249cd83a1e92199eab160053297.NginxMailingListEnglish@forum.nginx.org> References: <2a55a249cd83a1e92199eab160053297.NginxMailingListEnglish@forum.nginx.org> Message-ID: This patch, http://forum.nginx.org/read.php?29,246966,246988#msg-246988 fixes it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,246969,247001#msg-247001 From nginx-forum at nginx.us Thu Jan 30 13:40:52 2014 From: nginx-forum at nginx.us (zuckbin) Date: Thu, 30 Jan 2014 08:40:52 -0500 Subject: nginx as reverse-proxy for apache inside a VM Message-ID: Hi, I really don't know to solve this: http://forum.proxmox.com/threads/17680-nginx-apache I got a 504 Gateway time-out error I use proxmox as hypervisor, inside a CT, i want to use nginx as reverse proxy for static content I got apache listen on port 8080 and nginx on 80, when i browse my url, i got a 504 gateway time-out I'm lost... thank you for your help ++ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247002,247002#msg-247002 From nginx-forum at nginx.us Thu Jan 30 14:06:28 2014 From: nginx-forum at nginx.us (cracker1985) Date: Thu, 30 Jan 2014 09:06:28 -0500 Subject: Nginx 403 forbidden Message-ID: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> Hello everyone, I have Freebsd 8.2+squid+sarg+nginx server on my production server.I have problem to connect nginx internet users web reporting page. When i connect to this server via web browser get error : 403 forbidden. This my nginx-error logs: *8 directory index of "/home/sqstat/public_html/sarg/" is forbidden, client: x.x.x.x, server: localhost, request: "GET /sarg/ HTTP/1.1", host: "x.x.x.x:8081" This my nginx.conf : #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 12.0.3.233:8081; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root /home/sqstat/public_html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # #server { # listen 443; # server_name localhost; # ssl on; # ssl_certificate cert.pem; # ssl_certificate_key cert.key; # ssl_session_timeout 5m; # ssl_protocols SSLv2 SSLv3 TLSv1; # ssl_ciphers HIGH:!aNULL:!MD5; # ssl_prefer_server_ciphers on; # location / { # root html; # index index.html index.htm; # } #} This is /home/sqstat/public_html/sarg/ directory permission : drwxrwxrwx 220 root www 5120 Jan 17 18:57 01Oct2013-01Oct2013 drwxrwxrwx 52 root www 1536 Jan 17 18:30 01Sep2013-01Sep2013 drwxrwxrwx 218 root www 5120 Jan 17 18:58 02Oct2013-02Oct2013 drwxrwxrwx 197 root www 4608 Jan 17 18:32 02Sep2013-02Sep2013 drwxrwxrwx 148 root www 3584 Jan 17 18:09 03Aug2013-03Aug2013 drwxrwxrwx 219 root www 5120 Jan 17 18:59 03Oct2013-03Oct2013 drwxrwxrwx 187 root www 4608 Jan 17 18:33 03Sep2013-03Sep2013 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247009#msg-247009 From nginx-forum at nginx.us Thu Jan 30 14:16:03 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 30 Jan 2014 09:16:03 -0500 Subject: Nginx 403 forbidden In-Reply-To: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> References: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://www.nginxtips.com/how-to-enable-nginx-directory-listing/ Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247012#msg-247012 From nginx-forum at nginx.us Thu Jan 30 17:44:25 2014 From: nginx-forum at nginx.us (portoist) Date: Thu, 30 Jan 2014 12:44:25 -0500 Subject: How fastcgi temp works? Message-ID: <8e64dd82966dcc6e5ebf4cb509a746c4.NginxMailingListEnglish@forum.nginx.org> Hello everybody, We are using nginx with php-fpm. Our application works as download aggregator. We download files from 3rd party servers and send it to client while it's still being downloaded. It means that we can't use X-Accel-Redirect, since file isn't fully downloaded and X-Accel-Redirect sets Content-Length to actual file size. We set required headers in php and in cycle read chunks of file, prints it and flush it to output. Everything seems to work fine except large files (over 1GB) on slow download speeds. Downloading was interrupted at 1GB - it seemed like client has disconnected - calling connection_aborted in php returned true. I have found out that fastcgi_max_temp_file_size is 1GB by default - so I have increased it to 2GB. Now how exactly does fastcgi temp file works? Does every request has its own temp file? Or is there one common temp file? Thanks for hints! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247019,247019#msg-247019 From nginx-forum at nginx.us Thu Jan 30 17:53:07 2014 From: nginx-forum at nginx.us (cracker1985) Date: Thu, 30 Jan 2014 12:53:07 -0500 Subject: Nginx 403 forbidden In-Reply-To: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> References: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1cb49b81688672d25afe583a48ab9b59.NginxMailingListEnglish@forum.nginx.org> Thanks for posting here. But i don't understand what should be change ? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247022#msg-247022 From nginx-forum at nginx.us Thu Jan 30 18:00:27 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 30 Jan 2014 13:00:27 -0500 Subject: Nginx 403 forbidden In-Reply-To: <1cb49b81688672d25afe583a48ab9b59.NginxMailingListEnglish@forum.nginx.org> References: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> <1cb49b81688672d25afe583a48ab9b59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <096705e04c1bd94cb0c426e9fdb2002b.NginxMailingListEnglish@forum.nginx.org> Maybe you should explain what you expect to happen, if you expect to see the contents then that link has the answer. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247023#msg-247023 From nginx-forum at nginx.us Thu Jan 30 19:57:33 2014 From: nginx-forum at nginx.us (cracker1985) Date: Thu, 30 Jan 2014 14:57:33 -0500 Subject: Nginx 403 forbidden In-Reply-To: <096705e04c1bd94cb0c426e9fdb2002b.NginxMailingListEnglish@forum.nginx.org> References: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> <1cb49b81688672d25afe583a48ab9b59.NginxMailingListEnglish@forum.nginx.org> <096705e04c1bd94cb0c426e9fdb2002b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7aa3999e6d222d4d161afb67dcfb8d94.NginxMailingListEnglish@forum.nginx.org> I am new in linux , because i dont understand your answer. Can you write where is my problem exactly? What should i do ? Thank you for you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247028#msg-247028 From nginx-forum at nginx.us Thu Jan 30 20:21:06 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 30 Jan 2014 15:21:06 -0500 Subject: Nginx 403 forbidden In-Reply-To: <7aa3999e6d222d4d161afb67dcfb8d94.NginxMailingListEnglish@forum.nginx.org> References: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> <1cb49b81688672d25afe583a48ab9b59.NginxMailingListEnglish@forum.nginx.org> <096705e04c1bd94cb0c426e9fdb2002b.NginxMailingListEnglish@forum.nginx.org> <7aa3999e6d222d4d161afb67dcfb8d94.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9cc9c7e6299f91f1bf488bbc3355babf.NginxMailingListEnglish@forum.nginx.org> The error says "directory index of "/home/sqstat/public_html/sarg/" is forbidden" which means there are no html files there while it tries to display the contents which is only allowed when you add autoindex on; see that link I posted. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247032#msg-247032 From nginx-forum at nginx.us Thu Jan 30 20:34:16 2014 From: nginx-forum at nginx.us (cracker1985) Date: Thu, 30 Jan 2014 15:34:16 -0500 Subject: Nginx 403 forbidden In-Reply-To: <9cc9c7e6299f91f1bf488bbc3355babf.NginxMailingListEnglish@forum.nginx.org> References: <6cca3a80916baa85e8e6fb9136537804.NginxMailingListEnglish@forum.nginx.org> <1cb49b81688672d25afe583a48ab9b59.NginxMailingListEnglish@forum.nginx.org> <096705e04c1bd94cb0c426e9fdb2002b.NginxMailingListEnglish@forum.nginx.org> <7aa3999e6d222d4d161afb67dcfb8d94.NginxMailingListEnglish@forum.nginx.org> <9cc9c7e6299f91f1bf488bbc3355babf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <0435d5f64aae6ef494ae110356871138.NginxMailingListEnglish@forum.nginx.org> Thank you . Fixed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247009,247033#msg-247033 From mdounin at mdounin.ru Fri Jan 31 01:43:40 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 31 Jan 2014 05:43:40 +0400 Subject: How fastcgi temp works? In-Reply-To: <8e64dd82966dcc6e5ebf4cb509a746c4.NginxMailingListEnglish@forum.nginx.org> References: <8e64dd82966dcc6e5ebf4cb509a746c4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140131014339.GG1835@mdounin.ru> Hello! On Thu, Jan 30, 2014 at 12:44:25PM -0500, portoist wrote: > Hello everybody, > We are using nginx with php-fpm. Our application works as download > aggregator. We download files from 3rd party servers and send it to client > while it's still being downloaded. It means that we can't use > X-Accel-Redirect, since file isn't fully downloaded and X-Accel-Redirect > sets Content-Length to actual file size. We set required headers in php and > in cycle read chunks of file, prints it and flush it to output. Everything > seems to work fine except large files (over 1GB) on slow download speeds. > Downloading was interrupted at 1GB - it seemed like client has disconnected > - calling connection_aborted in php returned true. I have found out that > fastcgi_max_temp_file_size is 1GB by default - so I have increased it to > 2GB. > Now how exactly does fastcgi temp file works? Does every request has its own > temp file? Or is there one common temp file? > Thanks for hints! Some hints can be found here: http://nginx.org/r/fastcgi_buffering Downloading is _not_ interrupted by nginx after reaching 1GB, but it will stop loading further data from a backend till already buffered data are sent. This may take a while if client connection is slow, leading to timeouts on backend's side. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri Jan 31 07:40:37 2014 From: nginx-forum at nginx.us (portoist) Date: Fri, 31 Jan 2014 02:40:37 -0500 Subject: How fastcgi temp works? In-Reply-To: <20140131014339.GG1835@mdounin.ru> References: <20140131014339.GG1835@mdounin.ru> Message-ID: <5ada4f44d41b5336ad9d1b877793409c.NginxMailingListEnglish@forum.nginx.org> Thank you for your answer. I will try how it will work with X-Accel-Buffering: no in headers. It is strange. From backend it seems like client disconnects. Calling php connection_aborted returns true. I am downloading file around 1.5GB with speed around 250KB/s. I have downloaded 600MB and I can see in my log from php, that I have disconnected. Also I know that file from 3rd party server is fully donwloaded - so there are data to send and that script uploaded 1GB. But my download is still running. When I download 1GB downloading stops. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247019,247046#msg-247046 From nginx-forum at nginx.us Fri Jan 31 20:35:33 2014 From: nginx-forum at nginx.us (Larry) Date: Fri, 31 Jan 2014 15:35:33 -0500 Subject: nginx -> Dns server ? Message-ID: <3d3d98b1ab254094cb2a753909110440.NginxMailingListEnglish@forum.nginx.org> Hello, I just read that nginx has a resolver. Will it be able to replace our powerdns which just enables the basics ds stuffs ? (lookup + ttl as usual) I rather prefer the nginx syntax and it would simplify the stack if we could throw out powerdns (which is good, that is not the question). Any clue/experience would be welcome :) Thanks, Larry Posted at Nginx Forum: http://forum.nginx.org/read.php?2,247062,247062#msg-247062 From bryanlharris at me.com Fri Jan 31 21:51:42 2014 From: bryanlharris at me.com (Bryan Harris) Date: Fri, 31 Jan 2014 15:51:42 -0600 Subject: nginx -> Dns server ? In-Reply-To: <3d3d98b1ab254094cb2a753909110440.NginxMailingListEnglish@forum.nginx.org> References: <3d3d98b1ab254094cb2a753909110440.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Jan 31, 2014, at 2:35 PM, Larry wrote: > Hello, > > I just read that nginx has a resolver. > > Will it be able to replace our powerdns which just enables the basics ds > stuffs ? (lookup + ttl as usual) > > I rather prefer the nginx syntax and it would simplify the stack if we could > throw out powerdns (which is good, that is not the question). > > Any clue/experience would be welcome :) I?m new to nginx but a quick google search shows me configuration options for how to resolve, rather than an implementation of a DNS server (I assume that?s what power dns is). http://wiki.nginx.org/HttpCoreModule#resolver http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver V/r, Bryan