From mdounin at mdounin.ru Mon Dec 1 12:53:32 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 1 Dec 2014 15:53:32 +0300
Subject: $time_iso8601 is not set for invalid HTTP?
In-Reply-To: <823cfd7a5b137e23caba440b94cbc88d.NginxMailingListEnglish@forum.nginx.org>
References: <823cfd7a5b137e23caba440b94cbc88d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20141201125332.GA24053@mdounin.ru>

Hello!

On Sat, Nov 29, 2014 at 03:48:24PM -0500, igorb wrote:

> I use the if trick to get timestamped log names:
>
>     if ($time_iso8601 ~ "^(\d{4})-(\d{2})") {
>         set $year $1;
>         set $month $2;
>     }
>
>     access_log .../access-$year-$month.log combined;
>
> However, with nginx/1.4.6 (Ubuntu) I see that for invalid HTTP requests
> $year and $month are not set. For example, with the above config, after
> issuing from a shell:
>
>     ~> printf foo | nc host 80
>
> I see in the nginx error log:
>
>     2014/11/29 21:28:57 [warn] 8#0: *18 using uninitialized "year" variable
>     while logging request, client: 172.17.42.1, server: localhost, request:
>     "foo"
>     2014/11/29 21:28:57 [warn] 8#0: *18 using uninitialized "month" variable
>     while logging request, client: 172.17.42.1, server: localhost, request:
>     "foo"
>
> That leads to a log file named .access--.log
>
> Is it because $time_iso8601 is only set for valid requests? If so, is this
> just a bug or a feature?

The rewrite module directives are not executed unless actual request handling happens, so this is the expected result. Use map instead:

    map $time_iso8601 $year_and_month {
        "~^(?<temp>\d{4}-\d{2})" $temp;
    }

Or, better, avoid using such "timestamped log names" at all, as this approach implies unneeded overhead from opening/closing files for each request. It is much more efficient to use normal log rotation instead, see here:

http://nginx.org/en/docs/control.html#logs

Most OSes have newsyslog/logrotate utilities readily available to do this, and they are trivial to configure.
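Spelled out, the map-based setup could look like this (the http-level placement, the `default` fallback value, and the log path are illustrative additions, not from the original message):

```nginx
http {
    # map is evaluated lazily, at log-writing time, so the variable is
    # available even for requests that never reach the rewrite phase
    # (e.g. malformed HTTP that triggers the warnings above)
    map $time_iso8601 $year_and_month {
        # named capture: "2014-12-01T12:00:00+03:00" -> "2014-12"
        "~^(?<temp>\d{4}-\d{2})" $temp;
        # fallback for anything that does not match the pattern
        default "unknown";
    }

    server {
        access_log /var/log/nginx/access-$year_and_month.log combined;
    }
}
```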
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Dec 1 13:23:53 2014 From: nginx-forum at nginx.us (igorb) Date: Mon, 01 Dec 2014 08:23:53 -0500 Subject: $time_iso8601 is not set for invalid HTTP? In-Reply-To: <20141201125332.GA24053@mdounin.ru> References: <20141201125332.GA24053@mdounin.ru> Message-ID: Maxim Dounin wrote: > Use map instead: Thanks, map works nicely :) > avoid using such "timestamped log names" at all as this approach implies unneeded overhead on opening/closing files for each request. I use open_log_file_cache to mitigate this. Are there still problems with that? I would prefer to keep things simple and avoid explicit log rotation. The latter requires in my case more setup over a simple Docker container with just nginx and extra log processing to extract monthly logs to feed those to webalizer. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255226,255239#msg-255239 From mdounin at mdounin.ru Mon Dec 1 15:55:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 1 Dec 2014 18:55:12 +0300 Subject: $time_iso8601 is not set for invalid HTTP? In-Reply-To: References: <20141201125332.GA24053@mdounin.ru> Message-ID: <20141201155512.GG24053@mdounin.ru> Hello! On Mon, Dec 01, 2014 at 08:23:53AM -0500, igorb wrote: > > avoid using such "timestamped log names" at all as this approach implies > unneeded overhead on opening/closing files for each request. > > I use open_log_file_cache to mitigate this. Are there still problems with > that? I would prefer to keep things simple and avoid explicit log rotation. While open_log_file_cache mitigates most of the overhead involved, this still implies constructing a log file name for each request, looking up this name in the cache and so on. It's still better to use explicitly specified log file names. 
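For reference, a sketch of the mitigation being discussed (the specific values here are illustrative, not from the thread):

```nginx
# cache descriptors of frequently used log files so each request does
# not reopen the file; the log file name is still constructed and looked
# up in this cache per request, which is the residual overhead described
open_log_file_cache max=1000 inactive=20s valid=1m min_uses=2;

access_log /var/log/nginx/access-$year_and_month.log combined;
```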
-- Maxim Dounin
http://nginx.org/

From list_nginx at bluerosetech.com Mon Dec 1 16:24:28 2014
From: list_nginx at bluerosetech.com (Darren Pilgrim)
Date: Mon, 01 Dec 2014 08:24:28 -0800
Subject: $time_iso8601 is not set for invalid HTTP?
In-Reply-To:
References: <20141201125332.GA24053@mdounin.ru>
Message-ID: <547C963C.50305@bluerosetech.com>

On 12/1/2014 5:23 AM, igorb wrote:
> Maxim Dounin wrote:
>
>> Use map instead:
>
> Thanks, map works nicely :)
>
>> avoid using such "timestamped log names" at all as this approach implies
>> unneeded overhead on opening/closing files for each request.
>
> I use open_log_file_cache to mitigate this. Are there still problems with
> that? I would prefer to keep things simple and avoid explicit log rotation.
> The latter requires in my case more setup over a simple Docker container
> with just nginx and extra log processing to extract monthly logs to feed
> those to webalizer.

If you're using Docker, use the Docker log collector, use the syslog patch for nginx and have it log externally, or create a volume for the logs and a helper container in which you run the log tools. It really should not be nginx's or its container's job to manage/rotate logs.

From wangsamp at gmail.com Mon Dec 1 17:40:48 2014
From: wangsamp at gmail.com (Oleksandr V. Typlyns'kyi)
Date: Mon, 1 Dec 2014 19:40:48 +0200 (EET)
Subject: $time_iso8601 is not set for invalid HTTP?
In-Reply-To: <547C963C.50305@bluerosetech.com>
References: <20141201125332.GA24053@mdounin.ru> <547C963C.50305@bluerosetech.com>
Message-ID:

Today Dec 1, 2014 at 08:24 Darren Pilgrim wrote:

> If you're using Docker, use the Docker log collector, use the syslog patch for
> nginx and have it log externally, or create a volume for the logs and a helper
> container in which you run the log tools. It really should not be nginx's or
> its container's job to manage/rotate logs.

You don't need any patch for syslog.
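For example, the stock syslog support can be configured like this (the collector address and tag are placeholders):

```nginx
# send access log lines to a syslog collector over UDP;
# built into stock nginx since 1.7.1, no patch required
access_log syslog:server=10.0.0.5:514,facility=local7,tag=nginx,severity=info combined;
error_log  syslog:server=10.0.0.5:514 warn;
```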
http://nginx.org/en/docs/syslog.html:
Logging to syslog is available since version 1.7.1.

-- WNGS-RIPE

From nginx-forum at nginx.us Mon Dec 1 23:10:05 2014
From: nginx-forum at nginx.us (ibn)
Date: Mon, 01 Dec 2014 18:10:05 -0500
Subject: what is wrong live stream http conf
In-Reply-To:
References:
Message-ID: <4e46cbab3d0af0e88dcbf15fdfc22005.NginxMailingListEnglish@forum.nginx.org>

Hey Baroc, I know it's been two years now, but maybe you found a solution for this problem?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,232525,255248#msg-255248

From nginx-forum at nginx.us Tue Dec 2 03:09:14 2014
From: nginx-forum at nginx.us (chase.holland)
Date: Mon, 01 Dec 2014 22:09:14 -0500
Subject: NGINX module reporting 0 time to access.log
Message-ID: <86e6c513353460f5de6715dd1b2bac6b.NginxMailingListEnglish@forum.nginx.org>

Hey all,

We've written a module to connect NGINX / OpenResty with ZMQ endpoints (https://github.com/neloe/ngx_zmq), but all requests through the module report a 0.000 time back to access.log, making it impossible to determine which of our subsystems is the problem area in terms of latency. Is there a flag or callback somewhere that needs to be implemented in order to report this time properly?

Semi-related to the issue above... We're seeing an odd behaviour where we get idle CPUs but 2s-120s oscillating latencies. Flame graphs reveal we're spending most of our time in nginx's http handler and epoll waiting, but I'm not sure what else I expect them to say.

Any help or advice is greatly appreciated. Thanks!
- Chase Holland

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255254,255254#msg-255254

From jan.reges at siteone.cz Tue Dec 2 12:03:46 2014
From: jan.reges at siteone.cz (Jan Regeš)
Date: Tue, 2 Dec 2014 12:03:46 +0000
Subject: How to setup Nginx as REALLY static-cache reverse proxy
Message-ID:

Hi,

I love Nginx, but I have a specific problem: the Nginx cache also depends on some browser-specific factors.
In one project, we need Nginx to work as a "static webpage mirror" for occasional outages or scheduled downtimes of the primary server. 99% of visitors just browse this website and only 1% work with it actively (fill in forms, etc.), so a static mirror is an important feature for us. Cookies can be totally ignored.

Setup:
- For example, domain "domain.com"
- Production public IP: 1.2.3.4
- Primary production server LAN IP (behind NAT): 10.0.0.1 (HTTP, only Apache, without Nginx)
- Secondary server with Nginx, LAN IP (behind NAT): 10.0.0.2 (set up as a reverse proxy for 10.0.0.1 with the Nginx cache configured)

Normal situation:
- The public IP is NAT-ed to 10.0.0.1
- On the secondary server there is a hosts record "10.0.0.2 domain.com www.domain.com"
- On the secondary server there is a crawler job which every day crawls the whole domain.com, including images, styles, etc.
- On the secondary server, Nginx is configured to cache all requests for 48 hours and ignore all cache-control headers from the primary server

Primary server outage (expected state):
- On the router, NAT for 1.2.3.4 is changed from the primary server IP 10.0.0.1 to the secondary server 10.0.0.2
- The secondary server properly handles all GET/HEAD requests from its static cache (and in this situation is, for GET/HEAD, fully independent of primary server accessibility)

What is the problem? Nginx caching depends on some other factors, and "proxy_cache_key" is not as unique an ID as I expected :) After I crawl this website with Google Chrome, the cache on the secondary server works great for Google Chrome (all requests are in HIT state). But when I access the same domain and same URL from another browser (iOS, Safari, Firefox, IE, Opera, Wget, Curl, etc.), the Nginx cache logs "MISS" for these requests and tries to load the URL content from the primary server (which is down, so it doesn't work). So this static website works only partially, and only for browsers close to the browser/crawler that crawled the website to load it into the nginx cache.
I found that one of these factors is the "Vary" header, and after ignoring this header it works better. But there are still some other factors/headers. Could you help me with it? :) I need to set up Nginx to be independent of browser headers and write/load cache entries based only on the unique URL and request method. I know there are factors like browser capabilities for handling content encoding, etc., and Nginx needs to handle them properly. I just need to bring the best efficiency of this solution to our client. For example, it's OK to have this static cache working without gzipping.

Thank you for your help!

Jan

---
Below is my nginx configuration:

    proxy_cache_key "$scheme$request_method$host$request_uri";
    proxy_cache_min_uses 1;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    proxy_cache_revalidate off;
    proxy_http_version 1.1;
    proxy_next_upstream off;
    proxy_cache_lock on;
    proxy_cache_path /var/lib/nginx/tmp/cache/domain.com levels=1:2 keys_zone=domain_com:32m max_size=15G inactive=2880m loader_files=500 loader_threshold=500;

    server {
        listen 10.0.0.2:80;
        access_log /var/log/nginx/domain.com.access.log main buffer=64k;
        error_log /var/log/nginx/domain.com.error.log warn;
        root /usr/share/nginx/html;
        server_name www.domain.com domain.com;

        if ($request_method !~ ^(GET|HEAD)$ ) {
            return 503;
        }

        location / {
            proxy_cache domain_com;
            proxy_pass http://10.0.0.1:80;
            proxy_connect_timeout 3s;
            proxy_read_timeout 3s;
            proxy_send_timeout 3s;
            proxy_cache_valid any 2880m;
            proxy_ignore_headers Set-Cookie X-Accel-Expires Expires Cache-Control Vary;
            proxy_hide_header "Cache-Control";
            proxy_hide_header "Set-Cookie";
            proxy_hide_header "Vary";
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header HTTP_REMOTE_ADDR $remote_addr;
            proxy_set_header REMOTE_ADDR $remote_addr;
            proxy_set_header Accept-Encoding "";  # Deny compression in Apache
            add_header X-Proxy-Cache
$upstream_cache_status; } } -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Dec 2 13:14:39 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 02 Dec 2014 16:14:39 +0300 Subject: NGINX module reporting 0 time to access.log In-Reply-To: <86e6c513353460f5de6715dd1b2bac6b.NginxMailingListEnglish@forum.nginx.org> References: <86e6c513353460f5de6715dd1b2bac6b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2401337.STXKSNoRgE@vbart-workstation> On Monday 01 December 2014 22:09:14 chase.holland wrote: > Hey all, > > We've written a module to connect NGINX / OpenResty with ZMQ endpoints > (https://github.com/neloe/ngx_zmq), but all requests through the module > report a 0.000 time back to access.log, making it impossible to determine > which of our subsystems is the problem area in terms of latency. Is there a > flag or callback somewhere that needs to be implemented in order to report > this time properly? [..] The whole code is one big problem. It just blocks a worker process on network IO and endless loops. You should never write code this way in nginx. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Dec 2 13:49:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 2 Dec 2014 16:49:32 +0300 Subject: nginx-1.7.8 Message-ID: <20141202134931.GT24053@mdounin.ru> Changes with nginx 1.7.8 02 Dec 2014 *) Change: now the "If-Modified-Since", "If-Range", etc. client request header lines are passed to a backend while caching if nginx knows in advance that the response will not be cached (e.g., when using proxy_cache_min_uses). *) Change: now after proxy_cache_lock_timeout nginx sends a request to a backend with caching disabled; the new directives "proxy_cache_lock_age", "fastcgi_cache_lock_age", "scgi_cache_lock_age", and "uwsgi_cache_lock_age" specify a time after which the lock will be released and another attempt to cache a response will be made. 
*) Change: the "log_format" directive can now be used only at http level. *) Feature: the "proxy_ssl_certificate", "proxy_ssl_certificate_key", "proxy_ssl_password_file", "uwsgi_ssl_certificate", "uwsgi_ssl_certificate_key", and "uwsgi_ssl_password_file" directives. Thanks to Piotr Sikora. *) Feature: it is now possible to switch to a named location using "X-Accel-Redirect". Thanks to Toshikuni Fukaya. *) Feature: now the "tcp_nodelay" directive works with SPDY connections. *) Feature: new directives in vim syntax highliting scripts. Thanks to Peter Wu. *) Bugfix: nginx ignored the "s-maxage" value in the "Cache-Control" backend response header line. Thanks to Piotr Sikora. *) Bugfix: in the ngx_http_spdy_module. Thanks to Piotr Sikora. *) Bugfix: in the "ssl_password_file" directive when using OpenSSL 0.9.8zc, 1.0.0o, 1.0.1j. *) Bugfix: alerts "header already sent" appeared in logs if the "post_action" directive was used; the bug had appeared in 1.5.4. *) Bugfix: alerts "the http output chain is empty" might appear in logs if the "postpone_output 0" directive was used with SSI includes. *) Bugfix: in the "proxy_cache_lock" directive with SSI subrequests. Thanks to Yichun Zhang. -- Maxim Dounin http://nginx.org/en/donation.html From kworthington at gmail.com Tue Dec 2 14:19:58 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 2 Dec 2014 09:19:58 -0500 Subject: [nginx-announce] nginx-1.7.8 In-Reply-To: <20141202134937.GU24053@mdounin.ru> References: <20141202134937.GU24053@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.8 for Windows http://goo.gl/lTlKVG (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Dec 2, 2014 at 8:49 AM, Maxim Dounin wrote: > Changes with nginx 1.7.8 02 Dec > 2014 > > *) Change: now the "If-Modified-Since", "If-Range", etc. client request > header lines are passed to a backend while caching if nginx knows in > advance that the response will not be cached (e.g., when using > proxy_cache_min_uses). > > *) Change: now after proxy_cache_lock_timeout nginx sends a request to > a > backend with caching disabled; the new directives > "proxy_cache_lock_age", "fastcgi_cache_lock_age", > "scgi_cache_lock_age", and "uwsgi_cache_lock_age" specify a time > after which the lock will be released and another attempt to cache a > response will be made. > > *) Change: the "log_format" directive can now be used only at http > level. > > *) Feature: the "proxy_ssl_certificate", "proxy_ssl_certificate_key", > "proxy_ssl_password_file", "uwsgi_ssl_certificate", > "uwsgi_ssl_certificate_key", and "uwsgi_ssl_password_file" > directives. > Thanks to Piotr Sikora. > > *) Feature: it is now possible to switch to a named location using > "X-Accel-Redirect". > Thanks to Toshikuni Fukaya. > > *) Feature: now the "tcp_nodelay" directive works with SPDY > connections. > > *) Feature: new directives in vim syntax highliting scripts. > Thanks to Peter Wu. > > *) Bugfix: nginx ignored the "s-maxage" value in the "Cache-Control" > backend response header line. > Thanks to Piotr Sikora. > > *) Bugfix: in the ngx_http_spdy_module. > Thanks to Piotr Sikora. > > *) Bugfix: in the "ssl_password_file" directive when using OpenSSL > 0.9.8zc, 1.0.0o, 1.0.1j. 
> > *) Bugfix: alerts "header already sent" appeared in logs if the > "post_action" directive was used; the bug had appeared in 1.5.4. > > *) Bugfix: alerts "the http output chain is empty" might appear in logs > if the "postpone_output 0" directive was used with SSI includes. > > *) Bugfix: in the "proxy_cache_lock" directive with SSI subrequests. > Thanks to Yichun Zhang. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From chetan at codigami.com Tue Dec 2 16:43:24 2014 From: chetan at codigami.com (Chetan Dhembre) Date: Tue, 2 Dec 2014 22:13:24 +0530 Subject: Buffering image in nginx memory server Message-ID: I have use case where my multi image upload api should be authenticated (from authentication server )before sending request to concern backend server. But during authentication images remains in memory of nginx server , in this setup I am seeing my latency of nginx fluctuates too much. How can I avoid this ? What is optimized way to handle this kind of situation ? - Chetan -------------- next part -------------- An HTML attachment was scrubbed... URL: From someukdeveloper at gmail.com Tue Dec 2 17:11:58 2014 From: someukdeveloper at gmail.com (Some Developer) Date: Tue, 02 Dec 2014 17:11:58 +0000 Subject: What HTTP headers does nginx send to SCGI and FastCGI application servers? Message-ID: <547DF2DE.4070701@googlemail.com> I've looked through the nginx source code and couldn't find a specific list of HTTP headers that nginx passes through to SCGI and FastCGI application servers. I'm writing a simple SCGI and FastCGI application server in C++ and need to know what set of options are available to use. 
Unfortunately I'm not in a position to run a debugger just yet as the library code is in the process of being created and I haven't got a daemon written yet to listen for connections. Is there a list somewhere? I've looked through the CGI 1.1 specification and it mentions several headers that are sent but I'm pretty sure that nginx sends other headers as well. From r1ch+nginx at teamliquid.net Tue Dec 2 18:15:52 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 2 Dec 2014 19:15:52 +0100 Subject: What HTTP headers does nginx send to SCGI and FastCGI application servers? In-Reply-To: <547DF2DE.4070701@googlemail.com> References: <547DF2DE.4070701@googlemail.com> Message-ID: Rather than the source, you should check the docs :) http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#parameters On Tue, Dec 2, 2014 at 6:11 PM, Some Developer wrote: > I've looked through the nginx source code and couldn't find a specific > list of HTTP headers that nginx passes through to SCGI and FastCGI > application servers. > > I'm writing a simple SCGI and FastCGI application server in C++ and need > to know what set of options are available to use. > > Unfortunately I'm not in a position to run a debugger just yet as the > library code is in the process of being created and I haven't got a daemon > written yet to listen for connections. > > Is there a list somewhere? I've looked through the CGI 1.1 specification > and it mentions several headers that are sent but I'm pretty sure that > nginx sends other headers as well. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From someukdeveloper at gmail.com Tue Dec 2 20:41:36 2014 From: someukdeveloper at gmail.com (Some Developer) Date: Tue, 02 Dec 2014 20:41:36 +0000 Subject: What HTTP headers does nginx send to SCGI and FastCGI application servers? In-Reply-To: References: <547DF2DE.4070701@googlemail.com> Message-ID: <547E2400.1000206@googlemail.com> On 02/12/14 18:15, Richard Stanway wrote: > Rather than the source, you should check the docs :) > > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html#parameters Not entirely what I meant but thanks for the link. I guess I'll just write my daemon and take a dump of all the headers manually to see exactly what to expect. At least then I can see how the library will perform under real world conditions. From nginx-forum at nginx.us Tue Dec 2 20:57:44 2014 From: nginx-forum at nginx.us (chase.holland) Date: Tue, 02 Dec 2014 15:57:44 -0500 Subject: NGINX module reporting 0 time to access.log In-Reply-To: <2401337.STXKSNoRgE@vbart-workstation> References: <2401337.STXKSNoRgE@vbart-workstation> Message-ID: <4b22c19b9f85ed547a151589bdca9fa2.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Monday 01 December 2014 22:09:14 chase.holland wrote: > > Hey all, > > > > We've written a module to connect NGINX / OpenResty with ZMQ > endpoints > > (https://github.com/neloe/ngx_zmq), but all requests through the > module > > report a 0.000 time back to access.log, making it impossible to > determine > > which of our subsystems is the problem area in terms of latency. Is > there a > > flag or callback somewhere that needs to be implemented in order to > report > > this time properly? > [..] > > The whole code is one big problem. It just blocks a worker process on > network > IO and endless loops. You should never write code this way in nginx. > > wbr, Valentin V. 
Bartenev
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

Hi Valentin,

Thank you for your quick response! Could you be more specific about what is blocking the worker process? Does an instance of an nginx module get only one context per process (thus we need to do our own threading), or is each instance of the module thread-based to match nginx's threads?

Thanks!
- Chase Holland

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255254,255276#msg-255276

From cole.putnamhill at comcast.net Tue Dec 2 21:35:44 2014
From: cole.putnamhill at comcast.net (Cole Tierney)
Date: Tue, 2 Dec 2014 16:35:44 -0500
Subject: What HTTP headers does nginx send to SCGI and FastCGI application servers?
In-Reply-To:
References:
Message-ID:

On Tue, 02 Dec 2014 20:41:36 +0000, Some Developer wrote:

> I guess I'll just write my daemon and take a dump of all the headers
> manually to see exactly what to expect. At least then I can see how the
> library will perform under real world conditions.

You could try something similar with netcat. The following will listen on port 8111 and print what it receives:

    nc -l 8111

-- Cole

From agentzh at gmail.com Tue Dec 2 22:16:47 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 2 Dec 2014 14:16:47 -0800
Subject: NGINX module reporting 0 time to access.log
In-Reply-To: <4b22c19b9f85ed547a151589bdca9fa2.NginxMailingListEnglish@forum.nginx.org>
References: <2401337.STXKSNoRgE@vbart-workstation> <4b22c19b9f85ed547a151589bdca9fa2.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello!

On Tue, Dec 2, 2014 at 12:57 PM, chase.holland wrote:

> Thank you for your quick response! Could you be more specific on what is
> blocking the worker process?
Just FYI: you can always find the blocking IO calls via tools like the off-CPU flame graphs: https://github.com/openresty/nginx-systemtap-toolkit#sample-bt-off-cpu > Does an instance of an nginx module only get > one context per process (thus we need to do our own threading), or is each > instance of the module thread-based to match nginx's threads? > Nginx worker processes are single-threaded. For any I/O operations, you should use nonblocking IO mode and solely rely on the nginx event loop to dispatch any IO events for your fd's. Otherwise your nginx will just degrade to the Apache's prefork mpm. (BTW, use of your own OS threads to work around such things will just degrade your nginx's performance to Apache's worker mpm.) Regards, -agentzh From piotr.sikora at frickle.com Wed Dec 3 08:12:15 2014 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Wed, 3 Dec 2014 09:12:15 +0100 Subject: [ANNOUNCE] ngx_cache_purge-2.2 Message-ID: Version 2.2 is now available at: http://labs.frickle.com/nginx_ngx_cache_purge/ GitHub repository is available at: https://github.com/FRiCKLE/ngx_cache_purge/ Changes: 2014-12-02 VERSION 2.2 * Fix compatibility with nginx-1.7.8+. 2014-05-19 * Fix build on Solaris with SunCC (Solaris Studio). Reported by Jussi Sallinen. Best regards, Piotr Sikora From someukdeveloper at gmail.com Wed Dec 3 13:51:25 2014 From: someukdeveloper at gmail.com (Some Developer) Date: Wed, 03 Dec 2014 13:51:25 +0000 Subject: What HTTP headers does nginx send to SCGI and FastCGI application servers? In-Reply-To: References: Message-ID: <547F155D.3000108@googlemail.com> On 02/12/14 21:35, Cole Tierney wrote: > On Tue, 02 Dec 2014 20:41:36 +0000, Some Developer wrote: >> I guess I'll just write my daemon and take a dump of all the headers >> manually to see exactly what to expect. At least then I can see how the >> library will perform under real world conditions. > > You could try something similar with netcat. 
> The following will listen on port 8111 and print what it receives:
>
>     nc -l 8111
>
> --
> Cole

Cool! Didn't know about that. Thanks for the tip.

From nginx-forum at nginx.us Wed Dec 3 15:47:01 2014
From: nginx-forum at nginx.us (erankor2)
Date: Wed, 03 Dec 2014 10:47:01 -0500
Subject: Custom compilation flag for an nginx module
Message-ID: <8396a20bfd2b5798bbe65b0490c57e61.NginxMailingListEnglish@forum.nginx.org>

Hi all,

Is it possible for an nginx module to define custom compilation switches that add external libs / preprocessor macros? Is there some example of a module that does it?

Specifically, what I'm trying to do is measure time accurately in my module for benchmarking purposes. Since I use Linux, I was planning to use clock_gettime(CLOCK_MONOTONIC) for that. This function is Linux-specific, and so I would like to make it optional at compilation time:
* if nginx is compiled with --with-clock-gettime (or something like that), I will use clock_gettime and link against the required library (librt)
* otherwise, I will fall back to using gettimeofday or drop the feature altogether

Thanks, Eran

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255292,255292#msg-255292

From mdounin at mdounin.ru Wed Dec 3 16:04:57 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 3 Dec 2014 19:04:57 +0300
Subject: Custom compilation flag for an nginx module
In-Reply-To: <8396a20bfd2b5798bbe65b0490c57e61.NginxMailingListEnglish@forum.nginx.org>
References: <8396a20bfd2b5798bbe65b0490c57e61.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20141203160457.GC24053@mdounin.ru>

Hello!

On Wed, Dec 03, 2014 at 10:47:01AM -0500, erankor2 wrote:

> Hi all,
>
> Is it possible for an nginx module to define custom compilation switches
> that add external libs / preprocessor macros ? Is there some example of a
> module that does it ?

No.

> Specifically, what I'm trying to do is measure time accurately in my module
> for benchmarking purposes.
> Since I use Linux, I was planning to use
> clock_gettime(CLOCK_MONOTONIC) for that. This function is Linux-specific,
> and so I would like to make it optional at compilation time:
> * if nginx is compiled with --with-clock-gettime (or something like that) I
> will use clock_gettime and link against the required library (librt)
> * otherwise, I will fall back to using gettimeofday or drop the feature
> altogether

It may be better to just detect whether the function is available on a system in your module config script, much like it's done for many other functions in the auto/unix script.

Note well that clock_gettime() isn't Linux-specific, it's in POSIX:

http://pubs.opengroup.org/onlinepubs/9699919799/functions/clock_gettime.html

The need for librt is Linux-specific, though. It can be easily tested as well, and there are a couple of feature tests in auto/unix which do such tests for other functions.

-- Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed Dec 3 16:28:28 2014
From: nginx-forum at nginx.us (erankor2)
Date: Wed, 03 Dec 2014 11:28:28 -0500
Subject: Custom compilation flag for an nginx module
In-Reply-To: <20141203160457.GC24053@mdounin.ru>
References: <20141203160457.GC24053@mdounin.ru>
Message-ID:

Thank you Maxim, your suggestion will definitely work for me.

Are you familiar with any simple "non-core" module that does it? I will need to test for the existence of this function or the need to explicitly add librt, update CORE_LIBS accordingly, and define some preprocessor macro that I can later #ifdef on in my code.

I googled this up: https://github.com/bigplum/nginx-tcp-lua-module/blob/master/config but don't really understand what is going on there :)

Thanks, Eran

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255292,255294#msg-255294

From cubicdaiya at gmail.com Wed Dec 3 17:45:41 2014
From: cubicdaiya at gmail.com (Tatsuhiko Kubo)
Date: Thu, 4 Dec 2014 02:45:41 +0900
Subject: nginx-build
Message-ID:

Hello!
As I developed a convenient build tool for nginx, I will introduce it.

https://github.com/cubicdaiya/nginx-build

nginx-build provides a command to build nginx seamlessly. If you are interested in it, please see the slide below.

https://speakerdeck.com/cubicdaiya/nginx-build

-- Tatsuhiko Kubo
E-Mail: cubicdaiya at gmail.com
Profile: http://cccis.jp/
Github: https://github.com/cubicdaiya

From mdounin at mdounin.ru Wed Dec 3 18:42:15 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 3 Dec 2014 21:42:15 +0300
Subject: Custom compilation flag for an nginx module
In-Reply-To:
References: <20141203160457.GC24053@mdounin.ru>
Message-ID: <20141203184215.GE24053@mdounin.ru>

Hello!

On Wed, Dec 03, 2014 at 11:28:28AM -0500, erankor2 wrote:

> Thank you Maxim, your suggestion will definitely work for me.
>
> Are you familiar with any simple "non-core" module that does it ? I will
> need to test for the existence of this function or the need to explicitly
> add librt, and update CORE_LIBS accordingly + define some preprocessor macro
> that I can later #ifdef on in my code.

The code as in the feature tests in auto/unix is more or less the same you'll need in your module config file, e.g.:

    ngx_feature="sched_yield()"
    ngx_feature_name="NGX_HAVE_SCHED_YIELD"
    ngx_feature_run=no
    ngx_feature_incs="#include <sched.h>"
    ngx_feature_path=
    ngx_feature_libs=
    ngx_feature_test="sched_yield()"
    . auto/feature

    if [ $ngx_found != yes ]; then
        ngx_feature="sched_yield() in librt"
        ngx_feature_libs="-lrt"
        . auto/feature

        if [ $ngx_found = yes ]; then
            CORE_LIBS="$CORE_LIBS -lrt"
        fi
    fi

Replacing sched_yield() here with clock_gettime() as appropriate should do the trick. Note well that the whole configure system is in shell, so it's easy to read and understand.
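Putting that snippet to use, a module's config file could look roughly like this (the module name ngx_http_mymodule and its source file are hypothetical placeholders; the ngx_feature_* variables and the ". auto/feature" mechanism are the real ones from nginx's build system):

```sh
# config file of a hypothetical third-party module
ngx_addon_name=ngx_http_mymodule

HTTP_MODULES="$HTTP_MODULES ngx_http_mymodule"
NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_mymodule.c"

# detect clock_gettime(); on success this defines NGX_HAVE_CLOCK_GETTIME
ngx_feature="clock_gettime()"
ngx_feature_name="NGX_HAVE_CLOCK_GETTIME"
ngx_feature_run=no
ngx_feature_incs="#include <time.h>"
ngx_feature_path=
ngx_feature_libs=
ngx_feature_test="struct timespec ts; clock_gettime(CLOCK_MONOTONIC, &ts)"
. auto/feature

if [ $ngx_found != yes ]; then
    # on older glibc the function lives in librt
    ngx_feature="clock_gettime() in librt"
    ngx_feature_libs="-lrt"
    . auto/feature

    if [ $ngx_found = yes ]; then
        CORE_LIBS="$CORE_LIBS -lrt"
    fi
fi
```

When the test succeeds, NGX_HAVE_CLOCK_GETTIME ends up in objs/ngx_auto_config.h, so the module's C code can wrap the call in #if (NGX_HAVE_CLOCK_GETTIME) and fall back to gettimeofday() otherwise.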
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 4 01:25:46 2014 From: nginx-forum at nginx.us (Fry-kun) Date: Wed, 03 Dec 2014 20:25:46 -0500 Subject: Need help setting up cache for failover Message-ID: <7272a3ff42c14ca5bd131e948cc7c60a.NginxMailingListEnglish@forum.nginx.org> I'm trying to configure my sites to fail over to fastcgi_cache when backends are unavailable -- but at the same time I want to return nginx errors (hiding backend errors). Here's a simplified version of my current config:

fastcgi_cache_path /dev/shm/nginx_fastcgi_cache levels=1:2 inactive=3d keys_zone=mycache:100m max_size=5000m;
fastcgi_cache_use_stale error http_500 http_503 timeout updating;
fastcgi_cache_valid 200 5m;
fastcgi_cache_valid 404 1m;
proxy_intercept_errors on;

server {
    server_name domain.com;
    root /var/www/domain.com;
    location / {
        try_files $uri @hhvm_backends;
    }
    location @hhvm_backends {
        fastcgi_pass backend-nodes; # upstream hhvm backends
        fastcgi_cache mycache;
        ...
    }
    error_page 404 @404;
    error_page 500 @500;
    location @404 { echo "404: file not found!"; }
    location @500 { return 500; } # default nginx error page
}

Right now, if the server is down and the location is stale in cache, I get the default nginx 500 error page. According to the debug log, the problem is that error_page handling takes over before fastcgi_cache_use_stale has a chance to do its thing. Is there an easy way to fix this? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255298,255298#msg-255298 From reallfqq-nginx at yahoo.fr Thu Dec 4 08:42:50 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 4 Dec 2014 09:42:50 +0100 Subject: Need help setting up cache for failover In-Reply-To: <7272a3ff42c14ca5bd131e948cc7c60a.NginxMailingListEnglish@forum.nginx.org> References: <7272a3ff42c14ca5bd131e948cc7c60a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Quick question; I see you are using proxy_intercept_errors. Shouldn't you be using fastcgi_intercept_errors?
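For reference, the fastcgi_ counterpart is enabled the same way. A sketch based on the location quoted above (backend-nodes is the poster's upstream name):

```nginx
location @hhvm_backends {
    fastcgi_pass backend-nodes;   # upstream hhvm backends
    fastcgi_cache mycache;
    # fastcgi_* directives pair with fastcgi_pass; proxy_intercept_errors
    # only affects proxy_pass and has no effect on FastCGI backends.
    fastcgi_intercept_errors on;
}
```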
--- *B. R.* On Thu, Dec 4, 2014 at 2:25 AM, Fry-kun wrote: > I'm trying to configure my sites to failover to fastcgi_cache when backends > are unavailable -- but at the same time I want to return nginx errors > (hiding backend errors) > > Here's a simplified version of my current config: > > fastcgi_cache_path /dev/shm/nginx_fastcgi_cache levels=1:2 inactive=3d > keys_zone=mycache:100m max_size=5000m; > fastcgi_cache_use_stale error http_500 http_503 timeout updating; > fastcgi_cache_valid 200 5m; > fastcgi_cache_valid 404 1m; > proxy_intercept_errors on; > server { > server_name domain.com > root /var/www/domain.com; > location / { > try_files $uri @hhvm_backends; > } > location @hhvm_backends { > fastcgi_pass backend-nodes; # upstream hhvm backends > fastcgi_cache mycache; > ... > } > error_page 404 @404; > error_page 500 @500; > location @404 { echo "404: file not found!"; } > location @500 { return 500; } # default nginx error page > } > > > Right now, if the server is down and location is stale in cache, I get the > default nginx 500 error page. > According to debug log, the problem with this one is that error_page > handling takes over before fastcgi_cache_use_stale has a chance to do its > thing. > > Is there an easy way to fix this? > > Thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255298,255298#msg-255298 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Thu Dec 4 14:08:41 2014 From: nginx-forum at nginx.us (erankor2) Date: Thu, 04 Dec 2014 09:08:41 -0500 Subject: Custom compilation flag for an nginx module In-Reply-To: <20141203184215.GE24053@mdounin.ru> References: <20141203184215.GE24053@mdounin.ru> Message-ID: Thanks Maxim, that's very helpful, it works great Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255292,255302#msg-255302 From nginx-forum at nginx.us Thu Dec 4 17:09:39 2014 From: nginx-forum at nginx.us (Fry-kun) Date: Thu, 04 Dec 2014 12:09:39 -0500 Subject: Need help setting up cache for failover In-Reply-To: References: Message-ID: <720894d5b8b95d0afb1f4fa6ff046814.NginxMailingListEnglish@forum.nginx.org> I am; that was a copy/paste error. The original config is around 3000 lines, it was easier to type & copy/paste than cleaning up unnecessary lines. ~Konstantin B.R. Wrote: ------------------------------------------------------- > Quick quesiton; I see you are using proxy_intercept_errors. Should not > you > be using fastcgi_intercept_errors > tercept_errors> > ? > --- > *B. R.* > > On Thu, Dec 4, 2014 at 2:25 AM, Fry-kun wrote: > > > I'm trying to configure my sites to failover to fastcgi_cache when > backends > > are unavailable -- but at the same time I want to return nginx > errors > > (hiding backend errors) > > > > Here's a simplified version of my current config: > > > > fastcgi_cache_path /dev/shm/nginx_fastcgi_cache levels=1:2 > inactive=3d > > keys_zone=mycache:100m max_size=5000m; > > fastcgi_cache_use_stale error http_500 http_503 timeout updating; > > fastcgi_cache_valid 200 5m; > > fastcgi_cache_valid 404 1m; > > proxy_intercept_errors on; > > server { > > server_name domain.com > > root /var/www/domain.com; > > location / { > > try_files $uri @hhvm_backends; > > } > > location @hhvm_backends { > > fastcgi_pass backend-nodes; # upstream hhvm backends > > fastcgi_cache mycache; > > ... 
> > } > > error_page 404 @404; > > error_page 500 @500; > > location @404 { echo "404: file not found!"; } > > location @500 { return 500; } # default nginx error page > > } > > > > > > Right now, if the server is down and location is stale in cache, I > get the > > default nginx 500 error page. > > According to debug log, the problem with this one is that error_page > > handling takes over before fastcgi_cache_use_stale has a chance to > do its > > thing. > > > > Is there an easy way to fix this? > > > > Thanks > > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,255298,255298#msg-255298 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255298,255307#msg-255307 From reallfqq-nginx at yahoo.fr Thu Dec 4 19:25:02 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 4 Dec 2014 20:25:02 +0100 Subject: Need help setting up cache for failover In-Reply-To: <720894d5b8b95d0afb1f4fa6ff046814.NginxMailingListEnglish@forum.nginx.org> References: <720894d5b8b95d0afb1f4fa6ff046814.NginxMailingListEnglish@forum.nginx.org> Message-ID: I notice error_page is used at server level while fastcgi_cache_use_stale is at http level. error_page thus has higher precedence than the latter. I would try putting both at the same level and see what happens. I trust error_page is some kind of 'last resort' feature, handling an error which is considered as such. Since fastcgi_cache_use_stale might trigger on errors coming from the backend, I would say it should filter them before they are considered by nginx as an error (and thus processed as such, as the *_intercept_errors directives do).
So the only reason I see error_page being triggered before fastcgi_cache_use_stale would be the higher precedence of the server environment over the http one. Test, test, test. :o) --- *B. R.* On Thu, Dec 4, 2014 at 6:09 PM, Fry-kun wrote: > I am; that was a copy/paste error. The original config is around 3000 > lines, > it was easier to type & copy/paste than cleaning up unnecessary lines. > > > ~Konstantin > > > B.R. Wrote: > ------------------------------------------------------- > > Quick quesiton; I see you are using proxy_intercept_errors. Should not > > you > > be using fastcgi_intercept_errors > > > tercept_errors> > > ? > > --- > > *B. R.* > > > > On Thu, Dec 4, 2014 at 2:25 AM, Fry-kun wrote: > > > > > I'm trying to configure my sites to failover to fastcgi_cache when > > backends > > > are unavailable -- but at the same time I want to return nginx > > errors > > > (hiding backend errors) > > > > > > Here's a simplified version of my current config: > > > > > > fastcgi_cache_path /dev/shm/nginx_fastcgi_cache levels=1:2 > > inactive=3d > > > keys_zone=mycache:100m max_size=5000m; > > > fastcgi_cache_use_stale error http_500 http_503 timeout updating; > > > fastcgi_cache_valid 200 5m; > > > fastcgi_cache_valid 404 1m; > > > proxy_intercept_errors on; > > > server { > > > server_name domain.com > > > root /var/www/domain.com; > > > location / { > > > try_files $uri @hhvm_backends; > > > } > > > location @hhvm_backends { > > > fastcgi_pass backend-nodes; # upstream hhvm backends > > > fastcgi_cache mycache; > > > ... > > > } > > > error_page 404 @404; > > > error_page 500 @500; > > > location @404 { echo "404: file not found!"; } > > > location @500 { return 500; } # default nginx error page > > > } > > > > > > > > > Right now, if the server is down and location is stale in cache, I > > get the > > > default nginx 500 error page. 
> > > According to debug log, the problem with this one is that error_page > > > handling takes over before fastcgi_cache_use_stale has a chance to > > do its > > > thing. > > > > > > Is there an easy way to fix this? > > > > > > Thanks > > > > > > Posted at Nginx Forum: > > > http://forum.nginx.org/read.php?2,255298,255298#msg-255298 > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255298,255307#msg-255307 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri Dec 5 08:58:12 2014 From: nginx-forum at nginx.us (bcx) Date: Fri, 05 Dec 2014 03:58:12 -0500 Subject: upstream prematurely closed connection while reading response header from upstream In-Reply-To: <54410D5E.7040407@gmail.com> References: <54410D5E.7040407@gmail.com> Message-ID: <3464ee6131b281b7f1e78ac6f6853f49.NginxMailingListEnglish@forum.nginx.org> Hi Jiri, I'm experiencing similar difficulties. What does your upstream server configuration look like? What did you do to fix your problem? In my tcpdumps I'm not seeing the 0 byte chunk that should be at the end of a request. My upstreams are running Apache2 2.2.22 (debian) and PHP 5.5.13. 
Greetings, Casper Posted at Nginx Forum: http://forum.nginx.org/read.php?2,254031,255318#msg-255318 From nginx-forum at nginx.us Sat Dec 6 23:49:44 2014 From: nginx-forum at nginx.us (MasterTH) Date: Sat, 06 Dec 2014 18:49:44 -0500 Subject: Proxy Cache-Setting Message-ID: Hi, I have a special proxy cache configuration to set up and I really don't know how to solve it. The situation is the following: we use an upstream proxy to be highly available with our project. The project is an API which uses GET and POST data to calculate something. The caching works nicely and smoothly, but now we'd like to exclude some special calls from the cache. I'll try to explain which calls we'd like to cache and which not. Let's say the base URL is http://api.domain.tld/calculate; everything behind that base will be sent to the API and put into the calculation process. What we'd like to cache is something like: http://api.domain.tld/calculate/%CUSTOMER_ID%/ (and everything that comes behind that URL). And these calls we don't want to cache: http://api.domain.tld/calculate/?calc=23+45 http://api.domain.tld/calculate?calc=23*45 I tested a lot with different location settings, but every time I add the location /calculation/ the POST requests get a 404 and nothing works anymore. If somebody has an opinion on how I can solve this problem, it would be really nice to hear.
thanks a lot masterth Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255357,255357#msg-255357 From rpaprocki at fearnothingproductions.net Sat Dec 6 23:52:33 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 06 Dec 2014 15:52:33 -0800 Subject: Proxy Cache-Setting In-Reply-To: References: Message-ID: <548396C1.7070102@fearnothingproductions.net> Hi, You probably want to look into the proxy_cache_bypass and proxy_no_cache directives: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_bypass http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_no_cache On 12/06/2014 03:49 PM, MasterTH wrote: > Hi, > > i got a special proxy cache configuration to do and i really don't know how > to solve it. > > the situation is the following. We use an upstream proxy to be high > availible with our project. The project is a api which uses get und > post-data to calculate something. > > the caching is working nice and smoothly, but now we'd like to cut out some > special calls from the cache. > > i'll try to explain which calls we'd like to cache and which not. > > lets say the base url is http://api.domain.tld/calculate > everthing that is behind that base will be send to the api and put into the > calculation process. > > What we'd like to cache is something like: > http://api.domain.tld/calculate/%CUSTOMER_ID%/ (and everything what comes > behind that url) > > And this calls we doesn't like to cache: > http://api.domain.tld/calculate/?calc=23+45 > http://api.domain.tld/calculate?calc=23*45 > > i tested really much, with different location settings, but everytime i add > the location > /calculation/ the POST Requests gets a 404 and nothing works anymore. > > If somebody has a oppinion how i can solve this problem, it would be really > nice to hear. 
> > > thanks a lot > masterth > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255357,255357#msg-255357 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From francis at daoine.org Sat Dec 6 23:58:32 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 6 Dec 2014 23:58:32 +0000 Subject: Proxy Cache-Setting In-Reply-To: References: Message-ID: <20141206235832.GD15670@daoine.org> On Sat, Dec 06, 2014 at 06:49:44PM -0500, MasterTH wrote: Hi there, > What we'd like to cache is something like: > http://api.domain.tld/calculate/%CUSTOMER_ID%/ (and everything what comes > behind that url) > > And this calls we doesn't like to cache: > http://api.domain.tld/calculate/?calc=23+45 > http://api.domain.tld/calculate?calc=23*45 It looks like a fairly straightforward three-location system, since you have three url types that you want handled differently. location = /calculate { proxy_pass...; } location = /calculate/ { proxy_pass...; } location /calculate/ { proxy_pass...; proxy_cache...; } I guess that if you already proxy_cache everything, set it to "off" in the two exact locations. f -- Francis Daly francis at daoine.org From agentzh at gmail.com Sun Dec 7 02:30:08 2014 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Sat, 6 Dec 2014 18:30:08 -0800 Subject: [ANN] OpenResty 1.7.7.1 released Message-ID: Hi folks! I am happy to announce the new formal release, 1.7.7.1, of the OpenResty bundle: http://openresty.org/#Download In this release, we include many important bug fixes in many components and some small new features as usual. One highlight of this release is that the Lua call "ngx.flush(true)" can now work with nginx core's "limit_rate" directive and $limit_rate variable. Special thanks go to all our contributors and users for making this happen! 
Below is the complete change log for this release, as compared to the last formal release (1.7.4.1):

* upgraded the Nginx core to 1.7.7.
  * see the changes here:
  * bugfix: applied a patch to the nginx core to fix the memory invalid reads when exceeding the pre-configured limits in an "ngx_hash_t" hash table.
  * bugfix: applied a patch to the nginx core to fix a memory invalid read regression introduced in nginx 1.7.5+'s resolver.
* ./configure: usage text: renamed "--with-luajit=PATH" to "--with-luajit=DIR". thanks Dominic for the suggestion.
* feature: ./configure: added the default prefix value to the usage text.
* upgraded LuaJIT to v2.1-20141128: https://github.com/openresty/luajit2/tags
  * imported Mike Pall's latest changes:
  * feature: FFI: added "ffi.typeinfo()". thanks to Peter Colberg.
  * bugfix: fixed snapshot #0 handling for traces with a stack check on entry. this bug might lead to bad register overwrites (and eventually segmentation faults in GC upon trace exits, at least).
  * bugfix: FFI: no meta fallback when indexing pointer to incomplete struct.
  * bugfix: fixed fused constant loads under high register pressure.
  * bugfix: fixed DragonFly build (unsupported). thanks to Robin Hahling, Alex Hornung, and Joris Giovannangeli.
  * bugfix: FFI: fixed initialization of unions of subtypes. thanks to Peter Colberg.
  * bugfix: FFI: Fix for cdata vs. non-cdata arithmetic and comparisons. thanks to Roman Tsisyk.
  * optimize: eliminated hmask guard for forwarded HREFK.
  * debugging: added an (expensive) assertion to check GC objects in current stack upon trace exiting. thanks Mike Pall. only enabled when building with "-DLUA_USE_ASSERT".
* upgraded the ngx_lua module to 0.9.13.
  * optimize: reduced the pool size of a fake connection from the default pool size (16KB) to 128B, affecting init_worker_by_lua and ngx.timer.at.
  * optimize: made fake requests share their connection pools, affecting init_worker_by_lua and ngx.timer.at.
  * feature: the error logger used by ngx.timer.at handlers now outputs the "client: xxx, server: xxx" context info for the original (true) request creating the timer.
  * feature: added nginx configuration file names and line numbers to the rewrite/access/content/log_by_lua directives' Lua chunk names in order to simplify debugging.
  * feature: ngx.flush(true) now returns the "timeout" and "client aborted" errors to the Lua land for the cases that writing to the client is timed out or the client closes the connection prematurely, respectively.
  * feature: ngx.flush(true) can now wait on delayed events due to nginx's limit_rate config directive or $limit_rate variable settings. thanks Shafreeck Sea for the original patch.
  * bugfix: ngx.flush(), ngx.eof(), and some other things did not update busy/free chains after calling the output filters.
  * bugfix: ngx_gzip/ngx_gunzip module filters might cause ngx.flush(true) to hang until timeout for nginx 1.7.7+ (and some other old versions of nginx). thanks Maxim Dounin for the help.
  * bugfix: ngx.get_phase() did not work in the context of init_worker_by_lua*.
  * bugfix: use of ngx.flush(true) with the limit_rate config directive or the $limit_rate variable may hang the request forever for a large volume of output data. thanks Shafreeck Sea for the report.
  * bugfix: compilation error when PCRE is disabled in the nginx build. thanks Ivan Cekov for the report.
  * bugfix: when syslog was enabled in the error_log directive for nginx 1.7.1+, use of init_worker_by_lua or ngx.timer.at() would lead to segmentation faults. thanks shun.zhang for the report.
  * bugfix: fixed compilation error with nginx 1.7.5+ because nginx 1.7.5+ changes the API in the events subsystem. thanks Charles R. Portwood II and Mathieu Le Marec for the report.
  * bugfix: ngx.req.raw_header(): buffer overflow and the "buffer error" exception might happen for massively pipelined downstream requests. thanks Dane Knecht for the report.
  * bugfix: ngx.req.raw_header(): we might change nginx's internal buffer pointers, which might cause bad side-effects.
  * doc: added a new section, Cosockets Not Available Everywhere, under the Known Issues section.
* upgraded the lua-resty-dns library to 0.14.
  * feature: added support for the SPF record type specified by RFC 4408. thanks Tom Fitzhenry for the patch.
* upgraded the lua-resty-lrucache library to 0.03.
  * feature: the get() method now also returns the stale value as the second returned value if available.
* upgraded the lua-resty-lock library to 0.04.
  * bugfix: the shared dictionary would incorrectly get unref'd multiple times when the lock() and/or unlock() methods are called more than once. thanks Peng Wu for the report and Dejiang Zhu for the patch.
* upgraded the ngx_echo module to 0.57.
  * bugfix: $echo_client_request_headers: buffer overflow and the "buffer error" exception might happen for massively pipelined downstream requests.
  * bugfix: $echo_client_request_headers: we might change nginx's internal buffer pointers, which might cause bad side-effects.
* upgraded the ngx_drizzle module to 0.1.8.
  * bugfix: fixed compilation error with nginx 1.7.5+ because nginx 1.7.5+ changes the API in the events subsystem.
* upgraded the ngx_postgres module to 1.0rc5.
  * bugfix: fixed compilation error with nginx 1.7.5+ because nginx 1.7.5+ changes the API in the events subsystem.
* upgraded the ngx_coolkit module to 0.2rc2.
  * bugfix: compilation failed when PCRE was disabled in the nginx build.
  * feature: added the "$location" variable, by Piotr Sikora.
* upgraded the ngx_set_misc module to 0.27.
  * bugfix: fixed build failure when "--with-mail_ssl_module" is specified while "--with-http_ssl_module" is not. thanks Xiaochen Wang for the report.

The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1007007 OpenResty (aka.
ngx_openresty) is a full-fledged web application server built by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org Enjoy! -agentzh From nginx-forum at nginx.us Mon Dec 8 23:12:29 2014 From: nginx-forum at nginx.us (MasterTH) Date: Mon, 08 Dec 2014 18:12:29 -0500 Subject: Proxy Cache-Setting In-Reply-To: <20141206235832.GD15670@daoine.org> References: <20141206235832.GD15670@daoine.org> Message-ID: <6d22d5efa2e40fbcc484c27ca9e2ab5e.NginxMailingListEnglish@forum.nginx.org> Hi, thanks a lot! It seems to work; we'll have to test. I'll report back. Kind regards, MasterTH Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255357,255379#msg-255379 From nginx-forum at nginx.us Tue Dec 9 22:06:10 2014 From: nginx-forum at nginx.us (krajeshrao) Date: Tue, 09 Dec 2014 17:06:10 -0500 Subject: Creating CNAME Message-ID: Hi guys, we are hosting providers. My question is: when I CNAME www in GoDaddy or in AWS, it does not get pointed to the site, but when I create a test CNAME like beta or anything else, it does get pointed. For example:
My website, for example, is events.nginx.com. In GoDaddy I'm creating a CNAME like www.news.com =>CNAME=> events.nginx.com; when I point it like this it does not work, but beta.news.com =>CNAME=> events.nginx.com works. Can anyone help me with this issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255387,255387#msg-255387 From lists at ruby-forum.com Tue Dec 9 22:17:55 2014 From: lists at ruby-forum.com (Jake Ives) Date: Tue, 09 Dec 2014 23:17:55 +0100 Subject: Nginx & php-fpm = blank pages error Message-ID: <40ba2a131ce5b44cb6fb1a8f844a9ec9@ruby-forum.com> Dear Community, I could really do with some support on this. My site has been working fine for days and I've just restarted the box and now only blank pages are displaying :| See more at: http://serverfault.com/questions/650395/nginx-php-fpm-permission-denied-while-reading-response-header-from-upstream Thanks in advance -- Posted via http://www.ruby-forum.com/. From r1ch+nginx at teamliquid.net Wed Dec 10 01:57:57 2014 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Wed, 10 Dec 2014 02:57:57 +0100 Subject: Creating CNAME In-Reply-To: References: Message-ID: You probably have a DNS caching issue; this is not related to nginx. Check your DNS TTL and wait that long before trying again. On Tue, Dec 9, 2014 at 11:06 PM, krajeshrao wrote: > Hi Guys , > > we are hosting providers , my doubt is when i cname www in godaddy or in > AWS > its not getting pointed to the site. But when i create a test cname like > beta or any other thing its getting pointed for eg.
> > My Website : events.nginx.com this is my website for example > > in godaddy i'm creating cname like www.news.com =>CNAME=>events.nginx.com > when i point like this its not working but > > beta.news.com =>CNAME=>events.nginx.com when i point , this works > > > Can anyone help me on this issue > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255387,255387#msg-255387 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 10 02:24:33 2014 From: nginx-forum at nginx.us (krajeshrao) Date: Tue, 09 Dec 2014 21:24:33 -0500 Subject: Creating CNAME In-Reply-To: References: Message-ID: <9eef7706d835a9ca9329cc1f9be52a6c.NginxMailingListEnglish@forum.nginx.org> Hi Richard, thanks for the reply. This issue has been going on for a month now, but I still can't resolve it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255387,255391#msg-255391 From nginx-forum at nginx.us Wed Dec 10 09:24:09 2014 From: nginx-forum at nginx.us (meir.h@convertmedia.com) Date: Wed, 10 Dec 2014 04:24:09 -0500 Subject: How can I have nginx return 204 when send_timeout is triggered? Message-ID: <439bf73e36a09eb72b4d41796fc2c04a.NginxMailingListEnglish@forum.nginx.org> Hello, How can I have nginx return 204 when send_timeout is triggered? Thanks so much, Meir Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255392,255392#msg-255392 From edigarov at qarea.com Wed Dec 10 10:53:31 2014 From: edigarov at qarea.com (Gregory Edigarov) Date: Wed, 10 Dec 2014 12:53:31 +0200 Subject: Creating CNAME In-Reply-To: <9eef7706d835a9ca9329cc1f9be52a6c.NginxMailingListEnglish@forum.nginx.org> References: <9eef7706d835a9ca9329cc1f9be52a6c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5488262B.4080402@qarea.com> Hi, What are your real hostnames/domains then?
On 12/10/2014 04:24 AM, krajeshrao wrote: > Hi Richard , > > Thanks for the reply ... > > This issue is been for 1 month ... But still can't resolve it > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255387,255391#msg-255391 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Dec 10 12:09:57 2014 From: nginx-forum at nginx.us (krajeshrao) Date: Wed, 10 Dec 2014 07:09:57 -0500 Subject: Creating CNAME In-Reply-To: <5488262B.4080402@qarea.com> References: <5488262B.4080402@qarea.com> Message-ID: <0243a9e8c1c3d6c9d51ac5d11d72da81.NginxMailingListEnglish@forum.nginx.org> Hi, brooklynwate.org and innoviaweb.com are the two domain names involved. I want to create a CNAME for www.brooklynwate.org =>CNAME=> events.innoviaweb.com; when I do this it does not work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255387,255396#msg-255396 From mdounin at mdounin.ru Wed Dec 10 13:10:06 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Dec 2014 16:10:06 +0300 Subject: How can I have nginx return 204 when send_timeout is triggered? In-Reply-To: <439bf73e36a09eb72b4d41796fc2c04a.NginxMailingListEnglish@forum.nginx.org> References: <439bf73e36a09eb72b4d41796fc2c04a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141210131006.GG45960@mdounin.ru> Hello! On Wed, Dec 10, 2014 at 04:24:09AM -0500, meir.h at convertmedia.com wrote: > Hello, > > How can I have nginx return 204 when send_timeout is triggered? After send_timeout it's already too late to return anything, as nginx has already tried to send the response to the client. If send_timeout happens, the connection is closed.
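For context, send_timeout limits the time between two successive write operations to the client, not the transmission of the whole response. A minimal sketch of its use (the value is illustrative):

```nginx
server {
    listen 80;
    # close the connection if the client accepts no data for 10s
    # between two successive writes (not a whole-response deadline)
    send_timeout 10s;
}
```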
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 10 13:29:28 2014 From: nginx-forum at nginx.us (meir.h@convertmedia.com) Date: Wed, 10 Dec 2014 08:29:28 -0500 Subject: How can I have nginx return 204 when send_timeout is triggered? In-Reply-To: <20141210131006.GG45960@mdounin.ru> References: <20141210131006.GG45960@mdounin.ru> Message-ID: Thanks so much Maxim, can I send the 204 after a predefined timeout and after a longer time the send_timeout? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255392,255399#msg-255399 From mdounin at mdounin.ru Wed Dec 10 13:47:03 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Dec 2014 16:47:03 +0300 Subject: How can I have nginx return 204 when send_timeout is triggered? In-Reply-To: References: <20141210131006.GG45960@mdounin.ru> Message-ID: <20141210134703.GJ45960@mdounin.ru> Hello! On Wed, Dec 10, 2014 at 08:29:28AM -0500, meir.h at convertmedia.com wrote: > Thanks so much Maxim, can I send the 204 after a predefined timeout and > after a longer time the send_timeout? Sorry, but I failed to understand the question. As previously explained, you can't send anything if send_timeout is triggered - it's already too late, as some response was already sent to the client. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 10 13:58:42 2014 From: nginx-forum at nginx.us (meir.h@convertmedia.com) Date: Wed, 10 Dec 2014 08:58:42 -0500 Subject: How can I have nginx return 204 when send_timeout is triggered? In-Reply-To: <20141210134703.GJ45960@mdounin.ru> References: <20141210134703.GJ45960@mdounin.ru> Message-ID: <1b521d72e8aee5783eaf69ac556e1d0e.NginxMailingListEnglish@forum.nginx.org> Hello Maxim, I am so sorry for not being clear. My question is, can I have nginx submit a 204 to the client after a predefined time?
Thanks so much, Meir Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255392,255401#msg-255401 From list_nginx at bluerosetech.com Wed Dec 10 14:07:46 2014 From: list_nginx at bluerosetech.com (Darren Pilgrim) Date: Wed, 10 Dec 2014 06:07:46 -0800 Subject: Creating CNAME In-Reply-To: <0243a9e8c1c3d6c9d51ac5d11d72da81.NginxMailingListEnglish@forum.nginx.org> References: <5488262B.4080402@qarea.com> <0243a9e8c1c3d6c9d51ac5d11d72da81.NginxMailingListEnglish@forum.nginx.org> Message-ID: <548853B2.3020509@bluerosetech.com> On 12/10/2014 4:09 AM, krajeshrao wrote: > brooklynwate.org and innoviaweb.com are the two domain name innovled . in > this i want to create CNAME for www.brooklynwate.org > =>CNAME=>events.innoviaweb.com. when i do this its not working . Dumb question, but are you removing the A record for www.brooklynwate.org when you add the CNAME? From mdounin at mdounin.ru Wed Dec 10 14:13:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 Dec 2014 17:13:32 +0300 Subject: How can I have nginx return 204 when send_timeout is triggered? In-Reply-To: <1b521d72e8aee5783eaf69ac556e1d0e.NginxMailingListEnglish@forum.nginx.org> References: <20141210134703.GJ45960@mdounin.ru> <1b521d72e8aee5783eaf69ac556e1d0e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141210141332.GK45960@mdounin.ru> Hello! On Wed, Dec 10, 2014 at 08:58:42AM -0500, meir.h at convertmedia.com wrote: > Hello Maxim, > > I am so sorry for not being clear. > > My question is, can I have the nginx submit a 204 to the client after a > predefined time? Yes, but that's not a trivial task when using vanilla nginx. For example, you can do this using the embedded perl module and its $r->sleep() command:

location / {
    perl 'sub {
        my $r = shift;
        $r->discard_request_body;

        sub next {
            my $r = shift;
            $r->status(204);
            $r->send_http_header;
        }

        $r->sleep(1000, \&next);
    }';
}

See http://nginx.org/r/perl for more details. This can also be done using 3rd party modules.
E.g., using the delay module as available from http://mdounin.ru/hg/ngx_http_delay_module/, this can be done as follows: location / { delay 2s; error_page 403 = /empty; deny all; } location = /empty { return 204; } -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 10 14:31:28 2014 From: nginx-forum at nginx.us (meir.h@convertmedia.com) Date: Wed, 10 Dec 2014 09:31:28 -0500 Subject: How can I have nginx return 204 when send_timeout is triggered? In-Reply-To: <20141210141332.GK45960@mdounin.ru> References: <20141210141332.GK45960@mdounin.ru> Message-ID: Got it, will try it out. Thanks so much. Meir Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255392,255404#msg-255404 From nginx-forum at nginx.us Wed Dec 10 16:45:26 2014 From: nginx-forum at nginx.us (new299) Date: Wed, 10 Dec 2014 11:45:26 -0500 Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused Message-ID: <4dc007b8006f6c8e256d57bb28e40911.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using nginx as a reverse proxy, but I can't get nginx to serve requests from its cache when the upstream server is refusing connections. I understood that "proxy_cache_use_stale error" should allow me to do this, but it doesn't seem to work for me. Have I perhaps misunderstood something? 
My complete nginx.conf looks like this:

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    upstream localsvr {
        server localhost:8080;
    }

    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:8m max_size=5000m inactive=300m;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        keepalive_timeout 65;
        types_hash_max_size 2048;
        proxy_buffering on;

        default_type application/octet-stream;

        gzip on;
        gzip_disable "msie6";
        listen 80;
        proxy_cache one;
        proxy_cache_min_uses 100;
        proxy_set_header Host $host;

        location / {
            proxy_pass http://localsvr;
            proxy_cache_use_stale error;
            proxy_next_upstream error;
            proxy_redirect off;
        }
    }
}

-----

I've previously also posted this on Stack Overflow, but didn't get any feedback there: http://stackoverflow.com/questions/27371285/nginx-proxy-cache-use-stale-not-working-when-connection-refused

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255408,255408#msg-255408

From mdounin at mdounin.ru Wed Dec 10 17:20:16 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 10 Dec 2014 20:20:16 +0300
Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused
In-Reply-To: <4dc007b8006f6c8e256d57bb28e40911.NginxMailingListEnglish@forum.nginx.org>
References: <4dc007b8006f6c8e256d57bb28e40911.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20141210172016.GN45960@mdounin.ru>

Hello!

On Wed, Dec 10, 2014 at 11:45:26AM -0500, new299 wrote:

> I'm using nginx as a reverse proxy, but I can't get nginx to serve requests
> from its cache when the upstream server is refusing connections. I
> understood that "proxy_cache_use_stale error" should allow me to do this,
> but it doesn't seem to work for me. Have I perhaps misunderstood something?

You may want to clarify what "doesn't seem to work for me" means. What makes you think that it doesn't work?
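One quick way to see what the cache is actually doing per request (a sketch added for illustration, not part of the original reply; `localsvr` is the upstream name from the poster's config) is to expose the $upstream_cache_status variable in a response header:

```nginx
location / {
    proxy_pass http://localsvr;
    proxy_cache_use_stale error;

    # Adds MISS / HIT / EXPIRED / STALE to every response, showing
    # whether the response was served from the cache at all.
    add_header X-Cache-Status $upstream_cache_status;
}
```

If this header never shows HIT, the responses are not being stored in the first place, and proxy_cache_use_stale has nothing stale to serve.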
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 11 02:18:53 2014 From: nginx-forum at nginx.us (krajeshrao) Date: Wed, 10 Dec 2014 21:18:53 -0500 Subject: Creating CNAME In-Reply-To: <548853B2.3020509@bluerosetech.com> References: <548853B2.3020509@bluerosetech.com> Message-ID: <7e90c485e577703084978d7f83a2df01.NginxMailingListEnglish@forum.nginx.org> yes as AWS does not add CNAME when i have a A record Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255387,255411#msg-255411 From nginx-forum at nginx.us Thu Dec 11 02:47:21 2014 From: nginx-forum at nginx.us (hyperion) Date: Wed, 10 Dec 2014 21:47:21 -0500 Subject: Content-Type header not proxied to downstream hosts Message-ID: <36bc363163a0c83dea2e017ed7041770.NginxMailingListEnglish@forum.nginx.org> Hi, I'm new to nginx so am probably making a simple mistake but, for the life of me, I can't see what. I want to proxy requests with all headers that the request had to a downstream server if it matches a regex. I also have an issue with the regex, but let's leave that for another post. The nginx conf is very simple. I know the request sent to nginx contains the Content-Type header with value application/json from the access log: Format: $remote_addr - $remote_user [$time_local] "$request" $sent_http_content_type ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"' Output: 127.0.0.1 - - [10/Dec/2014:18:34:49 -0800] "GET /event/3F0046E6B50F75A8/info HTTP/1.1" application/json 200 532 "-" "Apache-HttpClient/4.3.5 (java 1.5)" "-" However, the downstream host doesn't receive it. All other headers, including custom headers, are sent. My nginx config is basic, see this gist: https://gist.github.com/mikquinlan/e68848bb4930725a6fdd I have a workaround by manually adding the header, but I shouldn't have to do it: proxy_set_header Content-Type application/json; I get the same result using nginx 1.4.4 and 1.6.2. nginx -V output is below. 
Any ideas what I'm doing wrong? All help appreciated. Mik Output of nginx -V ============== nginx version: nginx/1.6.2 built by clang 6.0 (clang-600.0.51) (based on LLVM 3.5svn) TLS SNI support enabled configure arguments: --prefix=/usr/local/Cellar/nginx/1.6.2 --with-http_ssl_module --with-pcre --with-ipv6 --sbin-path=/usr/local/Cellar/nginx/1.6.2/bin/nginx --with-cc-opt='-I/usr/local/Cellar/pcre/8.35/include -I/usr/local/Cellar/openssl/1.0.1i/include' --with-ld-opt='-L/usr/local/Cellar/pcre/8.35/lib -L/usr/local/Cellar/openssl/1.0.1i/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --pid-path=/usr/local/var/run/nginx.pid --lock-path=/usr/local/var/run/nginx.lock --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp --http-proxy-temp-path=/usr/local/var/run/nginx/proxy_temp --http-fastcgi-temp-path=/usr/local/var/run/nginx/fastcgi_temp --http-uwsgi-temp-path=/usr/local/var/run/nginx/uwsgi_temp --http-scgi-temp-path=/usr/local/var/run/nginx/scgi_temp --http-log-path=/usr/local/var/log/nginx/access.log --error-log-path=/usr/local/var/log/nginx/error.log --with-http_gzip_static_module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255412,255412#msg-255412 From nginx-forum at nginx.us Thu Dec 11 02:51:11 2014 From: nginx-forum at nginx.us (hyperion) Date: Wed, 10 Dec 2014 21:51:11 -0500 Subject: Regular expression length syntax not working? Message-ID: <69efbb778163ac6ecde7ddf9ebacea01.NginxMailingListEnglish@forum.nginx.org> Hi This is my second post. :-) I have a regular expression in my location directive to match on a URL. When I use: http { ... location ~ ^/event/[0-9,A-Z]{16}/info$ { proxy_pass http://localhost:7777; } } } I don't get a match. I have to manually repeat the [0-9,A-Z] sixteen times to get a match. Escaping the {} doesn't work either, i.e. /[0-9,A-Z]\{16\}/info How can I use the {} syntax correctly? nginx-V output, below. nginx.conf used here: https://gist.github.com/mikquinlan/e68848bb4930725a6fdd All help appreciated. 
Thank you. Mik Output of nginx -V ------------------------ nginx version: nginx/1.6.2 built by clang 6.0 (clang-600.0.51) (based on LLVM 3.5svn) TLS SNI support enabled configure arguments: --prefix=/usr/local/Cellar/nginx/1.6.2 --with-http_ssl_module --with-pcre --with-ipv6 --sbin-path=/usr/local/Cellar/nginx/1.6.2/bin/nginx --with-cc-opt='-I/usr/local/Cellar/pcre/8.35/include -I/usr/local/Cellar/openssl/1.0.1i/include' --with-ld-opt='-L/usr/local/Cellar/pcre/8.35/lib -L/usr/local/Cellar/openssl/1.0.1i/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --pid-path=/usr/local/var/run/nginx.pid --lock-path=/usr/local/var/run/nginx.lock --http-client-body-temp-path=/usr/local/var/run/nginx/client_body_temp --http-proxy-temp-path=/usr/local/var/run/nginx/proxy_temp --http-fastcgi-temp-path=/usr/local/var/run/nginx/fastcgi_temp --http-uwsgi-temp-path=/usr/local/var/run/nginx/uwsgi_temp --http-scgi-temp-path=/usr/local/var/run/nginx/scgi_temp --http-log-path=/usr/local/var/log/nginx/access.log --error-log-path=/usr/local/var/log/nginx/error.log --with-http_gzip_static_module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255413,255413#msg-255413 From nginx-forum at nginx.us Thu Dec 11 03:00:25 2014 From: nginx-forum at nginx.us (new299) Date: Wed, 10 Dec 2014 22:00:25 -0500 Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused In-Reply-To: <20141210172016.GN45960@mdounin.ru> References: <20141210172016.GN45960@mdounin.ru> Message-ID: <3280a3c67bd0e1fa317d000cb391cd68.NginxMailingListEnglish@forum.nginx.org> When the upstream goes away nginx gives the error "502 Bad Gateway nginx/1.4.6 (Ubuntu)". The log contains: " [error] 2624#0: *48941 connect() failed (111: Connection refused) while connecting to upstream," Rather than serving it from cache as I would expect. It should be cached as the page was previous returned successfully. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255408,255414#msg-255414 From nginx-forum at nginx.us Thu Dec 11 05:33:24 2014 From: nginx-forum at nginx.us (sudharshanr) Date: Thu, 11 Dec 2014 00:33:24 -0500 Subject: Using the access_log if directive in 1.6.x Message-ID: <989712beda0441cb020901f7fad73e70.NginxMailingListEnglish@forum.nginx.org> Hi, I'm using nginx 1.6.2 on Amazon ec2 linux server. The problem I'm having is that all my 404 errors are going to my access.log. I want them to be redirected to error.log instead. I saw on other forums that with nginx 1.7+, I can use the if directive of access_log to do something like: map $status $errorable { ~([^23][0-9][0-9]) 1; default 0; } access_log /media/ephemeral0/log/nginx/error.log combined if=$errorable; However, I'm not able to do the same with 1.6.2. I don't think I can update to 1.7+ as it is not available for ec2 linux servers yet. Is there an alternative for doing the same with 1.6.2? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255415,255415#msg-255415 From jan.reges at siteone.cz Thu Dec 11 06:39:03 2014 From: jan.reges at siteone.cz (=?iso-8859-2?Q?Jan_Rege=B9?=) Date: Thu, 11 Dec 2014 06:39:03 +0000 Subject: How to setup Nginx as REALLY static-cache reverse proxy Message-ID: Nobody to help? :-( Regards, Jan Reges -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Dec 11 08:12:25 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Dec 2014 08:12:25 +0000 Subject: Regular expression length syntax not working? In-Reply-To: <69efbb778163ac6ecde7ddf9ebacea01.NginxMailingListEnglish@forum.nginx.org> References: <69efbb778163ac6ecde7ddf9ebacea01.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141211081225.GG15670@daoine.org> On Wed, Dec 10, 2014 at 09:51:11PM -0500, hyperion wrote: Hi there, > location ~ ^/event/[0-9,A-Z]{16}/info$ { > proxy_pass http://localhost:7777; > } > I don't get a match. 
[root at monolith1 nginx]# sbin/nginx -t nginx: [emerg] unknown directive "16}/info$" in /usr/local/nginx/conf/nginx.conf:32 nginx: configuration file /usr/local/nginx/conf/nginx.conf test failed I suspect that you also don't get the config file used at all. > I have to manually repeat the [0-9,A-Z] sixteen times to get a match. > Escaping the {} doesn't work either, i.e. /[0-9,A-Z]\{16\}/info > > How can I use the {} syntax correctly? This page seems to describe the nginx regular expression syntax http://nginx.org/en/docs/http/server_names.html#regex_names f -- Francis Daly francis at daoine.org From ru at nginx.com Thu Dec 11 08:13:21 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Thu, 11 Dec 2014 11:13:21 +0300 Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused In-Reply-To: <4dc007b8006f6c8e256d57bb28e40911.NginxMailingListEnglish@forum.nginx.org> References: <4dc007b8006f6c8e256d57bb28e40911.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141211081321.GB88078@lo0.su> On Wed, Dec 10, 2014 at 11:45:26AM -0500, new299 wrote: > Hi, > > I'm using nginx as a reverse proxy, but I can't get nginx to serve requests > from its cache when the upstream server is refusing connections. I > understood that "proxy_cache_use_stale error" should allow me to do this, > but it doesn't seem to work for me. Have I perhaps misunderstood something? 
> > My complete nginx.conf looks like this: > > user www-data; > worker_processes 4; > pid /run/nginx.pid; > > events { > worker_connections 768; > # multi_accept on; > } > > http { > upstream localsvr { > server localhost:8080; > } > > proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:8m > max_size=5000m inactive=300m; > access_log /var/log/nginx/access.log; > error_log /var/log/nginx/error.log; > server { > keepalive_timeout 65; > types_hash_max_size 2048; > proxy_buffering on; > > default_type application/octet-stream; > > gzip on; > gzip_disable "msie6"; > listen 80; > proxy_cache one; > proxy_cache_min_uses 100; > proxy_set_header Host $host; > location / { > proxy_pass http://localsvr; > proxy_cache_use_stale error; > proxy_next_upstream error; > proxy_redirect off; > } > } > } Your config doesn't have any http://nginx.org/r/proxy_cache_valid directives. If this is intentional, then your responses should carry caching information themselves (X-Accel-Expires, Expires, Cache-Control, Set-Cookie, Vary, see the link above for details) and otherwise qualify to be cached. Also, "proxy_cache_min_uses 100" in your config instructs to cache a response only after it was requested 100 times. Please first make sure your responses actually get cached by looking into /data/nginx/cache. From francis at daoine.org Thu Dec 11 08:20:29 2014 From: francis at daoine.org (Francis Daly) Date: Thu, 11 Dec 2014 08:20:29 +0000 Subject: Content-Type header not proxied to downstream hosts In-Reply-To: <36bc363163a0c83dea2e017ed7041770.NginxMailingListEnglish@forum.nginx.org> References: <36bc363163a0c83dea2e017ed7041770.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141211082029.GH15670@daoine.org> On Wed, Dec 10, 2014 at 09:47:21PM -0500, hyperion wrote: Hi there, > I want to proxy requests with all headers that the request had to a > downstream server if it matches a regex. 
I suspect there may be a terminology confusion here; but from this mail, I am not sure what problem you are reporting. The client makes a request of nginx; nginx makes a request of upstream; upstream sends a response to nginx; nginx sends a response to the client. Two requests, two responses. Which of those four sets of headers are you concerned about? > The nginx conf is very simple. I know the request sent to nginx contains > the Content-Type header with value application/json from the access log: I don't see that header value in this access_log. http://nginx.org/en/docs/http/ngx_http_core_module.html#variables Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu Dec 11 08:54:25 2014 From: nginx-forum at nginx.us (Cord Beermann) Date: Thu, 11 Dec 2014 03:54:25 -0500 Subject: sending 404 responses for epty objects. Message-ID: <7fd464848269ecf32db5f1c0e5117c10.NginxMailingListEnglish@forum.nginx.org> Hello, Due to issues with a backend beyond my influence i need to fix this with Nginx. Root-Cause: A CMS generates empty files on a filesystem which will be later filled with content. However: those files are there for some time with 0 bytes and will be served with 200 through a chain of a caching Nginx and a caching CDN. User --> CDN --> Caching Nginx (SlowFS) --> Serving Nginx --> Filesystem. The Backend-Filesystem is served by a Nginx. Solution could be to serve a 404 for all empty files. I tried with $sent_http_content_length, but it seems to be empty in the location where i would need it. Is it possible to use some kind of 'test -s' on the file to decide when to send a 404? I also tried to tackle it in the caching SlowFS-Nginx, by using $upstream_http_content_length in if or map-statements [1]. I can see $upstream_http_content_length set in an X-Debug-Header i added, but can't get it to work to use it to act on it. 
if ($upstream_http_content_length = 0) { return 404; } I found the discussion about 'using $upstream* variables inside map directive'[2] but even if i would get this to work it wouldn't be enough as i need to signal the CDN in front (with some headers) that it also should not cache the empty response. Best would be to generate a 404 if the upstream-object is empty. So maybe someone here has an idea how to tackle this. Cord [1] like in http://syshero.org/post/49594172838/avoid-caching-0-byte-files-on-nginx [2] http://forum.nginx.org/read.php?2,249880,249899#msg-249899 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255421,255421#msg-255421 From nginx-forum at nginx.us Thu Dec 11 09:40:50 2014 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 11 Dec 2014 04:40:50 -0500 Subject: sending 404 responses for epty objects. In-Reply-To: <7fd464848269ecf32db5f1c0e5117c10.NginxMailingListEnglish@forum.nginx.org> References: <7fd464848269ecf32db5f1c0e5117c10.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8e12b913513c7cae534f8c9b7d03ff94.NginxMailingListEnglish@forum.nginx.org> On the 'Serving Nginx' I'd use Lua to test for a zero byte file and return the 404 there. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255421,255422#msg-255422 From nginx-forum at nginx.us Thu Dec 11 10:19:44 2014 From: nginx-forum at nginx.us (meir.h@convertmedia.com) Date: Thu, 11 Dec 2014 05:19:44 -0500 Subject: nginx to send 302 after a timeout Message-ID: Hello everyone, Can I have an Nginx initiate a 302, redirect, after a specific time? We use Nginx for load balancing and we would like to redirect requests that take more than 50ms (for example). Can it be done? Thanks so much, Meir Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255424,255424#msg-255424 From nginx-forum at nginx.us Thu Dec 11 11:32:52 2014 From: nginx-forum at nginx.us (anoopov) Date: Thu, 11 Dec 2014 06:32:52 -0500 Subject: nginx cache expire settings issue.Can anyone help? 
Message-ID: Hi I am new to Nginx. I need to add expire -1 for my JSON files in the below urls https://siteaddress/foldername /default.htm#/dashboard/ui.json location /foldername { index default.html default.htm; proxy_pass http://siteaddress_eapp_entry; } I have tried below syntax but still JSON files are caching location \foldername \.(json)$ { expires -1; } Please help anyone to solve my issue,Thanks in advance. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255425,255425#msg-255425 From edigarov at qarea.com Thu Dec 11 11:59:52 2014 From: edigarov at qarea.com (Gregory Edigarov) Date: Thu, 11 Dec 2014 13:59:52 +0200 Subject: nginx cache expire settings issue.Can anyone help? In-Reply-To: References: Message-ID: <54898738.7090307@qarea.com> you do not caching anything with proxy_pass alone. you should use proxy_cache in conjunction. On 12/11/2014 01:32 PM, anoopov wrote: > Hi I am new to Nginx. I need to add expire -1 for my JSON files in the below > urls > > https://siteaddress/foldername /default.htm#/dashboard/ui.json > > > location /foldername { > index default.html default.htm; > proxy_pass http://siteaddress_eapp_entry; > } > > > I have tried below syntax but still JSON files are caching > > location \foldername \.(json)$ { > expires -1; > } > > Please help anyone to solve my issue,Thanks in advance. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255425,255425#msg-255425 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From vbart at nginx.com Thu Dec 11 13:17:14 2014 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Thu, 11 Dec 2014 16:17:14 +0300 Subject: Using the access_log if directive in 1.6.x In-Reply-To: <989712beda0441cb020901f7fad73e70.NginxMailingListEnglish@forum.nginx.org> References: <989712beda0441cb020901f7fad73e70.NginxMailingListEnglish@forum.nginx.org> Message-ID: <10280385.LorieJsGJx@vbart-workstation> On Thursday 11 December 2014 00:33:24 sudharshanr wrote: > Hi, > > I'm using nginx 1.6.2 on Amazon ec2 linux server. The problem I'm having is > that all my 404 errors are going to my access.log. I want them to be > redirected to error.log instead. > > I saw on other forums that with nginx 1.7+, I can use the if directive of > access_log to do something like: > > map $status $errorable { > ~([^23][0-9][0-9]) 1; > default 0; > } > > access_log /media/ephemeral0/log/nginx/error.log combined if=$errorable; > > However, I'm not able to do the same with 1.6.2. I don't think I can update > to 1.7+ as it is not available for ec2 linux servers yet. Is there an > alternative for doing the same with 1.6.2? > > Thanks. > You can specify separate location for the 404 error page: error_page 404 /404.html; location /404.html { access_log /media/ephemeral0/log/nginx/error.log combined; } Or use our official AMIs: https://aws.amazon.com/marketplace/pp/B00A04GAG4 wbr, Valentin V. Bartenev From mdounin at mdounin.ru Thu Dec 11 13:57:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Dec 2014 16:57:37 +0300 Subject: nginx cache expire settings issue.Can anyone help? In-Reply-To: References: Message-ID: <20141211135737.GS45960@mdounin.ru> Hello! On Thu, Dec 11, 2014 at 06:32:52AM -0500, anoopov wrote: > Hi I am new to Nginx. 
I need to add expire -1 for my JSON files in the below > urls > > https://siteaddress/foldername /default.htm#/dashboard/ui.json > > > location /foldername { > index default.html default.htm; > proxy_pass http://siteaddress_eapp_entry; > } In the URL provided "#/dashboard/ui.json" is a fragment, and will not be sent to the server. > I have tried below syntax but still JSON files are caching > > location \foldername \.(json)$ { > expires -1; > } This is syntactically incorrect and will cause syntax error due to space in it. If the "#" above is just a typo, then you can use something like this to disable caching of *.json files within "/foldername": location /foldername { proxy_pass ... location ~ \.json$ { expires epoch; proxy_pass ... } } Note that: - the "~" is important as it marks regex location, see http://nginx.org/r/location for details; - proxy_pass have to be repeated in the nested location. More about locations can be found in the documentation, see http://nginx.org/r/location. -- Maxim Dounin http://nginx.org/ From kibergus at gmail.com Thu Dec 11 13:58:28 2014 From: kibergus at gmail.com (KiberGus) Date: Thu, 11 Dec 2014 17:58:28 +0400 Subject: Nginx lua module + SPDY = no request body Message-ID: Hello. I'm experiencing problems with combination of nginx lua module and SPDI. I use "access_by_lua_file" directive to validate requests. In the script request checksumm is validated, so I need to get body of the POST requests. "ngx.var.request_body" variable is used for this purpose. This code works when client uses HTTP protocol, but when we move to SPDY request body is always nil. What I've tried: I do call ngx.req.read_body() from lua script. Beside taht I've added "lua_need_request_body on" directive to the nginx config (at location and at server levels). I've tried using "ngx.req.get_body_data()" instead of "ngx.var.request_body". I've checked that "ngx.req.get_body_file" returns nil. So request is not written to file. 
I've used wireshark to check, that request body is not empty. If "access_by_lua_file" directive is removed, then backend receives request body. So the problem occurs only within lua. We've tested nginx=1.4.0 and nginx=1.6.2. So in case of 1.4 we use SPDY v2 and in case SPDY v3.1. Here is a test config: server { listen [::]:80; listen [::]:6121 spdy; spdy_headers_comp 9; client_body_buffer_size 100k; client_max_body_size 100k; lua_need_request_body on; server_name *; location /test { access_by_lua ' ngx.req.read_body() if not ngx.req.get_body_data() then return ngx.exit(403) end '; content_by_lua ' ngx.header["Content-Type"] = "text/plain" ngx.say("hello world") '; } } If you request it without body you get 403. Otherwice you get 200: $ curl --data "TEST" http://localhost/test hello world $ curl http://localhost/test 403 Forbidden

403 Forbidden
nginx/1.4.0
But if SPDY protocol is used, than it always returns 403. Any help is appreciated. Thank you. From mdounin at mdounin.ru Thu Dec 11 14:30:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Dec 2014 17:30:11 +0300 Subject: sending 404 responses for epty objects. In-Reply-To: <7fd464848269ecf32db5f1c0e5117c10.NginxMailingListEnglish@forum.nginx.org> References: <7fd464848269ecf32db5f1c0e5117c10.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141211143011.GU45960@mdounin.ru> Hello! On Thu, Dec 11, 2014 at 03:54:25AM -0500, Cord Beermann wrote: > Hello, > > Due to issues with a backend beyond my influence i need to fix this with > Nginx. > > Root-Cause: A CMS generates empty files on a filesystem which will be later > filled with content. However: those files are there for some time with 0 > bytes > and will be served with 200 through a chain of a caching Nginx and a caching > CDN. > > User --> CDN --> Caching Nginx (SlowFS) --> Serving Nginx --> Filesystem. > > The Backend-Filesystem is served by a Nginx. Solution could be to serve a > 404 > for all empty files. Please note that this is _not_ a solution, as at some point files will be partially filled with content, and testing that the size isn't 0 won't help. Rather, it's a workaround which hides the problem in some cases. The only _solution_ I see is to fix backend to update files atomically - e.g., write to a temporaty file, and then rename() it to a real name. > I tried with $sent_http_content_length, but it seems to > be empty in the location where i would need it. Is it possible to use some > kind of 'test -s' on the file to decide when to send a 404? This is something possible with embedded perl (and lua, as already suggested). > I also tried to tackle it in the caching SlowFS-Nginx, by using > $upstream_http_content_length in if or map-statements [1]. I can see > $upstream_http_content_length set in an X-Debug-Header i added, but can't > get > it to work to use it to act on it. 
> > if ($upstream_http_content_length = 0) { > return 404; > } This is not going to work as "if" will be executed before the request is sent to the upstream server. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Dec 11 14:41:47 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Dec 2014 17:41:47 +0300 Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused In-Reply-To: <3280a3c67bd0e1fa317d000cb391cd68.NginxMailingListEnglish@forum.nginx.org> References: <20141210172016.GN45960@mdounin.ru> <3280a3c67bd0e1fa317d000cb391cd68.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141211144147.GV45960@mdounin.ru> Hello! On Wed, Dec 10, 2014 at 10:00:25PM -0500, new299 wrote: > When the upstream goes away nginx gives the error "502 Bad Gateway > nginx/1.4.6 (Ubuntu)". The log contains: > > " [error] 2624#0: *48941 connect() failed (111: Connection refused) while > connecting to upstream," > > Rather than serving it from cache as I would expect. It should be cached as > the page was previous returned successfully. The problem is in the "it should be cached" statement. There are lots of cases when pages will not be cached even if returned - for example, because caching is explicitly disabled by response headers, or just not enabled and/or disabled with proxy_cache_min_uses, see Ruslan's answer. So in your case the response is likely _not_ cached, and that's why "proxy_cache_use_stale" doesn't work for you. To be sure you can follow Ruslan's suggestion to look into the cache directory, or use the $upstream_cache_status variable, see http://nginx.org/r/$upstream_cache_status. 
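As a sketch of that last suggestion (illustrative only; the log format name and file path are made up), $upstream_cache_status can also be recorded in the access log:

```nginx
# Record the cache verdict for every request alongside the status code.
log_format cache_log '$remote_addr [$time_local] "$request" '
                     '$status cache=$upstream_cache_status';

access_log /var/log/nginx/cache.log cache_log;
```

A steady stream of MISS entries for a URL that was expected to be cached points at proxy_cache_min_uses or missing validity information rather than at proxy_cache_use_stale.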
-- Maxim Dounin http://nginx.org/

From nginx-forum at nginx.us Thu Dec 11 15:14:20 2014
From: nginx-forum at nginx.us (new299)
Date: Thu, 11 Dec 2014 10:14:20 -0500
Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused
In-Reply-To: <20141211081321.GB88078@lo0.su>
References: <20141211081321.GB88078@lo0.su>
Message-ID:

Thanks, I've tried this. My amended configuration is below. However, I'm still getting the same error when the upstream goes away. The cache directory is now being populated correctly however. Any ideas?

----------

user www-data;
worker_processes 4;
pid /run/nginx.pid;

events {
    worker_connections 768;
    # multi_accept on;
}

http {
    upstream localsvr {
        server localhost:8080;
    }

    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=one:8m max_size=5000m inactive=300m;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    server {
        keepalive_timeout 65;
        types_hash_max_size 2048;
        proxy_buffering on;

        default_type application/octet-stream;

        gzip on;
        gzip_disable "msie6";
        listen 80;
        proxy_cache one;
        proxy_cache_min_uses 1;
        proxy_cache_valid 200 302 10m;
        proxy_cache_valid any 10m;
        proxy_set_header Host $host;

        location / {
            proxy_pass http://localsvr;
            proxy_cache_use_stale error;
            proxy_next_upstream error;
            proxy_redirect off;
        }
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255408,255440#msg-255440

From nginx-forum at nginx.us Thu Dec 11 15:25:39 2014
From: nginx-forum at nginx.us (new299)
Date: Thu, 11 Dec 2014 10:25:39 -0500
Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused
In-Reply-To:
References: <20141211081321.GB88078@lo0.su>
Message-ID:

After adding the following:

proxy_cache_valid 200 302 301 10m;

It appears to be working. It's unclear to me why:

proxy_cache_valid any 10m;

wasn't working, for me.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255408,255441#msg-255441 From nginx-forum at nginx.us Thu Dec 11 17:09:21 2014 From: nginx-forum at nginx.us (hyperion) Date: Thu, 11 Dec 2014 12:09:21 -0500 Subject: Regular expression length syntax not working? In-Reply-To: <20141211081225.GG15670@daoine.org> References: <20141211081225.GG15670@daoine.org> Message-ID: <204ea4238fc8fe815659a6ec2dc26b7b.NginxMailingListEnglish@forum.nginx.org> HI Francis The link to the doc was exactly what I was looking for. The regex works as expected now. Thanks Mik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255413,255442#msg-255442 From nginx-forum at nginx.us Thu Dec 11 17:13:07 2014 From: nginx-forum at nginx.us (_vigneshh) Date: Thu, 11 Dec 2014 12:13:07 -0500 Subject: SPDY Server Push Support Message-ID: <854cc62584e474a1663e52bf898f37c2.NginxMailingListEnglish@forum.nginx.org> Any ETA on SPDY Server Push support on nginx?. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255443,255443#msg-255443 From nginx-forum at nginx.us Thu Dec 11 17:57:54 2014 From: nginx-forum at nginx.us (hyperion) Date: Thu, 11 Dec 2014 12:57:54 -0500 Subject: Content-Type header not proxied to downstream hosts In-Reply-To: <20141211082029.GH15670@daoine.org> References: <20141211082029.GH15670@daoine.org> Message-ID: <23c602fa95c46499c424745870171d2c.NginxMailingListEnglish@forum.nginx.org> Hi Francis Thanks again for your response. Using the info you provided I was able to debug and fix the issue. It's actually an issue in the client library I'm using to make the request to Nginx. 
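For future reference, the request/response distinction Francis raised can be made visible directly in the log (a hedged sketch; the format name and log path are invented, the variables are standard nginx ones):

```nginx
# $http_content_type      - Content-Type of the incoming client request
# $sent_http_content_type - Content-Type of the response nginx sends back
log_format hdr_debug '"$request" req_ct="$http_content_type" '
                     'resp_ct="$sent_http_content_type"';

access_log /var/log/nginx/hdr_debug.log hdr_debug;
```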
Mik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255412,255444#msg-255444 From nginx-forum at nginx.us Thu Dec 11 18:25:45 2014 From: nginx-forum at nginx.us (sudharshanr) Date: Thu, 11 Dec 2014 13:25:45 -0500 Subject: Using the access_log if directive in 1.6.x In-Reply-To: <10280385.LorieJsGJx@vbart-workstation> References: <10280385.LorieJsGJx@vbart-workstation> Message-ID: Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Thursday 11 December 2014 00:33:24 sudharshanr wrote: > > Hi, > > > > I'm using nginx 1.6.2 on Amazon ec2 linux server. The problem I'm > having is > > that all my 404 errors are going to my access.log. I want them to be > > redirected to error.log instead. > > > > I saw on other forums that with nginx 1.7+, I can use the if > directive of > > access_log to do something like: > > > > map $status $errorable { > > ~([^23][0-9][0-9]) 1; > > default 0; > > } > > > > access_log /media/ephemeral0/log/nginx/error.log combined > if=$errorable; > > > > However, I'm not able to do the same with 1.6.2. I don't think I can > update > > to 1.7+ as it is not available for ec2 linux servers yet. Is there > an > > alternative for doing the same with 1.6.2? > > > > Thanks. > > > > You can specify separate location for the 404 error page: > > error_page 404 /404.html; > > location /404.html { > access_log /media/ephemeral0/log/nginx/error.log combined; > } > > Or use our official AMIs: > https://aws.amazon.com/marketplace/pp/B00A04GAG4 > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hello Valentin, Thank you for your reply. Just one question. It is not just the 404 errors that I want to redirect. I want to redirect all 4xx and 5xx errors. I have updated my config file as below, but it doesn't seem to work. The 404 errors still go to access.log server { ... ... 
root /wdrive/www; access_log /mnt/log/nginx/access.log ; error_log /mnt/log/nginx/error.log; error_page 400 401 402 403 404 /error4x.html; error_page 500 501 502 503 /error5x.html; location /error4x.html{ access_log /mnt/log/nginx/error.log; } location /error5x.html{ access_log /mnt/log/nginx/error.log; } location / { .... } } I have created the error4x.html page in /wdrive/www. Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255415,255445#msg-255445 From vbart at nginx.com Thu Dec 11 18:39:41 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 11 Dec 2014 21:39:41 +0300 Subject: Using the access_log if directive in 1.6.x In-Reply-To: References: <10280385.LorieJsGJx@vbart-workstation> Message-ID: <2208611.Y25K9dU5Hf@vbart-workstation> On Thursday 11 December 2014 13:25:45 sudharshanr wrote: [..] > > Hello Valentin, > > Thank you for your reply. Just one question. It is not just the 404 errors > that I want to redirect. I want to redirect all 4xx and 5xx errors. I have > updated my config file as below, but it doesn't seem to work. The 404 errors > still go to access.log > > server { > ... > ... > root /wdrive/www; > > access_log /mnt/log/nginx/access.log ; > error_log /mnt/log/nginx/error.log; > > error_page 400 401 402 403 404 /error4x.html; > error_page 500 501 502 503 /error5x.html; > > location /error4x.html{ > access_log /mnt/log/nginx/error.log; > } > > location /error5x.html{ > access_log /mnt/log/nginx/error.log; > } > > location / { > .... > } > } > > I have created error4x.html page in /wdrive/www. > > Thanks. > By default, the error_page directive handles only the errors generated by nginx itself. If you want to intercept errors from upstream as well, then you need to turn on proxy_intercept_errors. See the docs: http://nginx.org/r/proxy_intercept_errors It's always a good idea to provide your full configuration with a question. wbr, Valentin V.
Bartenev From nginx-forum at nginx.us Thu Dec 11 19:03:35 2014 From: nginx-forum at nginx.us (sandeepkolla99) Date: Thu, 11 Dec 2014 14:03:35 -0500 Subject: Validating client certificate against CRL Message-ID: <0c6e7bbabe50cfe5dbb92ceb22b8de1f.NginxMailingListEnglish@forum.nginx.org> Hi, My Nginx is setup for Mutual SSL and it works well for the below nginx configuration. Hierarchy of certificates is:

    RootCA
      |
      V
    IntermediateCA
      |
      V
    ClientCert    ServerCert

listen 80; listen 443 ssl; server_name localhost; ssl_certificate serverCert.pem; ssl_certificate_key serverKey.key; ssl_client_certificate RootCA.pem; ssl_verify_client on; ssl_verify_depth 2; But If I add 'ssl_crl RootCACRL.pem' or 'ssl_crl IntermediateCRL.pem' to above configuration, I see the below error. By the way, RootCACRL.pem and IntermediateCRL.pem files doesn't have any revoked certificates. 400 Bad Request The SSL certificate error nginx/1.6.2 Can you please help me in this. Regards, Sandeep Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255448,255448#msg-255448 From mdounin at mdounin.ru Thu Dec 11 19:33:18 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 Dec 2014 22:33:18 +0300 Subject: Validating client certificate against CRL In-Reply-To: <0c6e7bbabe50cfe5dbb92ceb22b8de1f.NginxMailingListEnglish@forum.nginx.org> References: <0c6e7bbabe50cfe5dbb92ceb22b8de1f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141211193318.GY45960@mdounin.ru> Hello! On Thu, Dec 11, 2014 at 02:03:35PM -0500, sandeepkolla99 wrote: > Hi, > My Nginx is setup for Mutual SSL and it works well for the below nginx > configuration.
> Hierarchy of certificates is:
>
>     RootCA
>       |
>       V
>     IntermediateCA
>       |
>       V
>     ClientCert    ServerCert
>
> listen 80; > listen 443 ssl; > server_name localhost; > > ssl_certificate serverCert.pem; > ssl_certificate_key serverKey.key; > ssl_client_certificate RootCA.pem; > ssl_verify_client on; > ssl_verify_depth 2; > > But If I add 'ssl_crl RootCACRL.pem' or 'ssl_crl IntermediateCRL.pem' to > above configuration, I see the below error. By the way, RootCACRL.pem and > IntermediateCRL.pem files doesn't have any revoked certificates. > > 400 Bad Request > > The SSL certificate error > > nginx/1.6.2 The "ssl_crl" should contain CRLs for all certificates in the chain, that is, both RootCA and IntermediateCA in your case. There should be a message in the error log (at "info" level) explaining what's wrong. Just combining IntermediateCRL.pem and RootCACRL.pem into a single file and using it in the "ssl_crl" directive should fix this. -- Maxim Dounin http://nginx.org/ From rpaprocki at fearnothingproductions.net Fri Dec 12 05:42:10 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Thu, 11 Dec 2014 21:42:10 -0800 Subject: [nginx] hello world module build trouble Message-ID: <548A8032.5040404@fearnothingproductions.net> Hello, I am trying to build a simple nginx module to learn more about nginx's internals. I have copied several hello world examples into my own module: http://pastebin.com/esHFtaMw And the config file: http://pastebin.com/t1fpEPe6 I've downloaded nginx 1.7.8 onto a vanilla Ubuntu 14.04 install. I run configure with the following: root at dev:/usr/local/src/nginx-1.7.8# ./configure --with-debug --add-module=/usr/local/src/ngx_hello_dolly I see that configure adds it in properly: [... snip ...]
adding module in /usr/local/src/ngx_hello_dolly + ngx_http_hello_dolly was configured [... snip ...] But when I run make I receive the following error: objs/addon/ngx_hello_dolly/ngx_http_hello_dolly.o \ objs/ngx_modules.o \ -lpthread -lcrypt -lpcre -lcrypto -lcrypto -lz objs/ngx_modules.o:(.data+0x110): undefined reference to `ngx_http_hello_dolly' collect2: error: ld returned 1 exit status make[1]: *** [objs/nginx] Error 1 make[1]: Leaving directory `/usr/local/src/nginx-1.7.8' make: *** [build] Error 2 Full make output is at: http://pastebin.com/DD42e4N9 Can anyone point me in the direction of what I'm doing wrong? I don't understand why the build process errors out with undefined reference to `ngx_http_hello_dolly'. Have I mistyped something in my module? I cannot see any discrepancy between this and something such as (http://blog.zhuzhaoyuan.com/2009/08/creating-a-hello-world-nginx-module/). Much appreciated if anyone can point me in the right direction! From nginx-forum at nginx.us Fri Dec 12 07:12:39 2014 From: nginx-forum at nginx.us (cubicdaiya) Date: Fri, 12 Dec 2014 02:12:39 -0500 Subject: [nginx] hello world module build trouble In-Reply-To: <548A8032.5040404@fearnothingproductions.net> References: <548A8032.5040404@fearnothingproductions.net> Message-ID: <771c7b0414e26b8ab0c1fd6f18a92aff.NginxMailingListEnglish@forum.nginx.org> Hello. Why don't you apply a difference below?
--- config.orig 2014-12-12 16:10:06.000000000 +0900 +++ config 2014-12-12 16:06:19.000000000 +0900 @@ -1,3 +1,3 @@ -ngx_addon_name=ngx_http_hello_dolly -HTTP_MODULES="$HTTP_MODULES ngx_http_hello_dolly" +ngx_addon_name=ngx_http_hello_dolly_module +HTTP_MODULES="$HTTP_MODULES ngx_http_hello_dolly_module" NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_hello_dolly.c" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255455,255456#msg-255456 From rpaprocki at fearnothingproductions.net Fri Dec 12 18:51:38 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 12 Dec 2014 10:51:38 -0800 Subject: [nginx] hello world module build trouble In-Reply-To: <771c7b0414e26b8ab0c1fd6f18a92aff.NginxMailingListEnglish@forum.nginx.org> References: <548A8032.5040404@fearnothingproductions.net> <771c7b0414e26b8ab0c1fd6f18a92aff.NginxMailingListEnglish@forum.nginx.org> Message-ID: <548B393A.7080906@fearnothingproductions.net> Yep, I didn't realize the _module suffix was required. It built successfully, thank you! On 12/11/2014 11:12 PM, cubicdaiya wrote: > Hello. > > Why don't you apply a difference below? 
> > --- config.orig 2014-12-12 16:10:06.000000000 +0900 > +++ config 2014-12-12 16:06:19.000000000 +0900 > @@ -1,3 +1,3 @@ > -ngx_addon_name=ngx_http_hello_dolly > -HTTP_MODULES="$HTTP_MODULES ngx_http_hello_dolly" > +ngx_addon_name=ngx_http_hello_dolly_module > +HTTP_MODULES="$HTTP_MODULES ngx_http_hello_dolly_module" > NGX_ADDON_SRCS="$NGX_ADDON_SRCS $ngx_addon_dir/ngx_http_hello_dolly.c" > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255455,255456#msg-255456 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Fri Dec 12 19:17:54 2014 From: nginx-forum at nginx.us (khav) Date: Fri, 12 Dec 2014 14:17:54 -0500 Subject: nginx + php-fpm = file not found Message-ID: <7105901c4f5ee2adc2622f592e29bab5.NginxMailingListEnglish@forum.nginx.org> I am getting a "File not found" error with nginx and I have been trying to fix this for hours. The config looks similar to what I use on other sites, but I don't know why it doesn't work. HTML files work fine, though. index.php is located at /home/servergreek.com/public_html/www/index.php. Thanks for helping me out. My nginx version is 1.7.8 and my PHP version is PHP 5.5.20 (cli) (built: Dec 10 2014 14:03:09). server { listen 80; server_name servergreek.com 167.88.125.157; return 301 http://www.servergreek.com$request_uri; } server { listen 80 default_server; server_name www.servergreek.com; access_log /home/servergreek.com/public_html/logs/access_log main; error_log /home/servergreek.com/public_html/logs/error_log crit; root /home/servergreek.com/public_html/www; index index.php index.html index.htm; #Serve static content directly location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|woff)$ { access_log off; expires max; } location ~ ^/tmp/(.*)$ { deny all; } # Zend Opcache rules #location /opcache/ { # root /home/servergreek.com/public_html/www; # index index.php index.html index.htm; # auth_basic "Restricted Area
(Secured by Khavish)"; # auth_basic_user_file /var/www/servergreek.com/private/htpasswd; #} # Only requests to our Host are allowed if ($host !~ ^(servergreek.com|www.servergreek.com)$ ) { return 444; } location ~* \.php$ { root /home/servergreek.com/public_html/www; fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_connect_timeout 60; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; } #location ~ /\.ht { # deny all; #} } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255466,255466#msg-255466 From jaderhs5 at gmail.com Fri Dec 12 21:07:44 2014 From: jaderhs5 at gmail.com (Jader H. Silva) Date: Fri, 12 Dec 2014 19:07:44 -0200 Subject: nginx + php-fpm = file not found In-Reply-To: <7105901c4f5ee2adc2622f592e29bab5.NginxMailingListEnglish@forum.nginx.org> References: <7105901c4f5ee2adc2622f592e29bab5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Have you checked your php-fpm settings and log file? I'm getting an "X-Powered-By: PHP..." header, which indicates nginx is correctly sending the request to php-fpm. Also, I believe you don't really need to set root again inside the "location" blocks since it's already set in its "server" parent block.
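[Editorial note: Jader's point about "root" inheritance can be shown with a minimal sketch. The paths below are hypothetical; the point is that "root" set at server level is inherited by every location, so $document_root already resolves correctly without repeating it.]

```nginx
server {
    listen 80;
    server_name example.com;

    # Set once here; every location below inherits it.
    root /home/example.com/public_html/www;

    location / {
        # No "root" needed: a request for /img/a.png is served from
        # /home/example.com/public_html/www/img/a.png
        try_files $uri $uri/ =404;
    }

    location ~* \.php$ {
        # Also no "root" needed: $document_root points at the
        # inherited server-level root.
        fastcgi_pass unix:/tmp/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
```

Repeating "root" inside a location only matters when that location really serves files from a different directory.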
2014-12-12 17:17 GMT-02:00 khav : > > I am getting File not found error with nginx and i have been trying to fix > this for hours.The config look similar to what i use on other sites but i > don't know why it doesn't work.html files works fine thought.index.php > location is /home/servergreek.com/public_html/www/index.php.Thanks for > helping me out.My Nginx version : 1.7.8 & php version is PHP 5.5.20 (cli) > (built: Dec 10 2014 14:03:09) > > server { > listen 80; > server_name servergreek.com 167.88.125.157; > return 301 http://www.servergreek.com$request_uri; > } > server { > listen 80 default_server; > server_name www.servergreek.com; > access_log /home/servergreek.com/public_html/logs/access_log main; > error_log /home/servergreek.com/public_html/logs/error_log crit; > root /home/servergreek.com/public_html/www; > index index.php index.html index.htm; > > #Serve static content directly > location ~* \.(jpg|jpeg|gif|css|png|js|ico|html|woff)$ { > access_log off; > expires max; > } > > location ~ ^/tmp/(.*)$ { > deny all; > } > > > > # Zend Opcache rules > #location /opcache/ { > # root /home/servergreek.com/public_html/www; > # index index.php index.html index.htm; > # auth_basic "Restricted Area (Secured by Khavish)"; > # auth_basic_user_file > /var/www/servergreek.com/private/htpasswd; > #} > > # Only requests to our Host are allowed > if ($host !~ ^(servergreek.com|www.servergreek.com)$ ) { > return 444; > } > > location ~* \.php$ { > root /home/servergreek.com/public_html/www; > fastcgi_pass unix:/tmp/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_connect_timeout 60; > fastcgi_send_timeout 300; > fastcgi_read_timeout 300; > fastcgi_buffer_size 128k; > fastcgi_buffers 256 16k; > fastcgi_busy_buffers_size 256k; > > } > > #location ~ /\.ht { > # deny all; > #} > > > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255466,255466#msg-255466 > > 
_______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- att. Jader H. Silva -------------- next part -------------- An HTML attachment was scrubbed... URL: From neubyr at gmail.com Sat Dec 13 00:00:29 2014 From: neubyr at gmail.com (neubyr) Date: Fri, 12 Dec 2014 16:00:29 -0800 Subject: stop automatic trailing slash addition Message-ID: I was wondering if it's possible to have separate namespaces for '/test' and '/test/'. For example: location /test { root /usr/share/nginx/test; } location /test/ { root /usr/share/nginx/test-slash; try_files $uri default.txt; } I tried the above configuration, but nginx adds a trailing slash to all '/test' requests before they get processed by location directives. So all requests are going to the '/test/' location. Is there any way to get separate namespaces for /test and /test/? This is not for a real-world website. I am trying to learn more about the location directive. - N -------------- next part -------------- An HTML attachment was scrubbed... URL: From me at myconan.net Sat Dec 13 02:03:07 2014 From: me at myconan.net (Edho Arief) Date: Sat, 13 Dec 2014 11:03:07 +0900 Subject: stop automatic trailing slash addition In-Reply-To: References: Message-ID: On Sat, Dec 13, 2014 at 9:00 AM, neubyr wrote: > > I was wondering if it's possible to have separate namespaces for '/test' and > /test/'. For example: > > > location /test { > root /usr/share/nginx/test; > } > > location /test/ { > root /usr/share/nginx/test-slash; > try_files $uri default.txt; > } > > I tried above configuration, but nginx adds trailing slash to all '/test' > requests before they get processed by location directives. So all requests > are going to '/test/' location. > > Is there any way to get separate namespace for /test and /test/ ? This not > for real world website. I am trying to learn more about location directive. > try reloading your nginx server.
Either that or you have directory /usr/share/nginx/test/test. (or browser's cache) From francis at daoine.org Sat Dec 13 08:00:41 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 13 Dec 2014 08:00:41 +0000 Subject: stop automatic trailing slash addition In-Reply-To: References: Message-ID: <20141213080041.GI15670@daoine.org> On Fri, Dec 12, 2014 at 04:00:29PM -0800, neubyr wrote: Hi there, > I was wondering if it's possible to have separate namespaces for '/test' > and /test/'. They are different requests, and different request prefixes, so yes, they can be handled in different location{}s. > location /test { > root /usr/share/nginx/test; > } > > location /test/ { > root /usr/share/nginx/test-slash; > try_files $uri default.txt; > } > > I tried above configuration, but nginx adds trailing slash to all '/test' > requests before they get processed by location directives. So all requests > are going to '/test/' location. Have you evidence of this unexpected behaviour? A trailing slash should not be added by nginx before location{}-matching. > Is there any way to get separate namespace for /test and /test/ ? This not > for real world website. I am trying to learn more about location directive. What you have should do it. But I am not clear on what precisely you wish to happen. What request do you make? (Presumably something like "curl -i http://localhost/test" or "curl -i http://localhost/testA") What response do you get? (A http redirect? Or perhaps the content of a particular file on your filesystem?) What response do you want? (The content of a different file on your filesystem? Name the files, so it is clear where the expectation and the result are different) For what it's worth, enabling the debug log can show lots about what the location-matching engine is actually doing during a request. 
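[Editorial note: a minimal sketch of the debug log Francis mentions. This assumes nginx was built with the --with-debug configure option ("nginx -V" shows the configure arguments); without it, debug-level messages are not compiled in and will not appear.]

```nginx
# In nginx.conf: the "debug" level logs each step of request processing,
# including location matching, so you can see which location {} block
# a request such as /test or /test/ actually ends up in.
error_log /var/log/nginx/debug.log debug;
```

After reloading, grepping the log for "test location" shows the prefixes nginx tried for each request.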
f -- Francis Daly francis at daoine.org From francis at daoine.org Sat Dec 13 08:07:11 2014 From: francis at daoine.org (Francis Daly) Date: Sat, 13 Dec 2014 08:07:11 +0000 Subject: Content-Type header not proxied to downstream hosts In-Reply-To: <23c602fa95c46499c424745870171d2c.NginxMailingListEnglish@forum.nginx.org> References: <20141211082029.GH15670@daoine.org> <23c602fa95c46499c424745870171d2c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141213080711.GJ15670@daoine.org> On Thu, Dec 11, 2014 at 12:57:54PM -0500, hyperion wrote: Hi there, > Thanks again for your response. > > Using the info you provided I was able to debug and fix the issue. It's > actually an issue in the client library I'm using to make the request to > Nginx. You're welcome. It's good that you found your answer; and thanks for putting the result on-list so that the next person looking will be able to see the outcome. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat Dec 13 09:32:16 2014 From: nginx-forum at nginx.us (khav) Date: Sat, 13 Dec 2014 04:32:16 -0500 Subject: nginx + php-fpm = file not found In-Reply-To: References: Message-ID: I added cgi.fix_pathinfo=0 to php.ini and now I get "No input file specified". Thanks for helping me out, Jader. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255466,255472#msg-255472 From nitrol at mail.ru Sat Dec 13 11:49:15 2014 From: nitrol at mail.ru (м) Date: Sat, 13 Dec 2014 14:49:15 +0300 Subject: I install BigBlueButton 0.9.0-beta end have problem nginx and do not install bbb Message-ID: <1418471355.566695786@f330.i.mail.ru> I installed BigBlueButton 0.9.0-beta following this manual: https://code.google.com/p/bigbluebutton/wiki/090InstallationUbuntu, but bbb did not install and I now have a problem with nginx. How can I fix it? nginx /var# sudo apt-get install -f nginx Reading package lists... Done Building dependency tree Reading state information...
Done nginx is already the newest version. nginx set to manually installed. You might want to run 'apt-get -f install' to correct these: The following packages have unmet dependencies: nginx : Depends: nginx-core (>= 1.4.6-1ubuntu3.1) but it is not going to be installed or nginx-full (>= 1.4.6-1ubuntu3.1) but it is not going to be installed or nginx-light (>= 1.4.6-1ubuntu3.1) but it is not going to be installed or nginx-extras (>= 1.4.6-1ubuntu3.1) but it is not going to be installed or nginx-naxsi (>= 1.4.6-1ubuntu3.1) but it is not going to be installed Depends: nginx-core (< 1.4.6-1ubuntu3.1.1~) but it is not going to be installed or nginx-full (< 1.4.6-1ubuntu3.1.1~) but it is not going to be installed or nginx-light (< 1.4.6-1ubuntu3.1.1~) but it is not going to be installed or nginx-extras (< 1.4.6-1ubuntu3.1.1~) but it is not going to be installed or nginx-naxsi (< 1.4.6-1ubuntu3.1.1~) but it is not going to be installed nginx-full-dbg : Depends: nginx-full (= 1.4.6-1ubuntu3.1) but it is not going to be installed E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution). -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Dec 13 16:57:53 2014 From: nginx-forum at nginx.us (blackbic) Date: Sat, 13 Dec 2014 11:57:53 -0500 Subject: nginx & php-fpm [debug] 11: Resource temporarily unavailable Message-ID: I am having troubles with a wordpress plugin running a full batch of imports. I get this error when I enable the nginx debug. The result is I get an immediate 404 error afterwards and I am unable to fully import my data. I am pretty sure this is a bug, but I can't find the right answer to fix it. Please Help. **What I have done so far:** - It looked like a nginx bug and my nginx version was old, so I upgraded. No change. - It looked and still looks like it could be related to php-fpm. I've upgraded. No change. - I've disabled all of my plugins. No Change. 
**Server** - CentOS 6.0 - nginx v 1.0.15 - PHP-FPM v 5.3.3 (fpm-fcgi) - Webserver running 3 very low traffic sites - PHP-FPM is set to ondemand **PHP-FPM pool config:** - pm = ondemand - pm.process_idle_timeout = 50s - pm.max_children = 20 - pm.start_servers = 1 - pm.min_spare_servers = 3 - pm.max_spare_servers = 5 - pm.max_requests = 1024 - pm.status_path = /status I am unable to post my logs, so please check out the comparison **PHP-FPM log:** [12-Dec-2014 06:35:49.398315] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:50.399474] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:51.400765] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:52.402053] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:53.403346] DEBUG: pid 13384,
without idle child available .... I forked [12-Dec-2014 06:35:54.687602] DEBUG: pid 13384, fpm_event_loop(), line 411: event module triggered 1 events [12-Dec-2014 06:35:55.406455] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:56.407633] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:57.408949] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:58.410111] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children **PHP log:** 2014/12/12 06:35:02 [debug] 13350#0: *223 http header done 2014/12/12 06:35:54 [debug] 13350#0: accept on 0.0.0.0:80, ready: 1 2014/12/12 06:35:54 [debug] 13350#0: posix_memalign: 0000000002273A80:256 @16 2014/12/12 06:35:54 [debug] 13350#0: *226 accept: 66.249.67.123 fd:3 2014/12/12 06:35:54 [debug] 13350#0: *226 event timer add: 3: 60000:1418387814684 2014/12/12 06:35:54 [debug] 13350#0: *226 epoll add event: fd:3 op:1 ev:80000001 2014/12/12 06:35:54 [debug] 13350#0: accept() not ready (11: Resource temporarily unavailable) 2014/12/12 06:35:54 [debug] 13350#0: *226 malloc: 0000000002274AF0:1296 2014/12/12 06:35:54 [debug] 13350#0: *226 posix_memalign: 0000000002273BE0:256 @16 2014/12/12 06:35:54 [debug] 13350#0: *226 malloc: 000000000232F4B0:131072 2014/12/12 06:35:54 [debug] 13350#0: *226 posix_memalign: 00000000021F7590:4096 @16 2014/12/12 06:35:54 [debug] 13350#0: *226 http process request line 2014/12/12 06:35:54 [debug] 13350#0: *226 recv: fd:3 315 of 131072 2014/12/12 06:35:54 [debug] 13350#0: *226 http request line: "GET /stores/giltcity/page/78/ HTTP/1.1" 2014/12/12 06:35:54 [debug] 13350#0: *226 http uri: "/stores/giltcity/page/78/" 2014/12/12 06:35:54 [debug] 13350#0: 
*226 http args: "" 2014/12/12 06:35:54 [debug] 13350#0: *226 http exten: "" 2014/12/12 06:35:54 [debug] 13350#0: *226 http process request header line 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Host: mydiscountman.com" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Connection: Keep-alive" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "From: googlebot(at)googlebot.com" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Accept-Encoding: gzip,deflate" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header done 2014/12/12 06:37:11 [debug] 13350#0: accept on 0.0.0.0:80, ready: 1 **Nginx Global Config /etc/nginx/nginx.conf :** user apache; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 768; multi_accept on; use epoll; } http { # Let NGINX get the real client IP for its access logs set_real_ip_from 127.0.0.1; real_ip_header X-Forwarded-For; # Basic Settings sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 20; client_max_body_size 15m; client_body_timeout 60; client_header_timeout 60; client_body_buffer_size 128k; client_header_buffer_size 128k; large_client_header_buffers 4 16k; send_timeout 60; reset_timedout_connection on; types_hash_max_size 8192; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; # Logging Settings # access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; # Log Format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # Gzip Settings gzip on; gzip_static on; gzip_disable 
"msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_min_length 512; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml; # Virtual Host Configs include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } **Nginx Site Config /etc/nginx/sites-available/testme :** server { listen 80; server_name testme.XXXXXXX.com; port_in_redirect off; server_tokens off; autoindex off; client_max_body_size 15m; client_body_buffer_size 128k; access_log /var/log/nginx/testme/access_log main; error_log /var/log/nginx/testme/error_log; root /var/www/testme; index index.php index.html index.htm; try_files $uri $uri/ /index.php; error_page 404 /404error.html; location = /var/www/testme/404error.html { internal; } error_page 500 /500error.html; location = /var/www/testme/500error.html { internal; } # Define default caching of 24h expires 8s; add_header Pragma public; add_header Cache-Control "max-age=86400, public, must-revalidate, proxy-revalidate"; # Redirect server error pages to static 50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # Don't log robots.txt requests location = /robots.txt { allow all; log_not_found off; access_log off; } location /phpmyadmin { auth_basic "Restricted"; auth_basic_user_file /var/www/testme/phpmyadmin/.htpasswd; try_files $uri $uri/ index.html index.php; index index.html index.htm index.php; location ~ /\.ht { deny all; } location ~* ^.+\.(css|js)$ { #try_files $uri $uri/; #root /var/www/testme/phpmyadmin; access_log off; } location ~ ^.+\.php { try_files $uri $uri/ *.php; fastcgi_split_path_info ^(.+.php)(.*)$; fastcgi_pass unix:/var/run/php-fpm.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME 
/var/www/testme$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } } # Rewrite for versioned CSS+JS via filemtime # location ~* ^.+\.(css|js) { # rewrite ^(.+)\.(\d+)\.(css|js)$ $1.$3 last; # expires 31536000s; # access_log on; # log_not_found on; # add_header Pragma public; # add_header Cache-Control "max-age=31536000, public"; # } # Aggressive caching for static files # If you alter static files often, please use # add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; location ~* \. (asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|ogv|otf|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|t?gz|tif|tiff|ttf|wav|webm|wma|woff|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; access_log on; log_not_found on; add_header Pragma public; add_header Cache-Control "max-age=31536000, public"; } location ~* (^(?!(?:(?!(php|inc)).)*/uploads/).*?(php)) { set $php_root $document_root; if ($request_uri ~* /phpmyadmin) { #set $php_root /usr/share; } try_files $uri = 404; fastcgi_split_path_info ^(.+.php)(.*)$; fastcgi_pass unix:/var/run/php-fpm.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 3600; fastcgi_send_timeout 3600; fastcgi_read_timeout 3600; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } } Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,255476,255476#msg-255476 From nginx-forum at nginx.us Sat Dec 13 16:57:53 2014 From: nginx-forum at nginx.us (blackbic) Date: Sat, 13 Dec 2014 11:57:53 -0500 Subject: nginx & php-fpm [debug] 11: Resource temporarily unavailable Message-ID: I am having trouble with a WordPress plugin running a full batch of imports. I get this error when I enable nginx debug logging. The result is an immediate 404 error afterwards, and I am unable to fully import my data. I am pretty sure this is a bug, but I can't find the right fix. Please help. **What I have done so far:** - It looked like an nginx bug and my nginx version was old, so I upgraded. No change. - It looked and still looks like it could be related to php-fpm. I've upgraded. No change. - I've disabled all of my plugins. No change. **Server** - CentOS 6.0 - nginx v 1.0.15 - PHP-FPM v 5.3.3 (fpm-fcgi) - Webserver running 3 very low traffic sites - PHP-FPM is set to ondemand **PHP-FPM pool config:** - pm = ondemand - pm.process_idle_timeout = 50s - pm.max_children = 20 - pm.start_servers = 1 - pm.min_spare_servers = 3 - pm.max_spare_servers = 5 - pm.max_requests = 1024 - pm.status_path = /status I am unable to post my full logs, so please compare the two excerpts below. **PHP-FPM log:** [12-Dec-2014 06:35:49.398315] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:50.399474] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:51.400765] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:52.402053] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:53.403346] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:53.417762] DEBUG: pid 13384, fpm_got_signal(), line 72: received SIGCHLD [12-Dec-2014 06:35:53.417836] DEBUG: pid 13384, fpm_children_bury(), line 254: [pool www] child 18327 has been killed by the process managment after 52.123053 seconds from start [12-Dec-2014 06:35:53.417863] DEBUG: pid 13384, fpm_event_loop(), line 411: event module triggered 1 events [12-Dec-2014 06:35:54.404978] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 0 spare children [12-Dec-2014 06:35:54.687559] DEBUG: pid 13384, fpm_children_make(), line 421: [pool www] child 18397 started [12-Dec-2014 06:35:54.687593] DEBUG: pid 13384, fpm_pctl_on_socket_accept(), line 536: [pool www] got accept without idle child available .... I forked [12-Dec-2014 06:35:54.687602] DEBUG: pid 13384, fpm_event_loop(), line 411: event module triggered 1 events [12-Dec-2014 06:35:55.406455] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:56.407633] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:57.408949] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children [12-Dec-2014 06:35:58.410111] DEBUG: pid 13384, fpm_pctl_perform_idle_server_maintenance(), line 362: [pool www] currently 0 active children, 1 spare children **nginx debug log:** 2014/12/12 06:35:02 [debug] 13350#0: *223 http header done 2014/12/12 06:35:54 [debug] 13350#0: accept on 0.0.0.0:80, ready: 1 2014/12/12 06:35:54 [debug] 13350#0: posix_memalign: 0000000002273A80:256 @16 2014/12/12 06:35:54 [debug] 13350#0: *226 accept: 66.249.67.123 fd:3 2014/12/12 06:35:54 [debug] 13350#0: *226 event timer add:
3: 60000:1418387814684 2014/12/12 06:35:54 [debug] 13350#0: *226 epoll add event: fd:3 op:1 ev:80000001 2014/12/12 06:35:54 [debug] 13350#0: accept() not ready (11: Resource temporarily unavailable) 2014/12/12 06:35:54 [debug] 13350#0: *226 malloc: 0000000002274AF0:1296 2014/12/12 06:35:54 [debug] 13350#0: *226 posix_memalign: 0000000002273BE0:256 @16 2014/12/12 06:35:54 [debug] 13350#0: *226 malloc: 000000000232F4B0:131072 2014/12/12 06:35:54 [debug] 13350#0: *226 posix_memalign: 00000000021F7590:4096 @16 2014/12/12 06:35:54 [debug] 13350#0: *226 http process request line 2014/12/12 06:35:54 [debug] 13350#0: *226 recv: fd:3 315 of 131072 2014/12/12 06:35:54 [debug] 13350#0: *226 http request line: "GET /stores/giltcity/page/78/ HTTP/1.1" 2014/12/12 06:35:54 [debug] 13350#0: *226 http uri: "/stores/giltcity/page/78/" 2014/12/12 06:35:54 [debug] 13350#0: *226 http args: "" 2014/12/12 06:35:54 [debug] 13350#0: *226 http exten: "" 2014/12/12 06:35:54 [debug] 13350#0: *226 http process request header line 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Host: mydiscountman.com" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Connection: Keep-alive" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "From: googlebot(at)googlebot.com" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "Accept-Encoding: gzip,deflate" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header: "User-Agent: Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" 2014/12/12 06:35:54 [debug] 13350#0: *226 http header done 2014/12/12 06:37:11 [debug] 13350#0: accept on 0.0.0.0:80, ready: 1 **Nginx Global Config /etc/nginx/nginx.conf :** user apache; worker_processes 1; pid /var/run/nginx.pid; events { worker_connections 768; multi_accept on; use epoll; } http { # Let NGINX get the real client IP for its access logs set_real_ip_from 
127.0.0.1; real_ip_header X-Forwarded-For; # Basic Settings sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 20; client_max_body_size 15m; client_body_timeout 60; client_header_timeout 60; client_body_buffer_size 128k; client_header_buffer_size 128k; large_client_header_buffers 4 16k; send_timeout 60; reset_timedout_connection on; types_hash_max_size 8192; server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; # Logging Settings # access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log debug; # Log Format log_format main '$remote_addr - $remote_user [$time_local] ' '"$request" $status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; # Gzip Settings gzip on; gzip_static on; gzip_disable "msie6"; gzip_vary on; gzip_proxied any; gzip_comp_level 6; gzip_min_length 512; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/css text/javascript text/xml text/plain text/x-component application/javascript application/x-javascript application/json application/xml application/rss+xml font/truetype application/x-font-ttf font/opentype application/vnd.ms-fontobject image/svg+xml; # Virtual Host Configs include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } **Nginx Site Config /etc/nginx/sites-available/testme :** server { listen 80; server_name testme.XXXXXXX.com; port_in_redirect off; server_tokens off; autoindex off; client_max_body_size 15m; client_body_buffer_size 128k; access_log /var/log/nginx/testme/access_log main; error_log /var/log/nginx/testme/error_log; root /var/www/testme; index index.php index.html index.htm; try_files $uri $uri/ /index.php; error_page 404 /404error.html; location = /var/www/testme/404error.html { internal; } error_page 500 /500error.html; location = /var/www/testme/500error.html { internal; } # Define default caching of 24h expires 8s; add_header Pragma public; 
add_header Cache-Control "max-age=86400, public, must-revalidate, proxy-revalidate"; # Redirect server error pages to static 50x.html error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } # Don't log robots.txt requests location = /robots.txt { allow all; log_not_found off; access_log off; } location /phpmyadmin { auth_basic "Restricted"; auth_basic_user_file /var/www/testme/phpmyadmin/.htpasswd; try_files $uri $uri/ index.html index.php; index index.html index.htm index.php; location ~ /\.ht { deny all; } location ~* ^.+\.(css|js)$ { #try_files $uri $uri/; #root /var/www/testme/phpmyadmin; access_log off; } location ~ ^.+\.php { try_files $uri $uri/ *.php; fastcgi_split_path_info ^(.+.php)(.*)$; fastcgi_pass unix:/var/run/php-fpm.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /var/www/testme$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 60; fastcgi_send_timeout 180; fastcgi_read_timeout 180; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } } # Rewrite for versioned CSS+JS via filemtime # location ~* ^.+\.(css|js) { # rewrite ^(.+)\.(\d+)\.(css|js)$ $1.$3 last; # expires 31536000s; # access_log on; # log_not_found on; # add_header Pragma public; # add_header Cache-Control "max-age=31536000, public"; # } # Aggressive caching for static files # If you alter static files often, please use # add_header Cache-Control "max-age=31536000, public, must-revalidate, proxy-revalidate"; location ~* \. 
(asf|asx|wax|wmv|wmx|avi|bmp|class|divx|doc|docx|eot|exe|gif|gz|gzip|ico|jpg|jpeg|jpe|mdb|mid|midi|mov|qt|mp3|m4a|mp4|m4v|mpeg|mpg|mpe|mpp|odb|odc|odf|odg|odp|ods|odt|ogg|ogv|otf|pdf|png|pot|pps|ppt|pptx|ra|ram|svg|svgz|swf|tar|t?gz|tif|tiff|ttf|wav|webm|wma|woff|wri|xla|xls|xlsx|xlt|xlw|zip)$ { expires 31536000s; access_log on; log_not_found on; add_header Pragma public; add_header Cache-Control "max-age=31536000, public"; } location ~* (^(?!(?:(?!(php|inc)).)*/uploads/).*?(php)) { set $php_root $document_root; if ($request_uri ~* /phpmyadmin) { #set $php_root /usr/share; } try_files $uri = 404; fastcgi_split_path_info ^(.+.php)(.*)$; fastcgi_pass unix:/var/run/php-fpm.socket; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $php_root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; include /etc/nginx/fastcgi_params; fastcgi_intercept_errors on; fastcgi_ignore_client_abort off; fastcgi_connect_timeout 3600; fastcgi_send_timeout 3600; fastcgi_read_timeout 3600; fastcgi_buffer_size 128k; fastcgi_buffers 4 256k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255477,255477#msg-255477 From nginx-forum at nginx.us Sat Dec 13 21:51:37 2014 From: nginx-forum at nginx.us (blackbic) Date: Sat, 13 Dec 2014 16:51:37 -0500 Subject: nginx & php-fpm [debug] 11: Resource temporarily unavailable In-Reply-To: References: Message-ID: so, the forking command is not a bug. The PHP process stopping 100% due to the child being killed is the fundamental problem. Why or how I am still unsure. Any ideas anyone? 
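A possible reading of the fpm log above (an inference from this thread's excerpts, not a confirmed diagnosis): the child was reaped just past the configured 50s idle window (pm.process_idle_timeout = 50s; the log shows the kill 52.1 seconds after start), which is routine ondemand housekeeping rather than a crash. If that reading is right, a longer idle window would keep workers alive between the batch requests of a long import. The values below are purely illustrative:

```ini
; Illustrative php-fpm pool fragment (www.conf); not a recommendation.
pm = ondemand
pm.max_children = 20
; Widen the idle window so workers survive gaps between batch requests.
pm.process_idle_timeout = 300s
; Recycle workers less aggressively during long-running imports.
pm.max_requests = 1024
```

This does not explain a 500 on its own, since ondemand is expected to restart children on the next accept; it only removes one variable.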
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255477,255478#msg-255478 From nginx-forum at nginx.us Sat Dec 13 22:53:35 2014 From: nginx-forum at nginx.us (blackbic) Date: Sat, 13 Dec 2014 17:53:35 -0500 Subject: nginx & php-fpm [debug] 11: Resource temporarily unavailable In-Reply-To: References: Message-ID: <3a52d7204338f30da6f613ce264d2198.NginxMailingListEnglish@forum.nginx.org> *New Development* The core PHP error logs are below, but the site-specific error logs show this. The memory setting in my php.ini file is: memory_size 1024; so it's not PHP's memory limit. I've also disabled all my plugins, so it's not the memory limit set by my security plugin. 2014/12/13 16:12:40 [error] 28264#0: *212 FastCGI sent in stderr: "PHP message: PHP Fatal error: Allowed memory size of 268435456 bytes exhausted (tried to allocate 71 bytes) in /var/www/.. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255477,255479#msg-255479 From neubyr at gmail.com Sat Dec 13 23:53:32 2014 From: neubyr at gmail.com (neubyr) Date: Sat, 13 Dec 2014 15:53:32 -0800 Subject: stop automatic trailing slash addition In-Reply-To: <20141213080041.GI15670@daoine.org> References: <20141213080041.GI15670@daoine.org> Message-ID: On Sat, Dec 13, 2014 at 12:00 AM, Francis Daly wrote: > > On Fri, Dec 12, 2014 at 04:00:29PM -0800, neubyr wrote: > > Hi there, > > > I was wondering if it's possible to have separate namespaces for '/test' > > and '/test/'. > > They are different requests, and different request prefixes, so yes, > they can be handled in different location{}s. > > > location /test { > > root /usr/share/nginx/test; > > } > > > > location /test/ { > > root /usr/share/nginx/test-slash; > > try_files $uri default.txt; > > } > > > > I tried above configuration, but nginx adds trailing slash to all '/test' > > requests before they get processed by location directives. So all > requests > > are going to '/test/' location. > > Have you evidence of this unexpected behaviour?
> > A trailing slash should not be added by nginx before location{}-matching. > > > Is there any way to get separate namespace for /test and /test/ ? This > not > > for real world website. I am trying to learn more about location > directive. > > What you have should do it. But I am not clear on what precisely you > wish to happen. > > What request do you make? (Presumably something like "curl -i > http://localhost/test" or "curl -i http://localhost/testA") > > What response do you get? (A http redirect? Or perhaps the content of > a particular file on your filesystem?) > > What response do you want? (The content of a different file on your > filesystem? Name the files, so it is clear where the expectation and > the result are different) > > For what it's worth, enabling the debug log can show lots about what > the location-matching engine is actually doing during a request. > > > Thank you for pointing out the debug log. I think that helped explain this behavior. It seems nginx is adding the slash because the URI matches a corresponding directory rather than a file name. I thought nginx would return 404 in this case, but it adds a trailing slash when a matching directory is found. After the slash is added, the URI becomes /test/ and hence matches the next location block. I hope my reading of the debug log is correct. Let me know if I am missing anything. Below is a snippet from the debug log.
2014/12/13 23:20:00 [debug] 15839#0: epoll: fd:6 ev:0001 d:00007F0AB9F9E010 2014/12/13 23:20:00 [debug] 15839#0: accept on 0.0.0.0:80, ready: 0 2014/12/13 23:20:00 [debug] 15839#0: posix_memalign: 00007F0ABBA0C440:256 @16 2014/12/13 23:20:00 [debug] 15839#0: *6 accept: 50.136.134.241 fd:3 2014/12/13 23:20:00 [debug] 15839#0: posix_memalign: 00007F0ABBA27070:256 @16 2014/12/13 23:20:00 [debug] 15839#0: *6 event timer add: 3: 60000:1418512860643 2014/12/13 23:20:00 [debug] 15839#0: *6 reusable connection: 1 2014/12/13 23:20:00 [debug] 15839#0: *6 epoll add event: fd:3 op:1 ev:80002001 2014/12/13 23:20:00 [debug] 15839#0: timer delta: 129801 2014/12/13 23:20:00 [debug] 15839#0: posted events 0000000000000000 2014/12/13 23:20:00 [debug] 15839#0: worker cycle 2014/12/13 23:20:00 [debug] 15839#0: epoll timer: 60000 2014/12/13 23:20:00 [debug] 15839#0: epoll: fd:3 ev:0001 d:00007F0AB9F9E1C1 2014/12/13 23:20:00 [debug] 15839#0: *6 http wait request handler 2014/12/13 23:20:00 [debug] 15839#0: *6 malloc: 00007F0ABB99BC90:1024 2014/12/13 23:20:00 [debug] 15839#0: *6 recv: fd:3 77 of 1024 2014/12/13 23:20:00 [debug] 15839#0: *6 reusable connection: 0 2014/12/13 23:20:00 [debug] 15839#0: *6 posix_memalign: 00007F0ABB9E6EE0:4096 @16 2014/12/13 23:20:00 [debug] 15839#0: *6 http process request line 2014/12/13 23:20:00 [debug] 15839#0: *6 http request line: "GET /test HTTP/1.1" 2014/12/13 23:20:00 [debug] 15839#0: *6 http uri: "/test" 2014/12/13 23:20:00 [debug] 15839#0: *6 http args: "" 2014/12/13 23:20:00 [debug] 15839#0: *6 http exten: "" 2014/12/13 23:20:00 [debug] 15839#0: *6 http process request header line 2014/12/13 23:20:00 [debug] 15839#0: *6 http header: "User-Agent: curl/7.30.0" 2014/12/13 23:20:00 [debug] 15839#0: *6 http header: "Host: example.org" 2014/12/13 23:20:00 [debug] 15839#0: *6 http header: "Accept: */*" 2014/12/13 23:20:00 [debug] 15839#0: *6 http header done 2014/12/13 23:20:00 [debug] 15839#0: *6 event timer del: 3: 1418512860643 2014/12/13 23:20:00 
[debug] 15839#0: *6 generic phase: 0 2014/12/13 23:20:00 [debug] 15839#0: *6 rewrite phase: 1 2014/12/13 23:20:00 [debug] 15839#0: *6 test location: "/50x.html" 2014/12/13 23:20:00 [debug] 15839#0: *6 test location: "/test" 2014/12/13 23:20:00 [debug] 15839#0: *6 using configuration "/test" 2014/12/13 23:20:00 [debug] 15839#0: *6 http cl:-1 max:1048576 2014/12/13 23:20:00 [debug] 15839#0: *6 rewrite phase: 3 2014/12/13 23:20:00 [debug] 15839#0: *6 post rewrite phase: 4 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 5 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 6 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 7 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 8 2014/12/13 23:20:00 [debug] 15839#0: *6 access phase: 9 2014/12/13 23:20:00 [debug] 15839#0: *6 access phase: 10 2014/12/13 23:20:00 [debug] 15839#0: *6 post access phase: 11 2014/12/13 23:20:00 [debug] 15839#0: *6 try files phase: 12 2014/12/13 23:20:00 [debug] 15839#0: *6 http script var: "/test" 2014/12/13 23:20:00 [debug] 15839#0: *6 trying to use file: "/test" "/usr/share/nginx/test/test" 2014/12/13 23:20:00 [debug] 15839#0: *6 trying to use file: "/test/default.txt" "/usr/share/nginx/test/test/default.txt" 2014/12/13 23:20:00 [debug] 15839#0: *6 internal redirect: "/test/default.txt?" 
2014/12/13 23:20:00 [debug] 15839#0: *6 rewrite phase: 1 2014/12/13 23:20:00 [debug] 15839#0: *6 test location: "/50x.html" 2014/12/13 23:20:00 [debug] 15839#0: *6 test location: "/test" 2014/12/13 23:20:00 [debug] 15839#0: *6 test location: "/" 2014/12/13 23:20:00 [debug] 15839#0: *6 using configuration "/test/" 2014/12/13 23:20:00 [debug] 15839#0: *6 http cl:-1 max:1048576 2014/12/13 23:20:00 [debug] 15839#0: *6 rewrite phase: 3 2014/12/13 23:20:00 [debug] 15839#0: *6 post rewrite phase: 4 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 5 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 6 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 7 2014/12/13 23:20:00 [debug] 15839#0: *6 generic phase: 8 2014/12/13 23:20:00 [debug] 15839#0: *6 access phase: 9 2014/12/13 23:20:00 [debug] 15839#0: *6 access phase: 10 2014/12/13 23:20:00 [debug] 15839#0: *6 post access phase: 11 2014/12/13 23:20:00 [debug] 15839#0: *6 try files phase: 12 2014/12/13 23:20:00 [debug] 15839#0: *6 posix_memalign: 00007F0ABB9A8200:4096 @16 2014/12/13 23:20:00 [debug] 15839#0: *6 http script var: "/test/default.txt" 2014/12/13 23:20:00 [debug] 15839#0: *6 trying to use file: "/test/default.txt" "/usr/share/nginx/test-slash/test/default.txt" 2014/12/13 23:20:00 [debug] 15839#0: *6 try file uri: "/test/default.txt" 2014/12/13 23:20:00 [debug] 15839#0: *6 content phase: 13 2014/12/13 23:20:00 [debug] 15839#0: *6 content phase: 14 2014/12/13 23:20:00 [debug] 15839#0: *6 content phase: 15 2014/12/13 23:20:00 [debug] 15839#0: *6 content phase: 16 2014/12/13 23:20:00 [debug] 15839#0: *6 content phase: 17 2014/12/13 23:20:00 [debug] 15839#0: *6 content phase: 18 2014/12/13 23:20:00 [debug] 15839#0: *6 http filename: "/usr/share/nginx/test-slash/test/default.txt" 2014/12/13 23:20:00 [debug] 15839#0: *6 add cleanup: 00007F0ABB9A8260 2014/12/13 23:20:00 [debug] 15839#0: *6 http static fd: 10 2014/12/13 23:20:00 [debug] 15839#0: *6 http set discard body 2014/12/13 23:20:00 [debug] 15839#0: 
*6 xslt filter header 2014/12/13 23:20:00 [debug] 15839#0: *6 HTTP/1.1 200 OK Server: nginx/1.6.2 Date: Sat, 13 Dec 2014 23:20:00 GMT Content-Type: text/plain Content-Length: 19 Last-Modified: Sat, 13 Dec 2014 23:17:22 GMT Connection: keep-alive ETag: "548cc902-13" Accept-Ranges: bytes 2014/12/13 23:20:00 [debug] 15839#0: *6 write new buf t:1 f:0 00007F0ABB9A8430, pos 00007F0ABB9A8430, size: 236 file: 0, size: 0 2014/12/13 23:20:00 [debug] 15839#0: *6 http write filter: l:0 f:0 s:236 2014/12/13 23:20:00 [debug] 15839#0: *6 http output filter "/test/default.txt?" 2014/12/13 23:20:00 [debug] 15839#0: *6 http copy filter: "/test/default.txt?" 2014/12/13 23:20:00 [debug] 15839#0: *6 image filter 2014/12/13 23:20:00 [debug] 15839#0: *6 xslt filter body ... ... --- Thanks, - N -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at indietorrent.org Sat Dec 13 23:58:54 2014 From: ben at indietorrent.org (Ben Johnson) Date: Sat, 13 Dec 2014 18:58:54 -0500 Subject: nginx: [emerg] unknown directive "upload_pass" after dist-upgrade from Ubuntu 12.04 LTS to 14.04 LTS In-Reply-To: <4787270.sAYXIEeGid@vbart-laptop> References: <53F696B3.4090001@indietorrent.org> <2840547.D8BRW6TQ5S@vbart-laptop> <53F7BC0E.2090708@indietorrent.org> <4787270.sAYXIEeGid@vbart-laptop> Message-ID: <548CD2BE.4040708@indietorrent.org> On 8/22/2014 7:12 PM, Valentin V. Bartenev wrote: > On Friday 22 August 2014 17:54:22 Ben Johnson wrote: > [..] >> >> Thank you kindly, Valentin. That explains it! >> >> Well, that's a real disappointment. Is it no longer possible for nginx >> to handle uploads in a similar manner? This was one of my favorite >> features of nginx: the ability to offload large file uploads from PHP >> onto nginx. >> > [..] > > Could you elaborate a bit what's the ability you're speaking about? > > By default, nginx is good enough in offloading large file uploads. 
> > For example: > > location /upload { > fastcgi_pass backend; > > fastcgi_pass_request_body off; > fastcgi_param UPLOADED_FILENAME $request_body_file; > > client_body_in_file_only on; > } > > With the configuration above nginx only passes the name of the > uploaded file. > > Reference: > http://nginx.org/r/fastcgi_pass_request_body > http://nginx.org/r/client_body_in_file_only > http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request_body_file > > wbr, Valentin V. Bartenev > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Hello, I apologize for the 4-month delay in responding. :) In particular, I need to have the ability to track upload progress in a manner that is conducive to displaying the percentage complete via progress bar. Is this still possible, absent the defunct module at http://wiki.nginx.org/HttpUploadProgressModule ? Thank you! -Ben From vbart at nginx.com Sun Dec 14 00:10:57 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 14 Dec 2014 03:10:57 +0300 Subject: nginx: [emerg] unknown directive "upload_pass" after dist-upgrade from Ubuntu 12.04 LTS to 14.04 LTS In-Reply-To: <548CD2BE.4040708@indietorrent.org> References: <53F696B3.4090001@indietorrent.org> <4787270.sAYXIEeGid@vbart-laptop> <548CD2BE.4040708@indietorrent.org> Message-ID: <2558436.XRGpHQqAf4@vbart-laptop> On Saturday 13 December 2014 18:58:54 Ben Johnson wrote: [..] > Hello, > > I apologize for the 4-month delay in responding. :) > > In particular, I need to have the ability to track upload progress in a > manner that is conducive to displaying the percentage complete via > progress bar. > > Is this still possible, absent the defunct module at > http://wiki.nginx.org/HttpUploadProgressModule ? > You can easily implement it on the browser side using the progress events of XMLHttpRequest, which is supported by all modern browsers. wbr, Valentin V. 
Bartenev From vbart at nginx.com Sun Dec 14 00:25:52 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Sun, 14 Dec 2014 03:25:52 +0300 Subject: stop automatic trailing slash addition In-Reply-To: References: <20141213080041.GI15670@daoine.org> Message-ID: <8254339.fzNoWNlvKl@vbart-laptop> On Saturday 13 December 2014 15:53:32 neubyr wrote: [..] > Thank you for pointing out debugging log. I think that helped in explaining > this behavior. > > It seems like nginx is adding slash as uri name matches with corresponding > directory and not file name. I thought nginx will return 404 in this case, > but it adds trailing slash when matching directory is found. > > After adding trailing slash uri becomes /test/ and hence it matches next > location block. > > Hope my understanding of debugging log is correct. Let me know if I am > missing anything. [..] From the debug log it looks like you have the try_files directive in your "/test" location block. And the config you have mentioned before isn't the one you are actually using. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Sun Dec 14 02:34:08 2014 From: nginx-forum at nginx.us (hmtmcse) Date: Sat, 13 Dec 2014 21:34:08 -0500 Subject: Post Request Time Out Message-ID: <21db210cf39f9e3e420570e3100f9235.NginxMailingListEnglish@forum.nginx.org> Hi, I created a module which parses a POST request. It is able to parse the first POST, but the others time out. What is the problem?
I register the parser with ngx_http_read_client_request_body(r, ngx_http_ab_router_post_read); and try to parse the request with: b = r->request_body->bufs->buf; if (ngx_buf_size(b) == 0) { return NGX_OK; } buf = b->pos; last = b->last; Please help me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255484,255484#msg-255484 From neubyr at gmail.com Sun Dec 14 07:39:46 2014 From: neubyr at gmail.com (neubyr) Date: Sat, 13 Dec 2014 23:39:46 -0800 Subject: stop automatic trailing slash addition In-Reply-To: <8254339.fzNoWNlvKl@vbart-laptop> References: <20141213080041.GI15670@daoine.org> <8254339.fzNoWNlvKl@vbart-laptop> Message-ID: On Sat, Dec 13, 2014 at 4:25 PM, Valentin V. Bartenev wrote: > > On Saturday 13 December 2014 15:53:32 neubyr wrote: > [..] > > Thank you for pointing out debugging log. I think that helped in > explaining > > this behavior. > > > > It seems like nginx is adding slash as uri name matches with > corresponding > > directory and not file name. I thought nginx will return 404 in this > case, > > but it adds trailing slash when matching directory is found. > > > > After adding trailing slash uri becomes /test/ and hence it matches next > > location block. > > > > Hope my understanding of debugging log is correct. Let me know if I am > > missing anything. > [..] > > From the debug log it looks like that you have the try_files directive > in your "/test" location block. And the config you have mentioned before > isn't the one you are actually using. > > wbr, Valentin V. Bartenev > > > Thanks for pointing it out. It's giving the same result even without the try_files directive in the '/test' location block.
2014/12/14 07:33:09 [debug] 16414#0: epoll: fd:6 ev:0001 d:00007F1B949F4010 2014/12/14 07:33:09 [debug] 16414#0: accept on 0.0.0.0:80, ready: 0 2014/12/14 07:33:09 [debug] 16414#0: posix_memalign: 00007F1B965F9440:256 @16 2014/12/14 07:33:09 [debug] 16414#0: *2 accept: 50.136.134.241 fd:3 2014/12/14 07:33:09 [debug] 16414#0: posix_memalign: 00007F1B96614070:256 @16 2014/12/14 07:33:09 [debug] 16414#0: *2 event timer add: 3: 60000:1418542449679 2014/12/14 07:33:09 [debug] 16414#0: *2 reusable connection: 1 2014/12/14 07:33:09 [debug] 16414#0: *2 epoll add event: fd:3 op:1 ev:80002001 2014/12/14 07:33:09 [debug] 16414#0: timer delta: 112608 2014/12/14 07:33:09 [debug] 16414#0: posted events 0000000000000000 2014/12/14 07:33:09 [debug] 16414#0: worker cycle 2014/12/14 07:33:09 [debug] 16414#0: epoll timer: 60000 2014/12/14 07:33:09 [debug] 16414#0: epoll: fd:3 ev:0001 d:00007F1B949F41C1 2014/12/14 07:33:09 [debug] 16414#0: *2 http wait request handler 2014/12/14 07:33:09 [debug] 16414#0: *2 malloc: 00007F1B96588C90:1024 2014/12/14 07:33:09 [debug] 16414#0: *2 recv: fd:3 77 of 1024 2014/12/14 07:33:09 [debug] 16414#0: *2 reusable connection: 0 2014/12/14 07:33:09 [debug] 16414#0: *2 posix_memalign: 00007F1B965D3EE0:4096 @16 2014/12/14 07:33:09 [debug] 16414#0: *2 http process request line 2014/12/14 07:33:09 [debug] 16414#0: *2 http request line: "GET /test HTTP/1.1" 2014/12/14 07:33:09 [debug] 16414#0: *2 http uri: "/test" 2014/12/14 07:33:09 [debug] 16414#0: *2 http args: "" 2014/12/14 07:33:09 [debug] 16414#0: *2 http exten: "" 2014/12/14 07:33:09 [debug] 16414#0: *2 http process request header line 2014/12/14 07:33:09 [debug] 16414#0: *2 http header: "User-Agent: curl/7.30.0" 2014/12/14 07:33:09 [debug] 16414#0: *2 http header: "Host: example.org" 2014/12/14 07:33:09 [debug] 16414#0: *2 http header: "Accept: */*" 2014/12/14 07:33:09 [debug] 16414#0: *2 http header done 2014/12/14 07:33:09 [debug] 16414#0: *2 event timer del: 3: 1418542449679 2014/12/14 07:33:09 
[debug] 16414#0: *2 generic phase: 0 2014/12/14 07:33:09 [debug] 16414#0: *2 rewrite phase: 1 2014/12/14 07:33:09 [debug] 16414#0: *2 test location: "/50x.html" 2014/12/14 07:33:09 [debug] 16414#0: *2 test location: "/test" 2014/12/14 07:33:09 [debug] 16414#0: *2 using configuration "/test" 2014/12/14 07:33:09 [debug] 16414#0: *2 http cl:-1 max:1048576 2014/12/14 07:33:09 [debug] 16414#0: *2 rewrite phase: 3 2014/12/14 07:33:09 [debug] 16414#0: *2 post rewrite phase: 4 2014/12/14 07:33:09 [debug] 16414#0: *2 generic phase: 5 2014/12/14 07:33:09 [debug] 16414#0: *2 generic phase: 6 2014/12/14 07:33:09 [debug] 16414#0: *2 generic phase: 7 2014/12/14 07:33:09 [debug] 16414#0: *2 generic phase: 8 2014/12/14 07:33:09 [debug] 16414#0: *2 access phase: 9 2014/12/14 07:33:09 [debug] 16414#0: *2 access phase: 10 2014/12/14 07:33:09 [debug] 16414#0: *2 post access phase: 11 2014/12/14 07:33:09 [debug] 16414#0: *2 try files phase: 12 2014/12/14 07:33:09 [debug] 16414#0: *2 content phase: 13 2014/12/14 07:33:09 [debug] 16414#0: *2 content phase: 14 2014/12/14 07:33:09 [debug] 16414#0: *2 content phase: 15 2014/12/14 07:33:09 [debug] 16414#0: *2 content phase: 16 2014/12/14 07:33:09 [debug] 16414#0: *2 content phase: 17 2014/12/14 07:33:09 [debug] 16414#0: *2 content phase: 18 2014/12/14 07:33:09 [debug] 16414#0: *2 http filename: "/usr/share/nginx/test/test" 2014/12/14 07:33:09 [debug] 16414#0: *2 add cleanup: 00007F1B965D4E98 2014/12/14 07:33:09 [debug] 16414#0: *2 http static fd: -1 2014/12/14 07:33:09 [debug] 16414#0: *2 http dir 2014/12/14 07:33:09 [debug] 16414#0: *2 posix_memalign: 00007F1B96595200:4096 @16 2014/12/14 07:33:09 [debug] 16414#0: *2 http finalize request: 301, "/test?" a:1, c:1 2014/12/14 07:33:09 [debug] 16414#0: *2 http special response: 301, "/test?" 
2014/12/14 07:33:09 [debug] 16414#0: *2 http set discard body 2014/12/14 07:33:09 [debug] 16414#0: *2 xslt filter header 2014/12/14 07:33:09 [debug] 16414#0: *2 HTTP/1.1 301 Moved Permanently Server: nginx/1.6.2 Date: Sun, 14 Dec 2014 07:33:09 GMT Content-Type: text/html Content-Length: 184 Location: http://example.org/test/ Connection: keep-alive 2014/12/14 07:33:09 [debug] 16414#0: *2 write new buf t:1 f:0 00007F1B965952A0, pos 00007F1B965952A0, size: 196 file: 0, size: 0 2014/12/14 07:33:09 [debug] 16414#0: *2 http write filter: l:0 f:0 s:196 2014/12/14 07:33:09 [debug] 16414#0: *2 http output filter "/test?" 2014/12/14 07:33:09 [debug] 16414#0: *2 http copy filter: "/test?" 2014/12/14 07:33:09 [debug] 16414#0: *2 image filter 2014/12/14 07:33:09 [debug] 16414#0: *2 xslt filter body - N -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Dec 14 10:52:47 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 14 Dec 2014 10:52:47 +0000 Subject: stop automatic trailing slash addition In-Reply-To: References: <20141213080041.GI15670@daoine.org> Message-ID: <20141214105247.GK15670@daoine.org> On Sat, Dec 13, 2014 at 03:53:32PM -0800, neubyr wrote: > On Sat, Dec 13, 2014 at 12:00 AM, Francis Daly wrote: > > On Fri, Dec 12, 2014 at 04:00:29PM -0800, neubyr wrote: Hi there, > > > location /test { > > > root /usr/share/nginx/test; > > > } > > > > > > location /test/ { > > > root /usr/share/nginx/test-slash; > > > try_files $uri default.txt; > > > } > > What request do you make? (Presumably something like "curl -i > > http://localhost/test" or "curl -i http://localhost/testA") > > > > What response do you get? (A http redirect? Or perhaps the content of > > a particular file on your filesystem?) > > > > What response do you want? (The content of a different file on your > > filesystem? 
Name the files, so it is clear where the expectation and > > the result are different) Those questions still need answers, if you want someone else to be able to tell you how to configure nginx to do your non-standard thing. (If such a thing is even possible without extra coding.) The "standard" thing is: requests that map directly to the filesystem that match a directory that omit a trailing slash get a http 301 response to include the trailing slash. That is usually what is wanted. If you want something else, you'll have to be very clear about what it is that you want. The machine will do exactly what you configure it to do, not what you hope it will do. > It seems like nginx is adding slash as uri name matches with corresponding > directory and not file name. I thought nginx will return 404 in this case, > but it adds trailing slash when matching directory is found. I'm not sure why you expected a 404 if a non-slash request names an existing directory by default. That's not the common thing to want. You can get that by using (for example) try_files $uri =404; but that probably will not do what you want if the request *does* end in a slash. > After adding trailing slash uri becomes /test/ and hence it matches next > location block. One request is handled in one location. Depending on what actual configuration you used, the "/test/" url might count as a new subrequest and start a new location search, or it might not. See the documentation for whatever directive you are using (explicitly or implicitly.) For example using, try_files $uri $uri/index.html =404; in both /test and /test/ locations, a request for /test would send you the contents of the file /usr/share/nginx/test/test/index.html if it exists (unless other configuration says not to), while a request for /test/ would send you the contents of the file /usr/share/nginx/test-slash/test/index.html if it exists. (Your "try_files $uri default.txt;" is unlikely to give you a useful result.) 
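Spelled out against the two location{}s from the original question, that example would look like the following (a sketch only, untested):

```nginx
# Prefix locations from the original question, each using the
# try_files form suggested above.
location /test {
    root /usr/share/nginx/test;
    try_files $uri $uri/index.html =404;
}

location /test/ {
    root /usr/share/nginx/test-slash;
    try_files $uri $uri/index.html =404;
}
```

With that in place, a request for /test is answered from /usr/share/nginx/test/test/index.html if it exists, and a request for /test/ from /usr/share/nginx/test-slash/test/index.html, without the static module's directory redirect being involved.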
Good luck with it, f -- Francis Daly francis at daoine.org From jhihn at gmx.com Sun Dec 14 15:43:51 2014 From: jhihn at gmx.com (Jason H) Date: Sun, 14 Dec 2014 16:43:51 +0100 Subject: nginx is eating my client request - multipart/form-data file upload Message-ID: I am new to nginx, but am familiar with low-level HTTP and apache. When I try to do a multipart/form file upload, nginx writes some of the client request body to disk, but never finishes and it never passes it to the down/upstream script. My specific setup is I have nginx with /dyn redirected to localhost:1337, where a node.js instance is listening. It works... except for the file upload handler. Also in the config is a /debug which is redirected to localhost:1338, which goes to a simple socket dump server for viewing the post. I changed the error log handling to 'info'. It reports storing the client body to a file and when I examine it, it is almost as I expected:

--boundary_.oOo._MjM5NzEwOTkxMzU2MjA0NjM5MTQxNDA3MjYwOA==
Content-Type: image/jpeg
Content-Disposition: form-data; name="file"; filename="dccde7b5-25aa-4bb2-96a6-81e9358f2252.jpg"

The problem is that this file is too short: only 81,920 bytes (80k, exactly) when the file is 88,963 bytes; it should be 88,963 bytes plus the header above.... But that is literally only half of it. There are 2 files (about the same size, 90k) coming in, so I would expect that file to be about ~180k. What nginx is then doing is reassigning the request's http-level Content-Length to 357-ish bytes, then passing that header on to the script; that's it, and of course my script complains that it never finds --boundary_.oOo._MjM5NzEwOTkxMzU2MjA0NjM5MTQxNDA3MjYwOA== When I do the same request to my debug service without nginx in the middle, it correctly sends the data, and the http Content-Length is an appropriate 186,943 bytes (both files are around 90k, so this makes sense). My nginx config is default aside from what I've mentioned here.
Here is my experimenting to solve this issue:

client_max_body_size 2m;
client_body_in_file_only on;
client_body_in_single_buffer on;
client_body_buffer_size 1m;

I am not trying to have nginx strip the file attachments, I want them forwarded to the application. Any help would be appreciated. I'm perplexed as to why the file is limited to exactly 80k. Nothing is 80k anywhere in nginx. From nginx-forum at nginx.us Sun Dec 14 22:06:09 2014 From: nginx-forum at nginx.us (blackbic) Date: Sun, 14 Dec 2014 17:06:09 -0500 Subject: nginx & php-fpm [debug] 11: Resource temporarily unavailable In-Reply-To: References: Message-ID: <0938d72311a50fd63d2b677d00b5b47a.NginxMailingListEnglish@forum.nginx.org> Looks like if I disable my security plugin, then the error does not occur, but I still get the 500 error. Looks like my initial issue is related to the import plugin. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255477,255488#msg-255488 From nginx-forum at nginx.us Mon Dec 15 04:20:22 2014 From: nginx-forum at nginx.us (khav) Date: Sun, 14 Dec 2014 23:20:22 -0500 Subject: nginx + php-fpm = file not found In-Reply-To: <7105901c4f5ee2adc2622f592e29bab5.NginxMailingListEnglish@forum.nginx.org> References: <7105901c4f5ee2adc2622f592e29bab5.NginxMailingListEnglish@forum.nginx.org> Message-ID: I fixed it ... but for that I had to change my document root setup. When I was using /home/servergreek.com/public_html/www I got "no input file specified" (although the directory existed and had index.php in it). Then I did a server reinstall again, but I chose /var/www/servergreek.com/public_html/www. But why didn't /home/servergreek.com/public_html/www work? Btw I created both directories as shown (I did no chown on either of them):

mkdir -p /var/www/servergreek.com/public_html/www - worked
mkdir -p /home/servergreek.com/public_html/www - didn't work

Is an extra step required when using directories outside /var/www ?
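A quick way to compare two document roots component by component — the kind of check that usually answers this sort of question — is to print the permissions of every prefix of each path. A rough POSIX-sh sketch (/var/tmp below is only a stand-in for the real paths):

```shell
#!/bin/sh
# Print "ls -ld" for every component of a path, so differing
# owners or permission bits between two trees are easy to spot.
walk() {
    prefix=
    oldIFS=$IFS; IFS=/
    for comp in $1; do
        [ -n "$comp" ] || continue
        prefix="$prefix/$comp"
        ls -ld "$prefix"
    done
    IFS=$oldIFS
}

walk /var/tmp
```

Running it over both the /var/www/... and /home/... paths and comparing the output shows where the two chains diverge.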
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255466,255491#msg-255491 From francis at daoine.org Mon Dec 15 09:00:35 2014 From: francis at daoine.org (Francis Daly) Date: Mon, 15 Dec 2014 09:00:35 +0000 Subject: nginx + php-fpm = file not found In-Reply-To: References: <7105901c4f5ee2adc2622f592e29bab5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141215090035.GL15670@daoine.org> On Sun, Dec 14, 2014 at 11:20:22PM -0500, khav wrote: Hi there,

> mkdir -p /var/www/servergreek.com/public_html/www - worked
> mkdir -p /home/servergreek.com/public_html/www - didn't work
>
> Is an extra step required when using directories outside /var/www ?

Your nginx server runs as one user/group. Your php server runs as one user/group. If you look at the output of each of

ls -ld /
ls -ld /var
ls -ld /var/www
ls -ld /var/www/servergreek.com
ls -ld /var/www/servergreek.com/public_html
ls -ld /var/www/servergreek.com/public_html/www
ls -ld /home
ls -ld /home/servergreek.com
ls -ld /home/servergreek.com/public_html
ls -ld /home/servergreek.com/public_html/www

are any of them different from the others? Particularly between the "working" and "not working" set, and probably in the last of the ten fields that probably have "drwx" as the first four. If the user/group running nginx is unable to access a file, it will lead to an error. If the user/group running php is unable to access a file, it will lead to an error. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Mon Dec 15 12:45:06 2014 From: nginx-forum at nginx.us (akamatgi) Date: Mon, 15 Dec 2014 07:45:06 -0500 Subject: Issue in ngx_http_parse_request_line Message-ID: Hi, I can see an issue in assigning the port_start and port_end members of ngx_http_request_t inside ngx_http_parse_request_line(). If the request line has an absolute URI with an explicit port specified, then port_end is set correctly inside ngx_http_parse_request_line(): ...
case sw_port:
    if (ch >= '0' && ch <= '9') {
        break;
    }

    switch (ch) {
    case '/':
        r->port_end = p;
        r->uri_start = p;
        state = sw_after_slash_in_uri;
        break;
...

However, r->port_start is still 0. I think the following code should be modified:

...
case sw_host_end:

    r->host_end = p;

    switch (ch) {
    case ':':
+       r->port_start = (p + 1);
        state = sw_port;
        break;
...

Let me know if it sounds OK. Regards, -anirudh Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255493,255493#msg-255493 From mdounin at mdounin.ru Mon Dec 15 12:52:14 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Dec 2014 15:52:14 +0300 Subject: nginx is eating my client request - multipart/form-data file upload In-Reply-To: References: Message-ID: <20141215125214.GE45960@mdounin.ru> Hello! On Sun, Dec 14, 2014 at 04:43:51PM +0100, Jason H wrote: > I am new to nginx, but am familiar with low-level HTTP and > apache. When I try to do a multipart/form file upload, nginx > writes some of the client request body to disk, but never > finishes and it never passes it to the down/upstream script. > > My specific setup is I have nginx with /dyn redirected to > localhost:1337, where a node.js instance is listening. It > works... except for the file upload handler. Also in the config > is a /debug which is redirected to localhost:1338, which goes to > a simple socket dump server for viewing the post. > > I changed the error log handling to 'info'. It reports storing > the client body to a file and when I examine it, it is almost as > I expected: > > --boundary_.oOo._MjM5NzEwOTkxMzU2MjA0NjM5MTQxNDA3MjYwOA== > Content-Type: image/jpeg > Content-Disposition: form-data; name="file"; > filename="dccde7b5-25aa-4bb2-96a6-81e9358f2252.jpg" > > > > The problem with this file is too short, only 81,920 bytes (only > 80k, exactly) when the file is 88,963 bytes, it should be 88,963 > + the header above.... But that is literally only half of it. [...]
Could you please provide "nginx -V" output, full config, and a debugging log for a request which demonstrates the problem? See http://wiki.nginx.org/Debugging for some hints. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Mon Dec 15 13:01:50 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 15 Dec 2014 16:01:50 +0300 Subject: Issue in ngx_http_parse_request_line In-Reply-To: References: Message-ID: <30186884.9ij2bX1QCv@vbart-workstation> On Monday 15 December 2014 07:45:06 akamatgi wrote: > Hi, > I can see an issue in assigning the port_start and port_end members of > ngx_http_request_t inside ngx_http_parse_request_line(). > If the request line has a absolute URI with explicit port specified, then > port_end is set correctly inside ngx_http_parse_request_line(): > ... > case sw_port: > if (ch >= '0' && ch <= '9') { > break; > } > > switch (ch) { > case '/': > r->port_end = p; > r->uri_start = p; > state = sw_after_slash_in_uri; > break; > ... > > However, r->port_start is still 0. > I think the following code should be modified: > ... > case sw_host_end: > > r->host_end = p; > > switch (ch) { > case ':': > + r->port_start = (p + 1); > state = sw_port; > break; > ... > > Let me know if it sounds OK. While it may be the right change, please note that r->port_start and r->port_end aren't used anywhere and they are just a dead code. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Mon Dec 15 13:05:19 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Dec 2014 16:05:19 +0300 Subject: Issue in ngx_http_parse_request_line In-Reply-To: References: Message-ID: <20141215130519.GF45960@mdounin.ru> Hello! On Mon, Dec 15, 2014 at 07:45:06AM -0500, akamatgi wrote: > Hi, > I can see an issue in assigning the port_start and port_end members of > ngx_http_request_t inside ngx_http_parse_request_line(). 
> If the request line has a absolute URI with explicit port specified, then > port_end is set correctly inside ngx_http_parse_request_line(): Both port_start and port_end are stubs and not currently used. -- Maxim Dounin http://nginx.org/ From jhihn at gmx.com Mon Dec 15 15:21:07 2014 From: jhihn at gmx.com (Jason H) Date: Mon, 15 Dec 2014 16:21:07 +0100 Subject: nginx is eating my client request - multipart/form-data file upload In-Reply-To: <20141215125214.GE45960@mdounin.ru> References: , <20141215125214.GE45960@mdounin.ru> Message-ID: As requested. 100k debug log attached. I didn't see anything obviously wrong in the log. Thanks for your help. nginx -V: -------------------------------- nginx version: nginx/1.6.2 built by gcc 4.8.2 20140120 (Red Hat 4.8.2-16) (GCC) TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/var/run/nginx.pid --lock-path=/var/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_spdy_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module --with-http_image_filter_module --with-http_geoip_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_stub_status_module --with-http_perl_module --with-mail --with-mail_ssl_module --with-pcre --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 
-fexceptions -fstack-protector --param=ssp-buffer-size=4 -m64 -mtune=generic' --with-ld-opt=' -Wl,-E' ----------------end output

nginx.conf:------------------------------

user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log info;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
}

http {
    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    keepalive_timeout 65;
    include /etc/nginx/conf.d/*.conf;
    index index.html index.htm;

    # issue testing
    client_max_body_size 2m;
    client_body_in_file_only on;
    client_body_in_single_buffer on;
    client_body_buffer_size 1m;
    # end issue testing

    server {
        listen 80;
        server_name dev.tissue-analytics.com localhost;
        root /usr/share/nginx/html;

        location / {
            root /data/www;
        }

        location /dyn {
            proxy_pass http://127.0.0.1:1337;
        }

        location /debug {
            proxy_pass http://127.0.0.1:1338;
        }

        error_page 404 /404.html;
        location = /40x.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }
    }
}

------------------ end conf

> Sent: Monday, December 15, 2014 at 7:52 AM > From: "Maxim Dounin" > To: nginx at nginx.org > Subject: Re: nginx is eating my client request - multipart/form-data file upload > > Hello! > > On Sun, Dec 14, 2014 at 04:43:51PM +0100, Jason H wrote: > > > I am new to nginx, but am familiar with low-level HTTP and > > apache. When I try to do a multipart/form file upload, nginx > > writes some of the client request body to disk, but never > > finishes and it never passes it to the down/upstream script. > > > > My specific setup is I have nginx with /dyn redirected to > > localhost:1337, where a node.js instance is listening. It > > works... except for the file upload handler.
Also in the config > > is a /debug which is redirected to localhost:1338, which goes to > > a simple socket dump server for viewing the post. > > > > I changed the error log handling to 'info'. It reports storing > > the client body to a file and when I examine it, it is almost as > > I expected: > > > > --boundary_.oOo._MjM5NzEwOTkxMzU2MjA0NjM5MTQxNDA3MjYwOA== > > Content-Type: image/jpeg > > Content-Disposition: form-data; name="file"; > > filename="dccde7b5-25aa-4bb2-96a6-81e9358f2252.jpg" > > > > > > > > The problem with this file is too short, only 81,920 bytes (only > > 80k, exactly) when the file is 88,963 bytes, it should be 88,963 > > + the header above.... But that is literally only half of it. > > [...] > > Could you please provide "nginx -V" output, full config, and a > debugging log for a request which demonstrates the problem? > > See http://wiki.nginx.org/Debugging for some hints. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- A non-text attachment was scrubbed... Name: debug.log Type: text/x-log Size: 105925 bytes Desc: not available URL: From mdounin at mdounin.ru Mon Dec 15 16:54:46 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Dec 2014 19:54:46 +0300 Subject: nginx is eating my client request - multipart/form-data file upload In-Reply-To: References: <20141215125214.GE45960@mdounin.ru> Message-ID: <20141215165446.GI45960@mdounin.ru> Hello! On Mon, Dec 15, 2014 at 04:21:07PM +0100, Jason H wrote: > As requested. 100k debug log attached. I didn't see anything obviously wrong in the log. Some comments about the log below. [...] 
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http process request line
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http request line: "POST /debug/visit/files HTTP/1.1"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http uri: "/debug/visit/files"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http args: ""
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http exten: ""
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http process request header line
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "Content-Type: multipart/form-data; boundary="boundary_.oOo._MTMyNDQwMjY1MA==MTgwNjg0NzQyMQ==MTM2OTc5MjA=""
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "MIME-Version: 1.0"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "Content-Length: 216619"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "Connection: Keep-Alive"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "Accept-Encoding: gzip, deflate"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "Accept-Language: en-US,*"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "User-Agent: Mozilla/5.0"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header: "Host: dev.tissue-analytics.com"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http header done

Good so far, Content-Length is 216619 bytes.

[...]

> 2014/12/15 15:10:37 [debug] 21192#0: *10 http request body content length filter
> 2014/12/15 15:10:37 [debug] 21192#0: *10 malloc: 0000000000E7E610:216619
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http read client request body
> 2014/12/15 15:10:37 [debug] 21192#0: *10 recv: fd:10 -1 of 216619
> 2014/12/15 15:10:37 [debug] 21192#0: *10 recv() not ready (11: Resource temporarily unavailable)
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http client request body recv -2
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http client request body rest 216619
> 2014/12/15 15:10:37 [debug] 21192#0: *10 event timer add: 10: 60000:1418656297413
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http finalize request: -4, "/debug/visit/files?" a:1, c:2

Here nginx started reading the request body.

[...]

> 2014/12/15 15:10:37 [debug] 21192#0: *10 http run request: "/debug/visit/files?"
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http read client request body
> 2014/12/15 15:10:37 [debug] 21192#0: *10 recv: fd:10 1424 of 216619
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http client request body recv 1424
> 2014/12/15 15:10:37 [debug] 21192#0: *10 http client request body rest 216619

First 1424 bytes received.

[...]

> 2014/12/15 15:10:38 [debug] 21192#0: epoll timer: 58359
> 2014/12/15 15:10:38 [debug] 21192#0: epoll: fd:10 ev:0001 d:00007F5631AD21C1
> 2014/12/15 15:10:38 [debug] 21192#0: *10 http run request: "/debug/visit/files?"
> 2014/12/15 15:10:38 [debug] 21192#0: *10 http read client request body
> 2014/12/15 15:10:38 [debug] 21192#0: *10 recv: fd:10 944 of 119259
> 2014/12/15 15:10:38 [debug] 21192#0: *10 http client request body recv 944
> 2014/12/15 15:10:38 [debug] 21192#0: *10 http client request body rest 216619

And here is the last successful recv(), of 944 bytes. Counting all the recvs in total, nginx was able to receive 98304 bytes of the body.

> 2014/12/15 15:10:38 [debug] 21192#0: *10 recv: fd:10 -1 of 118315
> 2014/12/15 15:10:38 [debug] 21192#0: *10 recv() not ready (11: Resource temporarily unavailable)
> 2014/12/15 15:10:38 [debug] 21192#0: *10 http client request body recv -2
> 2014/12/15 15:10:38 [debug] 21192#0: *10 http client request body rest 216619
> 2014/12/15 15:10:38 [debug] 21192#0: *10 event timer: 10, old: 1418656298021, new: 1418656298162
> 2014/12/15 15:10:38 [debug] 21192#0: timer delta: 6
> 2014/12/15 15:10:38 [debug] 21192#0: posted events 0000000000000000
> 2014/12/15 15:10:38 [debug] 21192#0: worker cycle
> 2014/12/15 15:10:38 [debug] 21192#0: epoll timer: 58353

As expected, nginx still tries to read the rest of the body, 118315 bytes.

[...]

> 2014/12/15 15:11:38 [debug] 21192#0: timer delta: 1450
> 2014/12/15 15:11:38 [debug] 21192#0: *10 event timer del: 10: 1418656298021
> 2014/12/15 15:11:38 [debug] 21192#0: *10 http run request: "/debug/visit/files?"
> 2014/12/15 15:11:38 [debug] 21192#0: *10 http finalize request: 408, "/debug/visit/files?" a:1, c:1

The client_body_timeout is triggered and the connection is closed. That is, from nginx's point of view the client failed to send the request and unexpectedly stopped at some point. It's not clear why this happened, but the undersized last packet suggests that it's the client's fault. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Dec 15 17:28:39 2014 From: nginx-forum at nginx.us (jurerickporras) Date: Mon, 15 Dec 2014 12:28:39 -0500 Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused In-Reply-To: <3280a3c67bd0e1fa317d000cb391cd68.NginxMailingListEnglish@forum.nginx.org> References: <20141210172016.GN45960@mdounin.ru> <3280a3c67bd0e1fa317d000cb391cd68.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have the same problem.
How to fix this? Please help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255408,255502#msg-255502 From jhihn at gmx.com Mon Dec 15 17:58:25 2014 From: jhihn at gmx.com (Jason H') Date: Mon, 15 Dec 2014 18:58:25 +0100 Subject: nginx is eating my client request - multipart/form-data file upload Message-ID: An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 15 18:11:37 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Dec 2014 21:11:37 +0300 Subject: nginx with proxy_cache_use_stale not returning from cache when connection refused In-Reply-To: References: <20141210172016.GN45960@mdounin.ru> <3280a3c67bd0e1fa317d000cb391cd68.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141215181137.GJ45960@mdounin.ru> Hello! On Mon, Dec 15, 2014 at 12:28:39PM -0500, jurerickporras wrote: > i have the same problem . how to fix this please help As already explained in this thread, the "fix" is to make sure the response is in cache. If it's there, nginx will return it once configured to do so. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon Dec 15 19:48:03 2014 From: nginx-forum at nginx.us (sandeepkolla99) Date: Mon, 15 Dec 2014 14:48:03 -0500 Subject: Efficient CRL checking at Nginx Message-ID: <35d2dddea9bcfac668fce75e86f81402.NginxMailingListEnglish@forum.nginx.org> Hi, I want to check the validity of a client certificate against a CRL. So, I have defined the following in nginx.conf:

listen 80;
listen 443 ssl;
server_name localhost;
ssl_certificate serverCert.pem;
ssl_certificate_key serverKey.key;
ssl_client_certificate RootCA.pem;
ssl_verify_client on;
ssl_verify_depth 2;
ssl_crl CrlFile.pem;

With nginx.conf written this way, it works fine. My application is expected to process a huge number of requests every day, and for each request the client certificate's validity is checked against CrlFile.pem (specified at ssl_crl). 1.
Does it effect servers response time because each time it has to open and read CrlFile.pem?. My CrlFile.pem will be updated once a day as per my requirement. So, 2. Is there any caching mechanism performed by Nginx to cache CrlFile.pem because It has a new copy only once a day?. 3. Could you please help me in figuring out the best practice for validating client certificate against CRL. Regards, Sandeep Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255509,255509#msg-255509 From ykirpichev at gmail.com Mon Dec 15 20:05:03 2014 From: ykirpichev at gmail.com (Yury Kirpichev) Date: Tue, 16 Dec 2014 00:05:03 +0400 Subject: SPDY: nginx/1.6.2: proxy_pass does not work when https is used Message-ID: Hi, I've got a problem when tried to proxy spdy traffic to host via https protocol. My config is simple like that: location /https/test { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_pass https://www.something.com/test; } When request is performed through HTTP protocol, everything works fine without any problem. However, when incoming request is done through SPDY, there is no response from remote peer in about 10 seconds and connection is closed after that by client. As a short term solution, I've found the following workaround in order to resolve the problem: location /https/test { proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_pass http://localhost/internal/https/test; } location /internal/https/test { proxy_set_header X-Real-IP $remote_addr; proxy_pass https://www.something.com/test; } However, in a long term, it would be great to have this problem fixed in nginx and avoid any workaround in config files. BR/ Yury -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Mon Dec 15 20:28:52 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 Dec 2014 23:28:52 +0300 Subject: Efficient CRL checking at Nginx In-Reply-To: <35d2dddea9bcfac668fce75e86f81402.NginxMailingListEnglish@forum.nginx.org> References: <35d2dddea9bcfac668fce75e86f81402.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141215202852.GK45960@mdounin.ru> Hello! On Mon, Dec 15, 2014 at 02:48:03PM -0500, sandeepkolla99 wrote: > Hi, > I want to check the validity of a client certificate against CRL. So, I > have defined in nginx.cong as follows > > listen 80; > listen 443 ssl; > server_name localhost; > ssl_certificate serverCert.pem; > ssl_certificate_key serverKey.key; > ssl_client_certificate RootCA.pem; > ssl_verify_client on; > ssl_verify_depth 2; > ssl_crl CrlFile.pem; > > If I write my nginx.conf as follows, It works fine. My application is > expected to process a huge number of requests everyday and for each > time(request) client certificate validity is checked against CrlFile.pem > (specified at ssl_crl). 1. Does it effect servers response time because > each time it has to open and read CrlFile.pem?. No. The CRL file is loaded into memory when loading a configuration. > My CrlFile.pem will be updated once a day as per my requirement. So, > 2. Is there any caching mechanism performed by Nginx to cache CrlFile.pem > because It has a new copy only once a day?. See above. For changes to be applied, you'll have to reload nginx configuration. 
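Since the CRL is loaded into memory only when the configuration is loaded, a once-a-day update reduces to replacing the file and reloading nginx. A hypothetical cron entry sketching this (the CA URL and paths are invented for illustration):

```crontab
# Fetch the day's CRL atomically, then reload nginx so the new file is read.
30 4 * * * curl -fsS -o /etc/nginx/CrlFile.pem.new https://ca.example.com/crl.pem && mv /etc/nginx/CrlFile.pem.new /etc/nginx/CrlFile.pem && nginx -s reload
```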
-- Maxim Dounin http://nginx.org/ From ben at indietorrent.org Tue Dec 16 00:03:24 2014 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 15 Dec 2014 19:03:24 -0500 Subject: nginx: [emerg] unknown directive "upload_pass" after dist-upgrade from Ubuntu 12.04 LTS to 14.04 LTS In-Reply-To: <2558436.XRGpHQqAf4@vbart-laptop> References: <53F696B3.4090001@indietorrent.org> <4787270.sAYXIEeGid@vbart-laptop> <548CD2BE.4040708@indietorrent.org> <2558436.XRGpHQqAf4@vbart-laptop> Message-ID: <548F76CC.3010707@indietorrent.org> On 12/13/2014 7:10 PM, Valentin V. Bartenev wrote: > On Saturday 13 December 2014 18:58:54 Ben Johnson wrote: > [..] >> Hello, >> >> I apologize for the 4-month delay in responding. :) >> >> In particular, I need to have the ability to track upload progress in a >> manner that is conducive to displaying the percentage complete via >> progress bar. >> >> Is this still possible, absent the defunct module at >> http://wiki.nginx.org/HttpUploadProgressModule ? >> > > You can easily implement it on the browser side using the progress events > of XMLHttpRequest, which is supported by all modern browsers. > > wbr, Valentin V. Bartenev Thank you for the continued assistance, Valentin. With regard to implementing upload progress tracking on the client side, two concerns come to mind: 1.) The inability to resume file uploads (is this possible with a 100%-client-side implementation?) 2.) Progress reporting accuracy, in the event that upload resumption is even possible with a client-side-only implementation I'm dealing with sizable files (in excess of 1GB), and I really need to support upload resumption when an upload attempt fails for any reason. It's not clear whether client-side upload progress tracking would be able to display an accurate progress bar when, say, only 50% of the file must be uploaded during a given attempt. 
Currently, I am using Nginx upload module (v 2.2.0) with Nginx HTTP Upload Progress module, which, in combination, do everything that I require. I really wish that these modules could be made to work with the newest versions of NGINX. I'm curious if we're talking a couple of small "tweaks", or if these modules would need to be largely rewritten to work with the latest versions of NGINX. Thanks again for sharing your valuable insights. -Ben From vbart at nginx.com Tue Dec 16 00:18:01 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Dec 2014 03:18:01 +0300 Subject: SPDY: nginx/1.6.2: proxy_pass does not work when https is used In-Reply-To: References: Message-ID: <1568239.33izKsruFB@vbart-laptop> On Tuesday 16 December 2014 00:05:03 Yury Kirpichev wrote: > Hi, > > I've got a problem when tried to proxy spdy traffic to host via https > protocol. > > My config is simple like that: > > > location /https/test { > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header Host $host; > > proxy_pass https://www.something.com/test; > > } > > > When request is performed through HTTP protocol, everything works fine > without any problem. > > However, when incoming request is done through SPDY, there is no response > from remote peer in about 10 seconds and connection is closed after that by > client. [..] I can't reproduce the problem with your simple config. It just works in both cases with or w/o SPDY. Could you please provide more information like "nginx -V" output and debug log: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From pchychi at gmail.com Tue Dec 16 01:17:20 2014 From: pchychi at gmail.com (Payam Chychi) Date: Mon, 15 Dec 2014 17:17:20 -0800 Subject: SPDY: nginx/1.6.2: proxy_pass does not work when https is used In-Reply-To: <1568239.33izKsruFB@vbart-laptop> References: <1568239.33izKsruFB@vbart-laptop> Message-ID: <548F8820.5050800@gmail.com> On 2014-12-15, 4:18 PM, Valentin V. 
Bartenev wrote: > On Tuesday 16 December 2014 00:05:03 Yury Kirpichev wrote: >> Hi, >> >> I've got a problem when tried to proxy spdy traffic to host via https >> protocol. >> >> My config is simple like that: >> >> >> location /https/test { >> >> proxy_set_header X-Real-IP $remote_addr; >> >> proxy_set_header Host $host; >> >> proxy_pass https://www.something.com/test; >> >> } >> >> >> When request is performed through HTTP protocol, everything works fine >> without any problem. >> >> However, when incoming request is done through SPDY, there is no response >> from remote peer in about 10 seconds and connection is closed after that by >> client. > [..] > > I can't reproduce the problem with your simple config. > It just works in both cases with or w/o SPDY. > > Could you please provide more information like "nginx -V" output > and debug log: http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Can you provide logs please? can you see that the connection is being processed by https and packets sent to the remote host? is your local system setup with proper dns (if you are using a hostname)... firewall, anything in your hostdeny? From nginx-forum at nginx.us Tue Dec 16 05:58:26 2014 From: nginx-forum at nginx.us (akamatgi) Date: Tue, 16 Dec 2014 00:58:26 -0500 Subject: Issue in ngx_http_parse_request_line In-Reply-To: <20141215130519.GF45960@mdounin.ru> References: <20141215130519.GF45960@mdounin.ru> Message-ID: <275bb8fe83d4f7726086111f25e19834.NginxMailingListEnglish@forum.nginx.org> Hi Maxim/Valentin, Thanks for your response. Both port_start and port_end are being used by the mod_security nginx module. Failure to set port_start is causing a Segmentation Violation when mod_security is enabled and a request with absolute URI is sent. 
Thanks, -anirudh Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255493,255516#msg-255516 From anoopalias01 at gmail.com Tue Dec 16 06:00:28 2014 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 16 Dec 2014 11:30:28 +0530 Subject: nginx control panel Message-ID: Hi, One of the problems that nginx had in adoption was the lack of support in mainstream webhosting control panels, which power a very large chunk of the Internet. I would like to announce http://ndeploy.in/ which can be used as a full nginx control panel for deploying PHP-FPM, PYTHON, RUBY, COLDFUSION, NODEJS, JSP etc. It works as a module for the popular cPanel control panel, as there is a lot of stuff we didn't want to replicate in an entirely new control panel, and it's easier to reach a good userbase with an existing panel. nDeploy has an rpm-based installer which works very well for easy install, upgrade and uninstall of the software. Install nDeploy, shut down Apache, and enjoy the power of nginx :) -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From ykirpichev at gmail.com Tue Dec 16 08:48:37 2014 From: ykirpichev at gmail.com (Yury Kirpichev) Date: Tue, 16 Dec 2014 11:48:37 +0300 Subject: SPDY: nginx/1.6.2: proxy_pass does not work when https is used In-Reply-To: <1568239.33izKsruFB@vbart-laptop> References: <1568239.33izKsruFB@vbart-laptop> Message-ID: Hi, Here is the full config; I tried to make it as small as possible.

worker_processes 12;

events {
    worker_connections 8192;
    use epoll;
}

http {
    server {
        listen [::]:6121 spdy;
        listen [::]:80;
        client_body_buffer_size 100k;
        client_max_body_size 100k;
        server_name *.maps.dev.yandex.net;

        location /https/test {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $host;
            proxy_pass https://www.google.com/test;
        }
    }
}

Also, I've found that if I comment out the worker_processes line, the problem disappears.
Here is output from /usr/sbin/nginx -V nginx version: nginx/1.6.2 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-log-path=/var/log/nginx/access.log --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --lock-path=/var/lock/nginx.lock --pid-path=/var/run/nginx.pid --with-debug --with-http_addition_module --with-http_flv_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_gunzip_module --with-http_image_filter_module --with-http_perl_module --with-http_realip_module --with-http_stub_status_module --with-http_ssl_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-ipv6 --with-sha1=/usr/include/openssl --with-md5=/usr/include/openssl --with-mail --with-mail_ssl_module --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-echo --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-headers-more --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-development-kit --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-lua --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-upstream-fair --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-flv-filter --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-ip-tos-filter --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-addtag-exe --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-speedtest --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-eblob 
--add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-request-id --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-favicon --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-auth-sign Unfortunately, I can not collect debug logs right now. However, I did captured tcpdump on server with nginx and was able to see that nginx established SSL connection with google.com, however, after that it got stuck somewhere (can provide logs if you need it) BR/ Yury 2014-12-16 3:18 GMT+03:00 Valentin V. Bartenev : > > On Tuesday 16 December 2014 00:05:03 Yury Kirpichev wrote: > > Hi, > > > > I've got a problem when tried to proxy spdy traffic to host via https > > protocol. > > > > My config is simple like that: > > > > > > location /https/test { > > > > proxy_set_header X-Real-IP $remote_addr; > > > > proxy_set_header Host $host; > > > > proxy_pass https://www.something.com/test; > > > > } > > > > > > When request is performed through HTTP protocol, everything works fine > > without any problem. > > > > However, when incoming request is done through SPDY, there is no response > > from remote peer in about 10 seconds and connection is closed after that > by > > client. > [..] > > I can't reproduce the problem with your simple config. > It just works in both cases with or w/o SPDY. > > Could you please provide more information like "nginx -V" output > and debug log: http://nginx.org/en/docs/debugging_log.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mr.katsan at gmail.com Tue Dec 16 10:04:42 2014 From: mr.katsan at gmail.com (Mr Katsan) Date: Tue, 16 Dec 2014 12:04:42 +0200 Subject: proxy whitelist, exclude domain Message-ID: Hello! 
My nginx is configured as proxy and I want to know how to exclude specific domain from proxy_pass. Currently I'm using 'if', any other ideas? config - http://p.ngx.cc/654d77baf333aeb2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue Dec 16 10:09:21 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Dec 2014 13:09:21 +0300 Subject: SPDY: nginx/1.6.2: proxy_pass does not work when https is used In-Reply-To: References: <1568239.33izKsruFB@vbart-laptop> Message-ID: <4523122.zaCW3QYTN5@vbart-laptop> On Tuesday 16 December 2014 11:48:37 Yury Kirpichev wrote: > worker_processes 12; > > events { > worker_connections 8192; > use epoll; > } > > http { > server { > listen [::]:6121 spdy; > listen [::]:80; > > client_body_buffer_size 100k; > client_max_body_size 100k; > > server_name *.maps.dev.yandex.net; > > location /https/test { > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header Host $host; > proxy_pass https://www.google.com/test; > } > } > } I've tested exactly with this config and see no problem: % ./spdycat --spdy3-1 --no-tls 'http://[::1]:6121/https/test' -v [ 0.000] Handshake complete [ 0.000] send SYN_STREAM frame (stream_id=1, assoc_stream_id=0, pri=3) :host: [::1]:6121 :method: GET :path: /https/test :scheme: http :version: HTTP/1.1 accept: */* accept-encoding: gzip, deflate user-agent: spdylay/1.2.3 [ 0.001] recv SETTINGS frame (niv=2) [4(0):100] [7(0):2147483647] [ 0.001] recv WINDOW_UPDATE frame (stream_id=0, delta_window_size=2147418111) [ 0.459] recv SYN_REPLY frame (stream_id=1) :status: 404 Not Found :version: HTTP/1.1 content-length: 1429 content-type: text/html; charset=UTF-8 date: Tue, 16 Dec 2014 09:59:46 GMT server: nginx/1.6.2 Error 404 (Not Found)!!1

404. That's an error.

The requested URL /test was not found on this server. That's all we know. [ 0.459] recv DATA frame (stream_id=1, flags=1, length=1429) [ 0.459] send GOAWAY frame (last_good_stream_id=0) > > Also, I've found that if I comment out the line with worker_processes then > the problem will disappear. > > Here is output from /usr/sbin/nginx -V > nginx version: nginx/1.6.2 > TLS SNI support enabled > configure arguments: [..] > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-echo > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-headers-more > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-development-kit > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-lua > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-upstream-fair > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-flv-filter > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-ip-tos-filter > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-addtag-exe > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-speedtest > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-eblob > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-request-id > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-favicon > --add-module=/home/buildfarm/teamcity/projects/nginx-stable/debian/modules/nginx-auth-sign > The problem can be in one of these 3rd-party modules. You should try without them first. wbr, Valentin V. Bartenev From vbart at nginx.com Tue Dec 16 10:18:44 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Dec 2014 13:18:44 +0300 Subject: Issue in ngx_http_parse_request_line In-Reply-To: <275bb8fe83d4f7726086111f25e19834.NginxMailingListEnglish@forum.nginx.org> References: <20141215130519.GF45960@mdounin.ru> <275bb8fe83d4f7726086111f25e19834.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2851021.1Q9xj8CXJC@vbart-laptop> On Tuesday 16 December 2014 00:58:26 akamatgi wrote: > Hi Maxim/Valentin, > Thanks for your response. > Both port_start and port_end are being used by the mod_security nginx > module. > Failure to set port_start is causing a Segmentation Violation when > mod_security is enabled and a request with absolute URI is sent. Well, that means that mod_security needs to be fixed. And it already is: https://github.com/SpiderLabs/ModSecurity/commit/33b8760e87b7441142a431175d5b459245551314 wbr, Valentin V.
Bartenev From nginx-forum at nginx.us Tue Dec 16 10:36:38 2014 From: nginx-forum at nginx.us (talkingnews) Date: Tue, 16 Dec 2014 05:36:38 -0500 Subject: fastcgi_cache_purge line in config causes segmentation fault since 1.7.8 in nginx-extras Message-ID: Hello; for many months I've been running the config shown here: https://rtcamp.com/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/ This morning, I apt-get upgraded and got nginx 1.7.8 and a load of dead sites. running nginx -t gave me a segmentation fault (core dumped) error. After a whole load of panicking and commenting one line out at a time, I finally tracked it down to this section: location ~ /purge(/.*) { fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; } If I comment that out, I can start up wordpress and nginx -t doesn't core dump. I can't see anything obvious in the release notes that would cause this, server details are below: Linux 3.16.0-28-generic #37-Ubuntu SMP Mon Dec 8 17:22:00 UTC 2014 i686 i686 i686 GNU/Linux nginx version: nginx/1.7.8 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -Wl,-z,relro' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_flv_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module 
--with-http_mp4_module --with-http_perl_module --with-http_random_index_module --with-http_secure_link_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.7.8/debian/modules/headers-more-nginx-module --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-cache-purge --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-development-kit --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.7.8/debian/modules/ngx-fancyindex --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-http-push --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-lua --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-upload-progress --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.7.8/debian/modules/ngx_http_substitutions_filter_module Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255522,255522#msg-255522 From vbart at nginx.com Tue Dec 16 10:47:53 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 16 Dec 2014 13:47:53 +0300 Subject: fastcgi_cache_purge line in config causes segmentation fault since 1.7.8 in nginx-extras In-Reply-To: References: Message-ID: <6467993.I48coaCffl@vbart-laptop> On Tuesday 16 December 2014 05:36:38 talkingnews wrote: > Hello; > > for many months I've been running the config shown here: > https://rtcamp.com/wordpress-nginx/tutorials/single-site/fastcgi-cache-with-purging/ > > This morning, I apt-get upgraded and got nginx 1.7.8 and a load of dead > sites. running nginx -t gave me a segmentation fault (core dumped) error. 
> > After a whole load of panicking and commenting one line out at a time, I > finally tracked it down to this section: > > location ~ /purge(/.*) { > fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; > } > > If I comment that out, I can start up wordpress and nginx -t doesn't core > dump. > > I can't see anything obvious in the release notes that would cause this, [..] https://github.com/FRiCKLE/ngx_cache_purge/blob/master/CHANGES Since you're using a lot of 3rd-party modules it's a good idea to update them too. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue Dec 16 11:21:29 2014 From: nginx-forum at nginx.us (talkingnews) Date: Tue, 16 Dec 2014 06:21:29 -0500 Subject: fastcgi_cache_purge line in config causes segmentation fault since 1.7.8 in nginx-extras In-Reply-To: <6467993.I48coaCffl@vbart-laptop> References: <6467993.I48coaCffl@vbart-laptop> Message-ID: <2a398954e66cef8857c0254824845558.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: ------------------------------------------------------- > https://github.com/FRiCKLE/ngx_cache_purge/blob/master/CHANGES > > Since you're using a lot of 3rd-party modules it's a good idea to > update them too. Not sure how I can do this when I'm using http://ppa.launchpad.net/nginx/development/ubuntu utopic main As far as I can see, the nginx-extras package was built 15 hours ago. From the link you posted: 2014-12-02 VERSION 2.2 * Fix compatibility with nginx-1.7.8+. From nginx -V: --add-module=/build/buildd/nginx-1.7.8/debian/modules/nginx-cache-purge So, it SHOULD be up to date. It sounds like I need to post a ticket on github - so I just have!
https://github.com/FRiCKLE/ngx_cache_purge/issues/25 Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255522,255524#msg-255524 From smntov at gmail.com Tue Dec 16 11:40:43 2014 From: smntov at gmail.com (ST) Date: Tue, 16 Dec 2014 13:40:43 +0200 Subject: Problems with nginx-extras on Debian Wheezy Message-ID: <1418730043.11763.208.camel@debox> Hi, I have installed nginx-extras on Debian Wheezy. nginx -V shows: --with-http_mp4_module , but if I write mp4; in location / {} it says on restart: nginx: [emerg] unknown directive " mp4" . the same with ssl_protocols directive which I put in nginx.conf - http {} ... Why? Here some pastes: [ http://p.ngx.cc/4f ] - sites-available [ http://p.ngx.cc/f7 ] - nginx.conf [ http://p.ngx.cc/dd ] - nginx -V Thank you From nginx-forum at nginx.us Tue Dec 16 11:51:47 2014 From: nginx-forum at nginx.us (talkingnews) Date: Tue, 16 Dec 2014 06:51:47 -0500 Subject: fastcgi_cache_purge line in config causes segmentation fault since 1.7.8 in nginx-extras In-Reply-To: <2a398954e66cef8857c0254824845558.NginxMailingListEnglish@forum.nginx.org> References: <6467993.I48coaCffl@vbart-laptop> <2a398954e66cef8857c0254824845558.NginxMailingListEnglish@forum.nginx.org> Message-ID: For anyone following this, I had a reply on the repo bug ticket saying: > I just downloaded nginx_1.7.8-1+utopic1.debian.tar.gz sources from PPA and it looks that unfortunately it doesn't use the latest version of the module. Please contact PPA package maintainer (teward) and ask him to update it. So, I've just done that - I'll let you know what the outcome is - thanks! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255522,255526#msg-255526 From smntov at gmail.com Tue Dec 16 11:58:40 2014 From: smntov at gmail.com (ST) Date: Tue, 16 Dec 2014 13:58:40 +0200 Subject: Problems with nginx-extras on Debian Wheezy Message-ID: <1418731120.11763.211.camel@debox> good people from the IRC channel (thresh) solved the problem: I was copy/pasting the config lines from GoogleDocs and it seemingly has problematic chars for spaces which nginx didn't like... Thank you! > Hi, > I have installed nginx-extras on Debian Wheezy. nginx -V shows: > --with-http_mp4_module , but if I write mp4; in location / {} it says > on > restart: nginx: [emerg] unknown directive " mp4" . the > same with ssl_protocols directive which I put in nginx.conf - http > {} ... Why? > > Here some pastes: > [ http://p.ngx.cc/4f ] - sites-available > [ http://p.ngx.cc/f7 ] - nginx.conf > [ http://p.ngx.cc/dd ] - nginx -V > > Thank you From nginx-forum at nginx.us Tue Dec 16 13:23:18 2014 From: nginx-forum at nginx.us (akamatgi) Date: Tue, 16 Dec 2014 08:23:18 -0500 Subject: Issue in ngx_http_parse_request_line In-Reply-To: <2851021.1Q9xj8CXJC@vbart-laptop> References: <2851021.1Q9xj8CXJC@vbart-laptop> Message-ID: Thanks Valentin for digging out the mod_security fix. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255493,255528#msg-255528 From nginx-forum at nginx.us Tue Dec 16 14:12:39 2014 From: nginx-forum at nginx.us (umarizal) Date: Tue, 16 Dec 2014 09:12:39 -0500 Subject: all visitor have same IP (my server IP) In-Reply-To: <434257e40d9a1e57d71ed197a4935876.NginxMailingListEnglish@forum.nginx.org> References: <434257e40d9a1e57d71ed197a4935876.NginxMailingListEnglish@forum.nginx.org> Message-ID: activa Wrote: ------------------------------------------------------- > i have added the setting to the nginx.conf . > > but still i show more than 1500 connexion from the server ip . Did you solve the problem? I ask because I am having the same problem.
After installing Nginx, all visitors are marked with the same server IP instead of their real IP in scripts such as phpBB and MyBB, for example. I tried adding the lines reported by our friend Ruslan in the Nginx config file, but unfortunately it did not work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,225160,255529#msg-255529 From anoopalias01 at gmail.com Tue Dec 16 14:36:27 2014 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 16 Dec 2014 20:06:27 +0530 Subject: all visitor have same IP (my server IP) In-Reply-To: References: <434257e40d9a1e57d71ed197a4935876.NginxMailingListEnglish@forum.nginx.org> Message-ID: Are you seeing the same IP in apache? If yes then you need the following module in apache: https://github.com/gnif/mod_rpaf or http://httpd.apache.org/docs/trunk/mod/mod_remoteip.html -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Dec 16 16:26:48 2014 From: nginx-forum at nginx.us (talkingnews) Date: Tue, 16 Dec 2014 11:26:48 -0500 Subject: fastcgi_cache_purge line in config causes segmentation fault since 1.7.8 in nginx-extras In-Reply-To: References: <6467993.I48coaCffl@vbart-laptop> <2a398954e66cef8857c0254824845558.NginxMailingListEnglish@forum.nginx.org> Message-ID: Right, sorted! https://bugs.launchpad.net/bugs/1403054 New nginx-extras mainline is now available to fix the bug - my good deed for the day :) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255522,255534#msg-255534 From nginx-forum at nginx.us Tue Dec 16 17:32:17 2014 From: nginx-forum at nginx.us (umarizal) Date: Tue, 16 Dec 2014 12:32:17 -0500 Subject: all visitor have same IP (my server IP) In-Reply-To: References: Message-ID: <0fbc12e7ec6eef6daa291b936c53c52a.NginxMailingListEnglish@forum.nginx.org> Anoop Alias Wrote: ------------------------------------------------------- > Are you seeing the same IP in apache?
> > If yes then you need the following module in apache > https://github.com/gnif/mod_rpaf or > http://httpd.apache.org/docs/trunk/mod/mod_remoteip.html > > -- > *Anoop P Alias* Sorry, but I don't understand your question. How do I see it? Thank you very much! :D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,225160,255538#msg-255538 From nginx-forum at nginx.us Tue Dec 16 17:51:56 2014 From: nginx-forum at nginx.us (sandeepkolla99) Date: Tue, 16 Dec 2014 12:51:56 -0500 Subject: Efficient CRL checking at Nginx In-Reply-To: <20141215202852.GK45960@mdounin.ru> References: <20141215202852.GK45960@mdounin.ru> Message-ID: Hi Maxim, Thanks for your help on this issue. I get a new crl file every day. Do we need to reload the whole Nginx conf? Is there any way to reload only the crl file? Regards, Sandeep Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255509,255539#msg-255539 From pchychi at gmail.com Tue Dec 16 17:53:59 2014 From: pchychi at gmail.com (Payam Chychi) Date: Tue, 16 Dec 2014 09:53:59 -0800 Subject: all visitor have same IP (my server IP) In-Reply-To: <0fbc12e7ec6eef6daa291b936c53c52a.NginxMailingListEnglish@forum.nginx.org> References: <0fbc12e7ec6eef6daa291b936c53c52a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <549071B7.3070209@gmail.com> On 2014-12-16, 9:32 AM, umarizal wrote: > Anoop Alias Wrote: > ------------------------------------------------------- >> Are you seeing the same IP in apache? >> >> If yes then you need the following module in apache >> https://github.com/gnif/mod_rpaf or >> http://httpd.apache.org/docs/trunk/mod/mod_remoteip.html >> >> -- >> *Anoop P Alias* > Sorry, but I don't understand your question. How do I see it?
> > Thank you very much! :D > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,225160,255538#msg-255538 There are multiple areas/ways of getting the non-proxied IP address. One of Anoop's recommendations was that you utilize the remoteip method. This isn't a bug; it's simply the fact that you are proxying the connection, so the connections will in fact look to come from the address of your nginx server. The remoteip (realip) method allows nginx to capture the source IP address of the connection and then pass it to Apache; you then configure Apache to look for, say, "remote_addr" and utilize the value for whatever you are doing. This isn't perfect, but it works for my testing and allows me to see how the data is being passed:

<?php
echo "X Forward: " . $_SERVER['HTTP_X_FORWARDED_FOR'] . "<br>";
echo "X Forward single: " . $_SERVER['HTTP_X_FORWARDED'] . "<br>";
echo "HTTP_X_CLUSTER_CLIENT_IP: " . $_SERVER['HTTP_X_CLUSTER_CLIENT_IP'] . "<br>";
echo "HTTP_FORWARDED_FOR: " . $_SERVER['HTTP_FORWARDED_FOR'] . "<br>";
echo "HTTP_FORWARDED: " . $_SERVER['HTTP_FORWARDED'] . "<br>";
echo "Client IP: " . $_SERVER['HTTP_CLIENT_IP'] . "<br>";
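For reference, the nginx side that feeds headers to a backend test page like the one above is only a couple of directives. A minimal sketch follows; the upstream address and the header names here are placeholder assumptions, not a drop-in config:

```nginx
location / {
    # pass the real client address upstream; the backend can read it
    # back directly, or via mod_remoteip on the Apache side with e.g.
    #   RemoteIPHeader X-Real-IP
    #   RemoteIPInternalProxy 127.0.0.1
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_pass http://127.0.0.1:8080;
}
```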
?> hope this helps From nginx-forum at nginx.us Tue Dec 16 19:01:05 2014 From: nginx-forum at nginx.us (badtzhou) Date: Tue, 16 Dec 2014 14:01:05 -0500 Subject: max_size on proxy_cache_path not working correctly Message-ID: <64bc0877f414765e046baa01a8f9b210.NginxMailingListEnglish@forum.nginx.org> I am having a problem with the max_size setting on proxy_cache_path. I am trying to set a limit on the disk caching space that can be used by nginx. It works initially, but after a while the disk space used by nginx will grow much larger than the max_size limit that I set. It feels like the nginx cache manager dies or stops tracking the cache for some reason. What could possibly have caused this? Is there a limit on how many cached objects you can have? I am running nginx version 1.7.6 Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255543,255543#msg-255543 From nginx-forum at nginx.us Wed Dec 17 06:14:17 2014 From: nginx-forum at nginx.us (khav) Date: Wed, 17 Dec 2014 01:14:17 -0500 Subject: Users not able to watch videos greater than 10 mins Message-ID: My site does video streaming and users are not able to play videos greater than 10 mins. After 10-11 mins flowplayer stops playing the video, but I don't get any error from php/nginx/flowplayer. Is there anything wrong with my fastcgi buffers? fastcgi_pass unix:/tmp/php5-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_connect_timeout 60; fastcgi_send_timeout 300; fastcgi_read_timeout 300; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_max_temp_file_size 0; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255549,255549#msg-255549 From nginx-forum at nginx.us Wed Dec 17 11:15:28 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 17 Dec 2014 06:15:28 -0500 Subject: [ANN] Windows nginx 1.7.9.1 Gryphon Message-ID: <37cb270a98b782ff398be63208388d55.NginxMailingListEnglish@forum.nginx.org> 12:00
17-12-2014 nginx 1.7.9.1 Gryphon Based on nginx 1.7.9 (12-12-2014, last changeset 5945:99751fe3bc3b) with; + win32 file properties + nginx-http-concat v1.2.2 (https://github.com/alibaba/nginx-http-concat) + prove04.zip (onsite), a Windows Test_Suite (updated 7-12-2014) + cache_purge v2.2 (upgraded 4-12-2014) + lua-nginx-module v0.9.13 (upgraded 12-12-2014) + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' * This is the last scheduled release for 2014, have a great xmas and see ya'all in 2015 ! Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255551,255551#msg-255551 From martijn at vmaurik.nl Wed Dec 17 12:05:23 2014 From: martijn at vmaurik.nl (martijn at vmaurik.nl) Date: Wed, 17 Dec 2014 12:05:23 +0000 Subject: Boringssl + Nginx 1.8.7 Message-ID: Hi, I am trying to compile boringssl against nginx. 
I've got an error while compiling: export NGINX_VERSION 1.7.8 export MODULESDIR /usr/src/nginx-modules export NPS_VERSION 1.9.32.2 I run ./configure: ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-file-aio --with-ipv6 --with-http_ssl_module --with-http_spdy_module --with-cc-opt="-I ../boringssl/.openssl/include/ -g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2" --with-ld-opt="-L ../boringssl/.openssl/lib -Wl,-Bsymbolic-functions -Wl,-z,relro -Wl,--as-needed" --add-module=${MODULESDIR}/ngx_pagespeed-release-${NPS_VERSION}-beta --add-module=${MODULESDIR}/ngx_http_enhanced_memcached_module --add-module=${MODULESDIR}/headers-more-nginx-module The error which I get is src/event/ngx_event_openssl.c: In function 'ngx_ssl_handshake': src/event/ngx_event_openssl.c:1090:46: error: 'SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS' undeclared (first use in this function) c->ssl->connection->s3->flags |= SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS; ^ src/event/ngx_event_openssl.c:1090:46: note: each undeclared identifier is reported only once for each function it appears in make[1]: *** [objs/src/event/ngx_event_openssl.o] Error 1 make[1]: Leaving directory `/usr/src/nginx-1.7.8' What do I do wrong?
URL: From luky-37 at hotmail.com Wed Dec 17 13:06:47 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 17 Dec 2014 14:06:47 +0100 Subject: Boringssl + Nginx 1.8.7 In-Reply-To: References: Message-ID: > Hi, > > I am trying to compile boringssl against nginx. > I've got an error while compiling: This is due to: https://boringssl.googlesource.com/boringssl/+/e319a2f73a30147ae118190397a558b8a2a24733%5E%21/ Can you try the attached patch against nginx which safeguards SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS? Lukas -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx-safeguard-SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS.diff Type: application/octet-stream Size: 544 bytes Desc: not available URL: From black.fledermaus at arcor.de Wed Dec 17 13:49:32 2014 From: black.fledermaus at arcor.de (basti) Date: Wed, 17 Dec 2014 14:49:32 +0100 Subject: Users not able to watch videos greater than 10 mins In-Reply-To: References: Message-ID: <549189EC.1080105@arcor.de> Hello, is there something relevant to this in the nginx/php error log?
On 17.12.2014 07:14, khav wrote: > My site does video streaming and users are not able to play videos greater > than 10 mins.After 10-11 mins flowplayer stop playing the video but i don't > get any error either by php/nginx/flowplayer > Is there anything wrong with my fastcgi buffers > > fastcgi_pass unix:/tmp/php5-fpm.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_connect_timeout 60; > fastcgi_send_timeout 300; > fastcgi_read_timeout 300; > fastcgi_buffer_size 128k; > fastcgi_buffers 256 16k; > fastcgi_busy_buffers_size 256k; > fastcgi_max_temp_file_size 0; > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255549,255549#msg-255549 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From martijn at vmaurik.nl Wed Dec 17 14:10:20 2014 From: martijn at vmaurik.nl (martijn at vmaurik.nl) Date: Wed, 17 Dec 2014 14:10:20 +0000 Subject: Boringssl + Nginx 1.8.7 In-Reply-To: References: Message-ID: Works thanks December 17 2014 2:07 PM, "Lukas Tribus" wrote: >> Hi, >> >> I am trying to compile boringssl against nginx. >> I've got an error while compiling: > > This is due to: > https://boringssl.googlesource.com/boringssl/+/e319a2f73a30147ae118190397a558b8a2a24733%5E%21 > > Can you try the attached patch against nginx which > safeguards SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS? > > Lukas > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Wed Dec 17 14:15:54 2014 From: nginx-forum at nginx.us (grydan) Date: Wed, 17 Dec 2014 09:15:54 -0500 Subject: Nginx case insensitive URL and rewrite URL? 
Message-ID: I have Nginx configured as reverse proxy server { listen 80; server_name www.pluto.com; location / { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://127.0.0.1:8080; } } I need that the URL request from any combination of FOLDER1 (case insensitive) is rewritten from the URL http://www.pippo.com/FOLDER1/etc..etc..etc.. to (always lowercase folder1) http://12.0.0.1/folder1/etc..etc..etc... where etc..etc..etc. = anything that I need to keep How can I do this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255565,255565#msg-255565 From nginx-forum at nginx.us Wed Dec 17 15:03:23 2014 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 17 Dec 2014 10:03:23 -0500 Subject: Nginx case insensitive URL and rewrite URL? In-Reply-To: References: Message-ID: <507be11e97e7f978bda0e7b1cfea63ed.NginxMailingListEnglish@forum.nginx.org> https://github.com/replay/ngx_http_lower_upper_case would be the easiest solution. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255565,255570#msg-255570 From nginx-forum at nginx.us Wed Dec 17 15:23:36 2014 From: nginx-forum at nginx.us (grydan) Date: Wed, 17 Dec 2014 10:23:36 -0500 Subject: Nginx case insensitive URL and rewrite URL?
In-Reply-To: <2302b412a8493bac7fe592d2d19fcec5.NginxMailingListEnglish@forum.nginx.org> References: <507be11e97e7f978bda0e7b1cfea63ed.NginxMailingListEnglish@forum.nginx.org> <2302b412a8493bac7fe592d2d19fcec5.NginxMailingListEnglish@forum.nginx.org> Message-ID: grydan Wrote: ------------------------------------------------------- > How can I use this module? Re-compile like " --add-module=objs/lib/ngx_http_lower_upper_case" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255565,255575#msg-255575 From mdounin at mdounin.ru Wed Dec 17 15:47:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Dec 2014 18:47:20 +0300 Subject: Efficient CRL checking at Nginx In-Reply-To: References: <20141215202852.GK45960@mdounin.ru> Message-ID: <20141217154720.GQ45960@mdounin.ru> Hello! On Tue, Dec 16, 2014 at 12:51:56PM -0500, sandeepkolla99 wrote: > Hi Maxim, > > Thanks for your help on this issue. I get new crl file everyday. Do we > need to reload the whole Nginx conf?. Is there any way to reload only crl > file?. Yes, you have to reload the whole nginx config. There is no way to reload only the CRL file. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 17 15:47:59 2014 From: nginx-forum at nginx.us (grydan) Date: Wed, 17 Dec 2014 10:47:59 -0500 Subject: Nginx case insensitive URL and rewrite URL? In-Reply-To: References: <507be11e97e7f978bda0e7b1cfea63ed.NginxMailingListEnglish@forum.nginx.org> <2302b412a8493bac7fe592d2d19fcec5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9606b458abcb14719b8a25c66f48c8e4.NginxMailingListEnglish@forum.nginx.org> OK, but after that, how can I use this in my scenario?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255565,255576#msg-255576 From nginx-forum at nginx.us Wed Dec 17 16:18:05 2014 From: nginx-forum at nginx.us (sandeepkolla99) Date: Wed, 17 Dec 2014 11:18:05 -0500 Subject: Efficient CRL checking at Nginx In-Reply-To: <20141217154720.GQ45960@mdounin.ru> References: <20141217154720.GQ45960@mdounin.ru> Message-ID: <23ac810b9b4d0f3f03fc8ffa9743aca3.NginxMailingListEnglish@forum.nginx.org> Thank you very much for your help on this. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255509,255580#msg-255580 From francis at daoine.org Wed Dec 17 17:00:32 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 17 Dec 2014 17:00:32 +0000 Subject: Nginx case insensitive URL and rewrite URL? In-Reply-To: References: Message-ID: <20141217170032.GM15670@daoine.org> On Wed, Dec 17, 2014 at 09:15:54AM -0500, grydan wrote: Hi there, > I need that the URL request from any combination of FOLDER1 (case > insensitive) is rewrite from URL > > http://www.pippo.com/FOLDER1/etc..etc..etc.. > to (always lowercase folder1) > > http://12.0.0.1/folder1/etc..etc..etc... > where etc..etc..etc. = anything that I need to keep Use a location{} that matches these requests, and give proxy_pass the full uri. Something like location ~* ^/folder1/(.*) { proxy_pass http://127.0.0.1:8080/folder1/$1$is_args$args; } although you probably want more directives in there too. Note that, while this does work in a test system, it might be contrary to the documentation at http://nginx.org/r/proxy_pass, so it may not work forever. I think that the fact that the regex matches the complete request uri probably means that it is ok, though. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Dec 17 17:06:57 2014 From: nginx-forum at nginx.us (rthur) Date: Wed, 17 Dec 2014 12:06:57 -0500 Subject: Default ssl server and sni Message-ID: I have a bunch of https websites available over a single IP working with sni on nginx 1.0.15. 
Currently, anyone accessing a domain name that resolves to the same IP is greeted with a certificate mismatch error due to nginx choosing the first server as the default. Instead of using the first server as the default, I'd like to create a catch-all https server that drops/resets the tcp connection. As such all domain names that have an associated server block would still work using sni, but IPs or other domain names would simply result in a dropped connection. Unfortunately, I can't seem to get this to work. If I define the server block below, all requests are handled by the catch-all server, and all the websites become inaccessible. Here is the server block I've defined: server { listen 443 default_server; return 443; } Does anyone know how I could achieve this? Thanks! Arthur Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255583,255583#msg-255583 From mdounin at mdounin.ru Wed Dec 17 20:45:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 17 Dec 2014 23:45:12 +0300 Subject: max_size on proxy_cache_path not working correctly In-Reply-To: <64bc0877f414765e046baa01a8f9b210.NginxMailingListEnglish@forum.nginx.org> References: <64bc0877f414765e046baa01a8f9b210.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141217204512.GU45960@mdounin.ru> Hello! On Tue, Dec 16, 2014 at 02:01:05PM -0500, badtzhou wrote: > I am having problem with max_size setting on proxy_cache_path. I trying to > set a limit on disk caching space that can be used by nginx. > It works initially, but after a while the disk space used by nginx will grow > much larger than max_size limit that I set. > Feels like nginx cache manager die or stop tracking the caching for some > reason. You may want to try looking into logs and checking if the cache manager process is running (and if it is - what it's doing). 
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 17 22:55:44 2014 From: nginx-forum at nginx.us (tom123) Date: Wed, 17 Dec 2014 17:55:44 -0500 Subject: limit_conn working for websocket proxy? Message-ID: Hi Nginx experts, I am looking for a way to limit connections per ip for proxied websockets through nginx. I found the ngx_http_limit_conn_module settings in the manual. But I am not sure if this works also for proxied websockets. Does anyone have a suggestion? Am I on the right track? http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html#limit_conn Thanks a lot for your help. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255605,255605#msg-255605 From vbart at nginx.com Wed Dec 17 23:12:54 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 18 Dec 2014 02:12:54 +0300 Subject: limit_conn working for websocket proxy? In-Reply-To: References: Message-ID: <3130845.ycj5cZjS9b@vbart-laptop> On Wednesday 17 December 2014 17:55:44 tom123 wrote: > Hi Nginx experts, > > I am looking for a way to limit connections per ip for proxied websockets > through nginx. I found the ngx_http_limit_conn_module settings in the > manual. But I am not sure if this works also for proxied websockets. Does > anyone have a suggestion? Am I on the right track? > [..] Yes, it works. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu Dec 18 01:02:47 2014 From: nginx-forum at nginx.us (tatroc) Date: Wed, 17 Dec 2014 20:02:47 -0500 Subject: 499 error reverse proxy Message-ID: <2121859c28383f8203de19b5d30e9663.NginxMailingListEnglish@forum.nginx.org> Hi folks, wondering if I could get your input. I have nginx 1.7.3 configured as a reverse proxy. The system seems to work fine when not under heavy load. However, there are periods of time when we have extreme load because of ecom promotions. The $request_time gets up to 15 seconds. Normally the $request_time is only a few seconds.
The client machine is making a web service call to the Nginx reverse proxy and the client web service making the call is designed to abort the connection if it takes longer than 15 seconds. That is why you see all the $request_time's stop at 15 seconds. The $upstream_addr field is populated with "-". What would cause the $request_time to take longer than 15 seconds? It seems the nginx reverse proxy never passes the connection on to the upstream server. 10.8.165.116 - - [13/Dec/2014:07:58:12 -0600] "POST /Common/service/GateService/GateService.svc HTTP/1.1" 499 0 "-" "JAX-WS RI 2.1.6 in JDK 6" "-" LB_req_Time: 15.000 WebSrv_rspTime: - Req_size: 8292 HTTP_content_size: 7875 10.8.165.112 - - [13/Dec/2014:07:58:13 -0600] "POST /Common/service/GateService/GateService.svc HTTP/1.1" 499 0 "-" "JAX-WS RI 2.1.6 in JDK 6" "-" LB_req_Time: 14.814 WebSrv_rspTime: - Req_size: 12024 HTTP_content_size: 11606 10.8.165.113 - - [13/Dec/2014:07:58:13 -0600] "POST /Common/service/GateService/GateService.svc HTTP/1.1" 499 0 "-" "JAX-WS RI 2.1.6 in JDK 6" "-" LB_req_Time: 14.808 WebSrv_rspTime: - Req_size: 6230 HTTP_content_size: 5813 10.8.165.117 - - [13/Dec/2014:07:58:13 -0600] "POST /Common/service/GateService/GateService.svc HTTP/1.1" 499 0 "-" "JAX-WS RI 2.1.6 in JDK 6" "-" LB_req_Time: 15.000 WebSrv_rspTime: - Req_size: 6249 HTTP_content_size: 5832 I have done some tuning on the kernel params, here they are: net.ipv4.ip_forward = 0 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 kernel.msgmnb = 65536 kernel.msgmax = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 net.ipv4.ip_nonlocal_bind = 1 net.ipv4.tcp_tw_reuse = 1 net.ipv4.tcp_max_orphans = 60000 net.ipv4.tcp_synack_retries = 2 net.ipv4.tcp_syn_retries = 2 net.ipv4.icmp_echo_ignore_broadcasts = 1 kernel.exec-shield = 1 kernel.randomize_va_space = 1 net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.default.send_redirects = 0 net.ipv4.conf.all.send_redirects = 0 net.ipv4.conf.all.rp_filter = 1 net.ipv4.conf.all.accept_source_route = 0 net.ipv4.conf.all.accept_redirects = 0 net.ipv4.conf.all.secure_redirects = 0 net.ipv4.conf.default.accept_redirects = 0 net.ipv4.conf.default.secure_redirects = 0 net.ipv4.ip_local_port_range = 1024 65535 net.ipv4.tcp_window_scaling = 1 net.core.somaxconn = 65535 net.core.netdev_max_backlog = 16384 net.ipv4.tcp_max_syn_backlog = 65536 net.ipv4.tcp_max_tw_buckets = 1440000 net.core.rmem_default = 8388608 net.core.rmem_max = 16777216 net.core.wmem_max = 16777216 net.ipv4.tcp_rmem = 4096 87380 16777216 net.ipv4.tcp_wmem = 4096 65536 16777216 net.ipv4.tcp_congestion_control = cubic fs.file-max = 3000000 net.ipv4.tcp_slow_start_after_idle = 0 net.ipv4.tcp_fin_timeout = 15 ###########nginx.conf############ user nginx; worker_processes 4; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; worker_rlimit_nofile 40960; events { use epoll; # accept X connections per worker process worker_connections 10240; # accept more than 1 connection at a time multi_accept on; } timer_resolution 500ms; ######### server config ################## upstream secureProd { #sticky; least_conn; server server1:443 weight=10 max_fails=3 fail_timeout=3s; server server2:443 weight=10 max_fails=3 fail_timeout=3s; server server3:443 weight=10 max_fails=3 fail_timeout=3s; server server4:443 weight=10 max_fails=3 fail_timeout=3s; keepalive 100; } proxy_connect_timeout 75; proxy_send_timeout 75; proxy_read_timeout 75; # do not transfer http request to next server on timeout proxy_next_upstream off; client_max_body_size 10m; proxy_buffering on; client_body_buffer_size 10m; proxy_buffer_size 32k; proxy_buffers 1024 32k; large_client_header_buffers 20 8k; location / { index index.html; # needed for HTTPS #client_body_buffer_size 2m; #client_max_body_size 10m; #proxy_buffering on; #client_body_buffer_size 10m; #proxy_buffer_size 16k;
#proxy_buffers 2048 8k; proxy_set_header X-Forwarded-Proto https; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_http_version 1.1; proxy_max_temp_file_size 0; proxy_pass https://secureProd; } #end location } #end server netstat -s Ip: 832857527 total packets received 324844 with invalid addresses 0 forwarded 0 incoming packets discarded 790582949 incoming packets delivered 969556973 requests sent out 21248 outgoing packets dropped 1516 fragments dropped after timeout 12544 reassemblies required 2920 packets reassembled ok 1516 packet reassembles failed 2981 fragments received ok 5962 fragments created Icmp: 33476 ICMP messages received 0 input ICMP message failed. ICMP input histogram: destination unreachable: 25335 timeout in transit: 31 echo requests: 8110 12995 ICMP messages sent 0 ICMP messages failed ICMP output histogram: destination unreachable: 3023 echo request: 1862 echo replies: 8110 IcmpMsg: InType3: 25335 InType8: 8110 InType11: 31 OutType0: 8110 OutType3: 3023 OutType8: 1862 Tcp: 68091793 active connections openings 17998625 passive connection openings 23993 failed connection attempts 12419 connection resets received 69 connections established 765636836 segments received 952207983 segments send out 563253 segments retransmited 6445 bad segments received. 39498 resets sent Udp: 20177272 packets received 1 packets to unknown port received. 
0 packet receive errors 14411997 packets sent UdpLite: TcpExt: 3965 invalid SYN cookies received 20824 resets received for embryonic SYN_RECV sockets 1 packets pruned from receive queue because of socket buffer overrun 7 ICMP packets dropped because they were out-of-window 14642125 TCP sockets finished time wait in fast timer 217 packets rejects in established connections because of timestamp 38764858 delayed acks sent 177 delayed acks further delayed because of locked socket Quick ack mode was activated 42592 times 40 packets directly queued to recvmsg prequeue. 4 packets directly received from prequeue 143786688 packets header predicted 220243967 acknowledgments not containing data received 81227601 predicted acknowledgments 2200 times recovered from packet loss due to SACK data 2 bad SACKs received Detected reordering 1 times using SACK Detected reordering 2 times using time stamp 2 congestion windows partially recovered using Hoe heuristic TCPDSACKUndo: 27138 380316 congestion windows recovered after partial ack 1348 TCP data loss events TCPLostRetransmit: 4 14291 timeouts after SACK recovery 1252 timeouts in loss state 1658 fast retransmits 2316 forward retransmits 8351 retransmits in slow start 525882 other TCP timeouts 1005 sack retransmits failed 48 packets collapsed in receive queue due to low socket buffer 42624 DSACKs sent for old packets 131 DSACKs sent for out of order packets 462648 DSACKs received 37 DSACKs for out of order packets received 31 connections reset due to unexpected data 5596 connections reset due to early user close 136 connections aborted due to timeout TCPDSACKIgnoredNoUndo: 19299 TCPSackShifted: 239 TCPSackMerged: 455 TCPSackShiftFallback: 484195 TCPChallengeACK: 5832 TCPSYNChallenge: 6445 IpExt: InMcastPkts: 24767376 OutMcastPkts: 2360767 InBcastPkts: 39238283 InOctets: 309264166239 OutOctets: 352578759904 InMcastOctets: 1643443932 OutMcastOctets: 94433873 InBcastOctets: 5296847155 Posted at Nginx Forum: 
http://forum.nginx.org/read.php?2,255609,255609#msg-255609 From braulio at eita.org.br Thu Dec 18 11:24:41 2014 From: braulio at eita.org.br (=?UTF-8?Q?Br=C3=A1ulio_Bhavamitra?=) Date: Thu, 18 Dec 2014 08:24:41 -0300 Subject: SPDY for http? Message-ID: Hello all, I have a nginx site configured with spdy on https. But after reading https://developers.google.com/speed/articles/spdy-for-mobile I decided to try spdy also for http. But strangely, after reloading the page on http, the browser keeps loading and never ends. https spdy works normally. I'm using nginx 1.7.8. Is http spdy supported? cheers, bráulio -- "Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua ideologia. Morra por sua ideologia" P.R. Sarkar EITA - Educação, Informação e Tecnologias para Autogestão http://cirandas.net/brauliobo http://eita.org.br "Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e destruídas nas fases de extroversão e introversão do fluxo imaginativo cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente, naquele momento, essa pessoa é a única proprietária daquilo que ela imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha por um milharal também imaginado, a pessoa imaginada não é a propriedade desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso a propriedade deste universo é de Brahma, e não dos microcosmos que também foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo, mutável ou imutável, pertence a um indivíduo em particular; tudo é o patrimônio comum de todos."
Restante do texto em http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia From nginx-forum at nginx.us Thu Dec 18 12:52:05 2014 From: nginx-forum at nginx.us (khav) Date: Thu, 18 Dec 2014 07:52:05 -0500 Subject: Users not able to watch videos greater than 10 mins In-Reply-To: References: Message-ID: <71ac1978f8d75bce0288fdb5de36da98.NginxMailingListEnglish@forum.nginx.org> I am seeing these errors in /var/log/nginx/error.log 2014/12/18 12:50:28 [error] 45444#0: OCSP_basic_verify() failed (SSL: error:27069065:OCSP routines:OCSP_basic_verify:certificate verify error:Verify error:unable to get issuer certificate) while requesting certificate status, responder: ocsp2.globalsign.com 2014/12/18 12:50:40 [alert] 29754#0: *535730 open socket #146 left in connection 117 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255549,255621#msg-255621 From mdounin at mdounin.ru Thu Dec 18 12:58:51 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Dec 2014 15:58:51 +0300 Subject: 499 error reverse proxy In-Reply-To: <2121859c28383f8203de19b5d30e9663.NginxMailingListEnglish@forum.nginx.org> References: <2121859c28383f8203de19b5d30e9663.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141218125850.GB79300@mdounin.ru> Hello! On Wed, Dec 17, 2014 at 08:02:47PM -0500, tatroc wrote: > Hi folks, wondering if I could get your input. > > I have nginx 1.7.3 configured as a reverse proxy. The system seems to work > fine when not under heavy load. However, there are periods of time when we > have extreme load because of ecom promotions. > > The $request_time gets up to 15 seconds. Normally the $request_time is only > a few seconds. The client machine is making a web service call to the Nginx > reverse proxy and the client web service making the call is designed to > abort the connection if it takes longer than 15 seconds. That is why you > see all the $request_time's stop at 15 seconds. > > The $upstream_addr field is populated with "-".
> What would cause the $request_time to take longer than 15 seconds? > It seems the nginx reverse proxy never passes the connection on to the > upstream server. > > 10.8.165.116 - - [13/Dec/2014:07:58:12 -0600] "POST > /Common/service/GateService/GateService.svc HTTP/1.1" 499 0 "-" "JAX-WS RI > 2.1.6 in JDK 6" "-" LB_req_Time: 15.000 WebSrv_rspTime: - Req_size: 8292 > HTTP_content_size: 7875 [...] As requests are POSTs, likely the problem is with sending the request body. Probably due to network problems. -- Maxim Dounin http://nginx.org/ From luky-37 at hotmail.com Thu Dec 18 13:02:36 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 18 Dec 2014 14:02:36 +0100 Subject: SPDY for http? In-Reply-To: References: Message-ID: > Hello all, > > I have a nginx site configured with spdy on https. > > But after reading > https://developers.google.com/speed/articles/spdy-for-mobile I decided > to try spdy also for http. > > But strangely, after reloading the page on http, the browser keeps > loading and never ends. https spdy works normally. > > I'm using nginx 1.7.8. > > Is http spdy supported? There is no HTTP SPDY. Plaintext SPDY has a single use-case: when a frontend proxy handles SSL/TLS and negotiates (via NPN or ALPN) SPDY. You can not connect to plaintext SPDY via browsers of any kind.
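A rough, untested sketch of that single use-case (addresses, names and paths here are made up): the frontend terminates SSL/TLS, negotiates SPDY via NPN/ALPN, and forwards the decrypted stream to an nginx backend listening for plaintext SPDY:

```nginx
# backend nginx behind an SSL-terminating frontend (hypothetical setup):
# no "ssl" on the listen socket, because the frontend already decrypted
# the traffic and negotiated SPDY via NPN/ALPN on the client side
server {
    listen 127.0.0.1:8080 spdy;
    server_name backend.example.com;

    location / {
        root /var/www/html;
    }
}
```

A browser only ever sees the TLS side; the plaintext SPDY socket is reachable by the frontend proxy alone.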
Lukas From nginx-forum at nginx.us Thu Dec 18 13:37:45 2014 From: nginx-forum at nginx.us (khav) Date: Thu, 18 Dec 2014 08:37:45 -0500 Subject: Users not able to watch videos greater than 10 mins In-Reply-To: <71ac1978f8d75bce0288fdb5de36da98.NginxMailingListEnglish@forum.nginx.org> References: <71ac1978f8d75bce0288fdb5de36da98.NginxMailingListEnglish@forum.nginx.org> Message-ID: <07c5e3a3edead5339f62472b9987a9a6.NginxMailingListEnglish@forum.nginx.org> I fixed the OCSP error and I keep monitoring to see if the video stop issue happens again... Any idea how to fix the open socket issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255549,255626#msg-255626 From mdounin at mdounin.ru Thu Dec 18 13:52:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Dec 2014 16:52:50 +0300 Subject: Users not able to watch videos greater than 10 mins In-Reply-To: <07c5e3a3edead5339f62472b9987a9a6.NginxMailingListEnglish@forum.nginx.org> References: <71ac1978f8d75bce0288fdb5de36da98.NginxMailingListEnglish@forum.nginx.org> <07c5e3a3edead5339f62472b9987a9a6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141218135249.GD79300@mdounin.ru> Hello! On Thu, Dec 18, 2014 at 08:37:45AM -0500, khav wrote: > I fixed the OCSP error and I keep monitoring to see if the video stop issue > happens again... Any idea how to fix the open socket issue? First of all, it's a good idea to upgrade to the latest version, 1.7.8 as of now. If you are able to reproduce the issue with the latest version, see this page for some tips on debugging: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Thu Dec 18 14:05:11 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 18 Dec 2014 17:05:11 +0300 Subject: SPDY for http? In-Reply-To: References: Message-ID: <4115594.NceToabDnf@vbart-workstation> On Thursday 18 December 2014 08:24:41 Bráulio Bhavamitra wrote: > Hello all, > > I have a nginx site configured with spdy on https.
> > But after reading > https://developers.google.com/speed/articles/spdy-for-mobile I decided > to try spdy also for http. > > But strangely, after reloading the page on http, the browser keeps > loading and never ends. https spdy works normally. > > I'm using nginx 1.7.8. > > Is http spdy supported? > AFAIK, the only browser that can use spdy over a plain tcp connection is Chrome (or Chromium), but in this case it should be loaded with the "--use-spdy=no-ssl" command line argument. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu Dec 18 16:39:05 2014 From: nginx-forum at nginx.us (khav) Date: Thu, 18 Dec 2014 11:39:05 -0500 Subject: Users not able to watch videos greater than 10 mins In-Reply-To: <20141218135249.GD79300@mdounin.ru> References: <20141218135249.GD79300@mdounin.ru> Message-ID: Maxim, I am already using the latest Nginx version (1.7.8) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255549,255636#msg-255636 From mdounin at mdounin.ru Thu Dec 18 19:44:21 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 18 Dec 2014 22:44:21 +0300 Subject: Users not able to watch videos greater than 10 mins In-Reply-To: References: <20141218135249.GD79300@mdounin.ru> Message-ID: <20141218194420.GL79300@mdounin.ru> Hello! On Thu, Dec 18, 2014 at 11:39:05AM -0500, khav wrote: > Maxim, I am already using the latest Nginx version (1.7.8) So follow the second part of the advice - see http://wiki.nginx.org/Debugging. Some trivial things to check: - If you are using 3rd party modules, make sure you see the problem without them. - If you are using spdy, it may be a good idea to try to see if it's what causes your problem. There is at least one report that it may cause socket leaks, at least in previous versions (1.6.1, AFAIR).
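For reference, the debug log described on that page is enabled with something like this (it only takes effect if the nginx binary was built with --with-debug; check the "nginx -V" output):

```nginx
# debug-level error log; requires a binary compiled with --with-debug,
# otherwise nginx will not emit the detailed debug events
error_log /var/log/nginx/debug.log debug;
```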
-- Maxim Dounin http://nginx.org/ From nkadel at skyhookwireless.com Thu Dec 18 22:21:29 2014 From: nkadel at skyhookwireless.com (Nico Kadel-Garcia) Date: Thu, 18 Dec 2014 16:21:29 -0600 Subject: AWS load balancer, nginx, and Tomcat configuration help Message-ID: <8979372ED3EAC34AB51976E8171DE2BC3FC154CF@MBX48.exg5.exghost.com> I've been reviewing various web pages and mailing list references, and am hoping for a canonical answer. I've got a customized Tomcat configuration in AWS, and need to load balance multiple instances on each host of a load-balanced pool in AWS for a testable configuration. I'm using the AWS ELB load balancers in front of all the AWS hosts, and just started running nginx 1.6.2 with the relevant realip module compiled in to spread the load even further among multiple Tomcat instances on each host. Can anyone confirm that they have AWS-based hosts with the ELB load balancer in front, and nginx and Tomcat correctly recording the connecting IP address in the Tomcat logs? Or can anyone point out issues with this configuration? I'm concerned that I've missed something needed in the Tomcat configuration. That was apparently working well with just the ELB load balancer in place.
http { # standard nginx settings left out left out of email # Recommended AWS settings from various Google documents set_real_ip_from 0.0.0.0/0; real_ip_header X-Forwarded-For; real_ip_recursive on; # Recommended values, the remote IP addresses are showing up in /var/log/nginx/access.log log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; server { listen 80 default_server; server_name _; location / { proxy_pass http://tomcat_servers; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } # nginx package standard values error_page 404 /404.html; location = /404.html { root /usr/share/nginx/html; } # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } } # HTTPS not currently used # Local tomcat instances upstream tomcat_servers { server 127.0.0.1:8080; server 127.0.0.1:8081; } } Nico Kadel-Garcia Lead DevOps Engineer nkadel at skyhookwireless.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists-nginx at swsystem.co.uk Fri Dec 19 11:01:30 2014 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Fri, 19 Dec 2014 11:01:30 +0000 Subject: SPDY for http? In-Reply-To: References: Message-ID: <5494058A.3050909@swsystem.co.uk> On 18/12/2014 13:02, Lukas Tribus wrote: >> Hello all, >> >> I have a nginx site configured with spdy on https. >> >> But after reading >> https://developers.google.com/speed/articles/spdy-for-mobile I decided >> to try spdy also for http. >> >> But strangely, after reloading the page on http, the browser keeps >> loading and never ends. https spdy works normally. >> >> I'm using nginx 1.7.8. >> >> Is http spdy supported? > > There is no HTTP SPDY. 
Plaintext SPDY has a single use-case: > when a frontend proxy handles SSL/TLS and negotiates (via NPN > or ALPN) SPDY. > > You can not connect to plaintext SPDY via browsers of any kind. Now I'm curious. I have a setup that uses nginx to terminate SSL (listen 443 ssl spdy) that proxies to varnish, which in turn proxies and routes to various nginx servers with only a listen 80 directive. If I'm understanding your statement correctly, if varnish and the backend nginx supported plaintext spdy, is it possible for a spdy connection all the way? Then I guess the real question becomes is there any advantage to this? Steve. From luky-37 at hotmail.com Fri Dec 19 11:27:39 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Fri, 19 Dec 2014 12:27:39 +0100 Subject: SPDY for http? In-Reply-To: <5494058A.3050909@swsystem.co.uk> References: , , <5494058A.3050909@swsystem.co.uk> Message-ID: > Now I'm curious. > > I have a setup that uses nginx to terminate SSL (listen 443 ssl spdy) > that proxies to varnish, which in turn proxies and routes to various > nginx servers with only a listen 80 directive. > > If I'm understanding your statement correctly, if varnish and the > backend nginx supported plaintext spdy, is it possible for a spdy > connection all the way? Correct. > Then I guess the real question becomes is there any advantage to this? That depends on the situation. Is the connection between your frontend nginx and varnish/nginx backends high latency? Do you have big per-connection costs from conntrack, etc? If on the other hand we are talking about a LAN here, then there is probably no point in doing so. This is most helpful when your frontend proxy doesn't support SPDY (for example haproxy), but terminates SSL/TLS. This way, you can tunnel plaintext SPDY to an nginx backend, without changing your architecture or replacing the frontend proxy software, as long as the frontend is capable of negotiation via NPN or ALPN.
Lukas From braulio at eita.org.br Fri Dec 19 11:40:44 2014 From: braulio at eita.org.br (=?UTF-8?Q?Br=C3=A1ulio_Bhavamitra?=) Date: Fri, 19 Dec 2014 08:40:44 -0300 Subject: SPDY for http? In-Reply-To: References: Message-ID: Thank you, Lukas! BTW, spdy is probably slower than http. http://www.guypo.com/not-as-spdy-as-you-thought/ On Thu, Dec 18, 2014 at 10:02 AM, Lukas Tribus wrote: >> Hello all, >> >> I have a nginx site configured with spdy on https. >> >> But after reading >> https://developers.google.com/speed/articles/spdy-for-mobile I decided >> to try spdy also for http. >> >> But strangely, after reloading the page on http, the browser keeps >> loading and never ends. https spdy works normally. >> >> I'm using nginx 1.7.8. >> >> Is http spdy supported? > > There is no HTTP SPDY. Plaintext SPDY has a single use-case: > when a frontend proxy handles SSL/TLS and negotiates (via NPN > or ALPN) SPDY. > > You can not connect to plaintext SPDY via browsers of any kind. > > > > Lukas > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- "Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua ideologia. Morra por sua ideologia" P.R. Sarkar EITA - Educação, Informação e Tecnologias para Autogestão http://cirandas.net/brauliobo http://eita.org.br "Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação da Mente Macrocósmica, e todas as entidades estão sendo criadas, preservadas e destruídas nas fases de extroversão e introversão do fluxo imaginativo cósmico. No âmbito pessoal, quando uma pessoa imagina algo em sua mente, naquele momento, essa pessoa é a única proprietária daquilo que ela imagina, e ninguém mais. Quando um ser humano criado mentalmente caminha por um milharal também imaginado, a pessoa imaginada não é
a propriedade desse milharal, pois ele pertence ao indivíduo que o está imaginando. Este universo foi criado na imaginação de Brahma, a Entidade Suprema, por isso a propriedade deste universo é de Brahma, e não dos microcosmos que também foram criados pela imaginação de Brahma. Nenhuma propriedade deste mundo, mutável ou imutável, pertence a um indivíduo em particular; tudo é o patrimônio comum de todos." Restante do texto em http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia From nginx-forum at nginx.us Fri Dec 19 15:03:13 2014 From: nginx-forum at nginx.us (jontobs) Date: Fri, 19 Dec 2014 10:03:13 -0500 Subject: Nginx "I'm exiting" error w/ TokuMX Message-ID: <9c21f4f15cec04b8726de8cffcf621c9.NginxMailingListEnglish@forum.nginx.org> Hello Everyone, We (tokutek www.tokutek.com) have a customer that is trying to connect to TokuMX with Nginx and is getting only an "I'm exiting" message. If you're not familiar with TokuMX, it is a more efficient, higher performance fork of MongoDB (also open source). Application side, we have changed nothing (same drivers, language, wire protocol, etc). Can someone explain what the connection process is and maybe point us to some code? Also, any plans to work with TokuMX in the future? Our code is available on github (https://github.com/Tokutek/mongo) Thanks, Jon Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255684,255684#msg-255684 From chencw1982 at gmail.com Fri Dec 19 15:57:45 2014 From: chencw1982 at gmail.com (Chuanwen Chen) Date: Fri, 19 Dec 2014 23:57:45 +0800 Subject: [ANNOUNCE] Tengine-2.1.0 released Message-ID: Hi folks, We are very excited to announce that Tengine-2.1.0 (development version) has been released.
You can either checkout the source code from GitHub: https://github.com/alibaba/tengine or download the tarball directly: http://tengine.taobao.org/download/tengine-2.1.0.tar.gz The highlight of this release is the SO_REUSEPORT option support, with which Tengine can have a performance improvement of up to 200% compared to Nginx, according to our simple benchmark: http://tengine.taobao.org/document/benchmark.html. Besides, resolving upstream domain names on the fly has been supported, so it is safe to change the IP address of an upstream server after starting or reloading. Tengine is now based on Nginx-1.6.2. And the full changelog is as follows: *) Feature: added support for collecting the running status of Tengine according to specific key (domain, url, etc). (cfsego) *) Feature: support the SO_REUSEPORT option, to improve performance on multicore systems. (monadbobo) *) Feature: support for resolving upstream domain names on the fly. (InfoHunter) *) Feature: support for rewriting to named locations. (yzprofile) *) Feature: added two parameters 'crop_keepx' and 'crop_keepy' to the directive 'image_filter'. (Lax) *) Feature: support for saving SSL sessions in consistent_hash module and session_sticky module. (dinic) *) Feature: support for compiling Tengine automatically in travis-ci.org. (Jamyn) *) Feature: support for FastCGI health check. (yzprofile) *) Feature: enhanced sysguard module. (InfoHunter) *) Feature: added a variable '$normalized_request', to get normalized request URIs. (yunkai) *) Feature: added wildcard support for 'include' directive in 'dso' block. (monadbobo) *) Feature: added the 'gzip_clear_etag' directive. (taoyuanyuan) *) Feature: added the 'unprintable' parameter to the 'log_escape' directive. (skoo87) *) Change: merged changes from nginx-1.6.2. (cfsego, taoyuanyuan, chobits) *) Change: now the order of servers in an upstream are random when initialized. (taoyuanyuan) *) Change: slab allocator free pages defragmentation. 
(chobits) *) Bugfix: SPDY/3 dropped the "delayed" flag when finalizing connection. (chobits) *) Bugfix: fixed SPDY/3 connection leak. (chobits) *) Bugfix: now don't truncate value of key to 255 bytes in limit_req module. (chobits) *) Bugfix: failed to parse /etc/resolv.conf with IPv6 addresses. (lifeibo) *) Bugfix: upstream rbtree bugfix. (taoyuanyuan) See our website for more details: http://tengine.taobao.org Have fun! -------------- next part -------------- An HTML attachment was scrubbed... URL: From t.spencer at spacesharing.eu Sat Dec 20 22:36:24 2014 From: t.spencer at spacesharing.eu (Tim Spencer | Spacesharing GmbH) Date: Sat, 20 Dec 2014 23:36:24 +0100 Subject: nginx proxy_pass configuration to virtualhost Message-ID: <5495F9E8.6060200@spacesharing.eu> Hi, I would like to redirect to an external URL which is hosted as an Apache virtual host. nginx resolves the host of the url, which obviously does little to resolve to the correct web root on the server. server { server_name localhost; location / { proxy_set_header Host $host; proxy_pass http://www.urlforvirtualhost.com; } } The question is how do I allow proxy_pass without nginx resolving the ip-address of the host? I have googled, but obviously too many search results feature nginx configuration for virtual hosts. Any hints would be kindly appreciated. Lo5t -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Sat Dec 20 23:18:03 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Sat, 20 Dec 2014 18:18:03 -0500 Subject: Exclude ip's from Nginx limit_req zone Message-ID: <983e911639d31a9f5ee461de0253b776.NginxMailingListEnglish@forum.nginx.org> Hi I am using this code to limit connections from ip's: Main nginx config: limit_conn_zone $binary_remote_addr zone=alpha:8m; limit_req_zone $binary_remote_addr zone=delta:8m rate=40r/s; Domain nginx conf: limit_conn alpha 5; limit_req zone=delta burst=80 nodelay; So a user can create only 5 connections per ip and can make 40 requests per second, with bursts of up to 80 requests. Now i want to exclude Cloudflare ip's from these connection limits. Any ideas how i can do it? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255697#msg-255697 From chrisstankevitz at gmail.com Sat Dec 20 23:46:08 2014 From: chrisstankevitz at gmail.com (Chris Stankevitz) Date: Sat, 20 Dec 2014 15:46:08 -0800 Subject: "One Time" authentication (+reverse proxy, pam, radius) Message-ID: Hello, I want to create a "reverse" proxy. I want users of the reverse proxy to authenticate to a radius server. I accomplished this by: nginx.conf: server { listen 443 ssl; server_name x.y.com; ssl_certificate /usr/local/etc/ssl/x.y.com.chain.crt; ssl_certificate_key /usr/local/etc/ssl/x.y.com.key; location / { auth_pam "Secure Zone"; auth_pam_service_name "nginx"; proxy_pass http://x.y.local; } } pam.d/nginx: auth required pam_radius.so This works... except the RADIUS password is actually a "one time password". It appears the web client retransmits the previously-accepted username/password for each proxied page. This will not work when using OTP (one time passwords). Can anyone suggest a way to achieve: 1. reverse proxy 2. the reverse-proxy authenticates the user (ideally using RADIUS or PAM) 3. 
the authentication is "cached" and not re-submitted for each page visited I imagine the only way to do this is to perform "authentication" in the "application layer" using some kind of custom CGI and cookies. Thank you, Chris From nginx-forum at nginx.us Sun Dec 21 01:08:01 2014 From: nginx-forum at nginx.us (rmkml) Date: Sat, 20 Dec 2014 20:08:01 -0500 Subject: Open source project Announcement: ETPLC support NGINX log format Message-ID: Hello, I am proud to announce that my open source project, ETPLC, supports the NGINX log format, for checking ~9088 threats in your web server or proxy logs! Go to http://etplc.org for checking and downloading... (or http://sourceforge.net/projects/etplc/) The ETPLC project supports Perl and Python 2 scripts (and Python 3, though not updated recently). Any comments/ideas/suggestions/volunteers/flames are welcome! Thx Community and @EmergingThreats Open Signature. Happy Detecting with ETPLC @Rmkml Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255701,255701#msg-255701 From nginx-forum at nginx.us Sun Dec 21 10:25:30 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 21 Dec 2014 05:25:30 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <983e911639d31a9f5ee461de0253b776.NginxMailingListEnglish@forum.nginx.org> References: <983e911639d31a9f5ee461de0253b776.NginxMailingListEnglish@forum.nginx.org> Message-ID: hi, does this link help? 
> http://gadelkareem.com/2012/03/25/limit-requests-per-ip-on-nginx-using-httplimitzonemodule-and-httplimitreqmodule-except-whitelist/ cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255703#msg-255703 From nginx-forum at nginx.us Sun Dec 21 10:31:47 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 21 Dec 2014 05:31:47 -0500 Subject: nginx proxy_pass configuration to virtualhost In-Reply-To: <5495F9E8.6060200@spacesharing.eu> References: <5495F9E8.6060200@spacesharing.eu> Message-ID: Hi tim, > Hi, > > I would like to redirect to an external URL which is hosted as a > apache > virtual host. redirect or proxy_pass? correct wording is important here > nginx resolves the host of the url which obviously does little to > resolve to the correct web root on the server. i dont understand what you mean here. if nginx doesnt resolve a dns-name how should it know how to deal with it? dns-names are for humans. > > | server { > server_name localhost; > location / { > proxy_set_header Host $host; > proxy_pass http://www.urlforvirtualhost.com; > } > } > | > > The question is how do I allow proxy_pass without nginx resolving the > ip-address of the host? but why dont you want nginx to resolve the IP? i'm not sure this will work as expected, except you put in the ip. 
but then the apache on the other side should be configured with the ip in the virtualhost cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255696,255704#msg-255704 From nginx at mfriebe.de Sun Dec 21 10:49:53 2014 From: nginx at mfriebe.de (Martin Frb) Date: Sun, 21 Dec 2014 10:49:53 +0000 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: References: <983e911639d31a9f5ee461de0253b776.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5496A5D1.5020709@mfriebe.de> > limit_conn_zone $binary_remote_addr zone=alpha:8m; > limit_req_zone $binary_remote_addr zone=delta:8m rate=40r/s; > > Domain nginx conf: > > limit_conn alpha 5; > limit_req zone=delta burst=80 nodelay; > > > So a user can create only 5 connections per ip and can have 40 > requests with a burst up to 80 connections. > > Now i want to exclude Cloudflare ip's from this connection limits. If memory serves right you can use location / { if ($remote_addr != 1.2.3.4) { error_page 404 = @no_whitelist; return 404; } # whitelisted } location @no_whitelist { limit_conn alpha 5; } From nginx-forum at nginx.us Sun Dec 21 12:17:18 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Sun, 21 Dec 2014 07:17:18 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <5496A5D1.5020709@mfriebe.de> References: <5496A5D1.5020709@mfriebe.de> Message-ID: <4343442367b651cec376c7fbddbf3b88.NginxMailingListEnglish@forum.nginx.org> @mex Yes it seems that it will help me :) But in that code he is not using limit_conn_zone at all .... My code: http{ limit_req_zone $binary_remote_addr zone=delta:8m rate=30r/s; limit_conn_zone $binary_remote_addr zone=alpha:8m; New code: http{ limit_req_zone $binary_remote_addr zone=notabot:5m rate=200r/s; limit_req zone=notabot burst=200 nodelay; What is the difference? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255706#msg-255706 From t.spencer at spacesharing.eu Sun Dec 21 13:10:18 2014 From: t.spencer at spacesharing.eu (Tim Spencer | Spacesharing GmbH) Date: Sun, 21 Dec 2014 14:10:18 +0100 Subject: nginx proxy_pass configuration to virtualhost In-Reply-To: References: <5495F9E8.6060200@spacesharing.eu> Message-ID: <5496C6BA.7050309@spacesharing.eu> On 21/12/14 11:31, mex wrote: > Hi tim, > >> Hi, >> >> I would like to redirect to an external URL which is hosted as a >> apache >> virtual host. > redirect or proxy_pass? correct wording is important here > preferred proxy_pass >> nginx resolves the host of the url which obviously does little to >> resolve to the correct web root on the server. > i dont understand what you mean here. if nginx doesnt resolve a dns-name > how should it know how to deal with it? dns-names are for humans. yeah the issue does not seem to be on my side, I have no issue with nginx resolving the ip, but it seems that the hosting company I would like to proxy_pass to has the virtual hosts configured on the dns names > >> | server { >> server_name localhost; >> location / { >> proxy_set_header Host $host; >> proxy_pass http://www.urlforvirtualhost.com; >> } >> } >> | >> >> The question is how do I allow proxy_pass without nginx resolving the >> ip-address of the host? > but why dont you want nginx to resolve the IP? > > > i'm not sure this will work as expected, except you put in the ip. > but then the apache on the other side should be configred > with the ip in the virtualhost generally I concur, but how would they be able to host multiple sites on the same public ip but differentiate on dns names (this is also achievable on nginx) with the default directing to nowhere. 
this would probably mean proxy_pass to a host configured to serve multiple pages as follows: http { index index.html; server { server_name www.domain1.com; access_log logs/domain1.access.log main; root /var/www/domain1.com/htdocs; } server { server_name www.domain2.com; access_log logs/domain2.access.log main; root /var/www/domain2.com/htdocs; } } would not work. I presumed there was an option to force nginx to not resolve, but I can see now this is probably not the case and I would need to change to redirect instead of proxy_pass. > > cheers, mex > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255696,255704#msg-255704 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sun Dec 21 14:39:54 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Dec 2014 14:39:54 +0000 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <983e911639d31a9f5ee461de0253b776.NginxMailingListEnglish@forum.nginx.org> References: <983e911639d31a9f5ee461de0253b776.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141221143954.GP15670@daoine.org> On Sat, Dec 20, 2014 at 06:18:03PM -0500, ASTRAPI wrote: Hi there, > limit_conn_zone $binary_remote_addr zone=alpha:8m; > limit_req_zone $binary_remote_addr zone=delta:8m rate=40r/s; > limit_conn alpha 5; > limit_req zone=delta burst=80 nodelay; > Now i want to exclude Cloudflare ip's from this connection limits. Instead of using $binary_remote_addr, use a $new_variable which is empty for Cloudflare IPs and equal to $binary_remote_addr for other IPs. Ideally, something like geo $new_variable { default $binary_remote_addr; # things that match cloudflare 10.0.0.0/8 ""; } except that "geo" does not expand $variables. 
So instead, use "geo" to set a flag, and then use "map" to set the value you want: geo $use_new_variable { default 1; # things that match cloudflare 10.0.0.0/8 0; } map $use_new_variable $new_variable { default $binary_remote_addr; 0 ""; } (Other possibilities exist.) f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Dec 21 15:11:33 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Sun, 21 Dec 2014 10:11:33 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <20141221143954.GP15670@daoine.org> References: <20141221143954.GP15670@daoine.org> Message-ID: Thanks for your replies but i am confused now :( Can anyone please try to post: What i must add to main nginx config at: http { ? and what to add to the nginx domain config file at: server { ? Target is to have a connection limit of 20 per ip, a request limit of 40 per ip, and request bursts up to 80 ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255710#msg-255710 From reallfqq-nginx at yahoo.fr Sun Dec 21 16:23:33 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 21 Dec 2014 17:23:33 +0100 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: References: <20141221143954.GP15670@daoine.org> Message-ID: I am highly suspicious about the content found at the address pointed to by the link mex provided. Unless I am mistaken, the variable filled by the geo module is not used anywhere else... thus I guess the limiting works OK, but the 'white-list' feature probably does not work as expected/advertised. TL;DR: it probably does not work. ========== Francis gave you an answer which works. I will try to explain it in other words, hoping you will understand what to do. The limit_* modules (req and conn) filter requests based on a variable whose content is used as a key. If you use $binary_remote_addr there, nginx will keep a counter for each (non-empty) value of the key and limit each of them. In that case, each unique non-empty value is the binary IP address of a client. Now, you want to exclude clients from that list, so you cannot use it directly. The trick you can use to exclude requests from being limited by the limit_* modules is ensuring that requests that should be unlimited provide an empty value for the variable used by these modules. Since you base your limit_* behavior on IP addresses, you thus need to set an "empty" IP address for whitelisted addresses, so they are unlimited. How to get that filtered list? nginx's map module allows you to fill a variable depending on the value of another, used as a key. The idea there is that if your key says "should not limit" (or, say, 0), the new variable should be empty, while in all other cases the new variable should contain $binary_remote_addr. That gives you the last map Francis provided: map $should_limit $filter { default $binary_remote_addr; 0 ""; } You want to use the $filter variable on your limiter. Now, for each request, you want to fill up this $should_limit variable with 0 for unlimited requests and anything else (say, 1) to limit them. That is where the geo module kicks in: you set the default value of the variable it is working on to 1, and put rules matching the white-listed IP addresses associated with the value 0. Read the answer from Francis in the light of this attempt at explaining it step-by-step. The goal of the first part of his message was to explain why this two-step process is mandatory, due to limitations in the inner workings of the geo directive. Hoping to have cleared things a little... --- *B. R.* On Sun, Dec 21, 2014 at 4:11 PM, ASTRAPI wrote: > Thanks for your replies but i am confused now :( > > Can anyone please try to post: > > What i must add to main nginx config at: > > http { ? > > > and what to add to the nginx domain config file at: > > server { ? 
> > > Target is to have connections limit per ip 20 and requests limits per ip to > 40 and requests burst up to 80 ! > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255697,255710#msg-255710 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Dec 21 18:36:40 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Sun, 21 Dec 2014 13:36:40 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: References: Message-ID: <392dc0d4cdc08ccab4b568e05620e927.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply B.R Not very clear yet as my english is not good at all :( Can you please try to post: What i must add to main nginx config at: http { ? and what to add to the nginx domain config file at: server { ? Cloudflare ip's that i want to exclude: 199.27.128.0/21 173.245.48.0/20 103.21.244.0/22 103.22.200.0/22 103.31.4.0/22 141.101.64.0/18 108.162.192.0/18 190.93.240.0/20 188.114.96.0/20 197.234.240.0/22 198.41.128.0/17 162.158.0.0/15 104.16.0.0/12 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255712#msg-255712 From jhs at mojatatu.com Sun Dec 21 19:40:56 2014 From: jhs at mojatatu.com (Jamal Hadi Salim) Date: Sun, 21 Dec 2014 14:40:56 -0500 Subject: netdev 01 Message-ID: <54972248.60305@mojatatu.com> Sorry for the spam but i wasnt sure who else to contact. I'd like to invite people from nginx community to submit proposals to netdev01. https://www.netdev01.org/participate Tutorials on both developer level and/or user level will be welcome. But also this could be an opportunity to write papers on new ideas or general ideas that have not been documented before on nginx. Sorry for the solicitation again - we want to make the conference not too kernel centric and i have heard good things about nginx. so please submit proposals. 
please CC me if you want to respond as i am not subscribed. cheers, jamal From francis at daoine.org Sun Dec 21 19:43:24 2014 From: francis at daoine.org (Francis Daly) Date: Sun, 21 Dec 2014 19:43:24 +0000 Subject: nginx proxy_pass configuration to virtualhost In-Reply-To: <5495F9E8.6060200@spacesharing.eu> References: <5495F9E8.6060200@spacesharing.eu> Message-ID: <20141221194324.GQ15670@daoine.org> On Sat, Dec 20, 2014 at 11:36:24PM +0100, Tim Spencer | Spacesharing GmbH wrote: Hi there, > I would like to redirect to an external URL which is hosted as a > apache virtual host. There may be subtleties about your configuration that have not been included; but I'm not seeing any obvious reason why just changing one line should not cause it all to work for you... > location / { > proxy_set_header Host $host; Change that to be proxy_set_header Host www.urlforvirtualhost.com; and test again. > proxy_pass http://www.urlforvirtualhost.com; > } > The question is how do I allow proxy_pass without nginx resolving > the ip-address of the host? That question is not very clear to me, I'm afraid. If the above change is insufficient (and it does include assumptions such as: you will know when the IP address of the upstream www.urlforvirtualhost.com changes and will restart nginx then), then it may be helpful for you to describe what you want to happen, in terms of what machine makes what request to what other machine. Good luck with it, f -- Francis Daly francis at daoine.org From chrisstankevitz at gmail.com Mon Dec 22 15:17:27 2014 From: chrisstankevitz at gmail.com (Chris Stankevitz) Date: Mon, 22 Dec 2014 07:17:27 -0800 Subject: "One Time" authentication (+reverse proxy, pam, radius) In-Reply-To: References: Message-ID: On Sat, Dec 20, 2014 at 3:46 PM, Chris Stankevitz wrote: > 2. the reverse-proxy authenticates the user (ideally using RADIUS or PAM) > > 3. 
the authentication is "cached" and not re-submitted for each page visited A followup: I accomplished this using apache 2.4 and mod_auth_xradius Chris From jeroenooms at gmail.com Mon Dec 22 19:59:48 2014 From: jeroenooms at gmail.com (Jeroen Ooms) Date: Mon, 22 Dec 2014 11:59:48 -0800 Subject: gunzip on debian Message-ID: I would like to use the gunzip module to serve cached, gzipped responses to clients that do not support gzip. I am running an Ubuntu 14.04 server. According to this post [1] the nginx-extras package includes support for gunzip, but when I add the 'gunzip on;' directive to my config I get an error that the directive is unknown. Is the gunzip module available from another nginx deb package on Debian/Ubuntu? My stack needs to work with the standard builds of nginx that are included with Debian, so I can't build from source. If not, what is the best alternative to deal with Vary: Accept-Encoding in nginx? My back-end only uses two variations: either "Content-Encoding: gzip" or no Content-Encoding at all. Is there a 'smart' value I can add to my proxy_cache_key so that it will normalize all of the possible variations of 'Accept-Encoding' containing 'gzip' ? [1] http://www.dotdeb.org/2013/04/28/nginx-1-4-0/ From semenukha at gmail.com Mon Dec 22 20:43:44 2014 From: semenukha at gmail.com (Styopa Semenukha) Date: Mon, 22 Dec 2014 15:43:44 -0500 Subject: gunzip on debian In-Reply-To: References: Message-ID: <2976949.sQfFVmbDAR@tornado> On Monday, December 22, 2014 11:59:48 AM Jeroen Ooms wrote: > I would like to use the gunzip module to serve cached, gzipped > responses to clients that do not support gzip. I am running an Ubuntu > 14.04 server. According to this post [1] the nginx-extras package > includes support for gunzip, but when I add the 'gunzip on;' directive > to my config I get an error that the directive is unknown. > > Is the gunzip module available from another nginx deb package > on Debian/Ubuntu? 
My stack needs to work with the standard builds of > nginx that are included with Debian, so I can't build from source. > > If not, what is the best alternative to deal with Vary: > Accept-Encoding in nginx? My back-end only uses two variations: either > "Content-Encoding: gzip" or no Content-Encoding at all. Is there a > 'smart' value I can add to my proxy_cache_key so that it will > normalize all of the possible variations of 'Accept-Encoding' > containing 'gzip' ? > > > > [1] http://www.dotdeb.org/2013/04/28/nginx-1-4-0/ Looks like Gunzip support is not enabled for any standard Debian package: $ apt-cache search gunzip $ cat /etc/debian_version jessie/sid From your question I understand that you want to unify caching for requests with and without Gzip support. AFAIK, Gunzip is not relevant for this task. This might be useful: http://forum.nginx.org/read.php?2,222382,222436 -- Best regards, Styopa Semenukha. From nginx-forum at nginx.us Tue Dec 23 10:58:00 2014 From: nginx-forum at nginx.us (magal) Date: Tue, 23 Dec 2014 05:58:00 -0500 Subject: Serve different pages for different IP Message-ID: <09a34d2cd49434126902ecefd3d76f61.NginxMailingListEnglish@forum.nginx.org> I have one domain and I want to serve different pages based on the Client IP. Nginx refuse different server with same server_name and both location must be / . Can you help me? Tnk Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255744,255744#msg-255744 From vbart at nginx.com Tue Dec 23 12:09:22 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 23 Dec 2014 15:09:22 +0300 Subject: gunzip on debian In-Reply-To: References: Message-ID: <4303827.XzdTjEIesD@vbart-workstation> On Monday 22 December 2014 11:59:48 Jeroen Ooms wrote: > I would like to use the gunzip module to serve cached, gzipped > responses to clients that do not support gzip. I am running an Ubuntu > 14.04 server. 
According to this post [1] the nginx-extras package > includes support for gunzip, but when I add the 'gunzip on;' directive > to my config I get an error that the directive is unknown. > > Is the the gunzip module is available from another nginx deb package > on Debian/Ubuntu? My stack needs to work with the standard builds of > nginx that are included with Debian, so I can't build from source. > [..] You can use the official nginx repository: http://nginx.org/en/linux_packages.html wbr, Valentin V. Bartenev From francis at daoine.org Tue Dec 23 13:17:30 2014 From: francis at daoine.org (Francis Daly) Date: Tue, 23 Dec 2014 13:17:30 +0000 Subject: Serve different pages for different IP In-Reply-To: <09a34d2cd49434126902ecefd3d76f61.NginxMailingListEnglish@forum.nginx.org> References: <09a34d2cd49434126902ecefd3d76f61.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141223131730.GR15670@daoine.org> On Tue, Dec 23, 2014 at 05:58:00AM -0500, magal wrote: Hi there, > I have one domain and I want to serve different pages based on the Client > IP. > > Nginx refuse different server with same server_name and both location must > be / . > > Can you help me? Set a variable based on the client IP ($remote_addr), using "geo" or "map" or perhaps "if/set". Then, depending on what exactly you want to do, perhaps set "root" to that variable value so that different clients see different parts of the filesystem. When I have had to do this before, I only handled the hard-coded first request the clients made specially, and had the web server issue a redirect to the client-specific url -- so any client could access any other client content if it asked for it directly, but the default was that each client would get its own content after one extra http request. 
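[Editor's note] Francis's per-client-IP idea above can be sketched as a config fragment. This is a hedged illustration, not his exact setup: the networks and paths below are hypothetical placeholders.

```nginx
# Pick a document root per client network; unlisted clients fall
# back to the default tree. Addresses and paths are hypothetical.
geo $client_root {
    default          /var/www/default;
    192.0.2.0/24     /var/www/client-a;
    198.51.100.0/24  /var/www/client-b;
}

server {
    listen      80;
    server_name example.com;

    location / {
        # One server_name and one location for everyone; only the
        # filesystem root differs depending on the client IP.
        root $client_root;
    }
}
```

Note that "geo" with literal string values is fine here; it is only $variable expansion inside geo values that nginx does not support, which is why the limit_req whitelist thread earlier needed the geo-plus-map two-step.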
f -- Francis Daly francis at daoine.org From r_o_l_a_n_d at hotmail.com Tue Dec 23 15:10:09 2014 From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd) Date: Tue, 23 Dec 2014 17:10:09 +0200 Subject: Ignore content-type while forwarding to backend proxy Message-ID: one of my apps, specifically those on IOS forwarded a content-type header..this is causing my backend server to mess up its security signature check.... i need to be able to ignore content-type headers... assigning the header as empty "" does not work, (proxy_set_header Accept-Encoding "";)and also tried mapping the content-type and using the variable... though that did not work either as backend is very strict in that regard... i need to be able to ignore it completely... i tried adding proxy_hide_header Content-Type; and tested,it's still sending content-type while directing traffic to backend.. any advice? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Dec 23 15:39:10 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Dec 2014 18:39:10 +0300 Subject: nginx-1.7.9 Message-ID: <20141223153910.GY79300@mdounin.ru> Changes with nginx 1.7.9 23 Dec 2014 *) Feature: variables support in the "proxy_cache", "fastcgi_cache", "scgi_cache", and "uwsgi_cache" directives. *) Feature: variables support in the "expires" directive. *) Feature: loading of secret keys from hardware tokens with OpenSSL engines. Thanks to Dmitrii Pichulin. *) Feature: the "autoindex_format" directive. *) Bugfix: cache revalidation is now only used for responses with 200 and 206 status codes. Thanks to Piotr Sikora. *) Bugfix: the "TE" client request header line was passed to backends while proxying. *) Bugfix: the "proxy_pass", "fastcgi_pass", "scgi_pass", and "uwsgi_pass" directives might not work correctly inside the "if" and "limit_except" blocks. 
*) Bugfix: the "proxy_store" directive with the "on" parameter was ignored if the "proxy_store" directive with an explicitly specified file path was used on a previous level. *) Bugfix: nginx could not be built with BoringSSL. Thanks to Lukas Tribus. -- Maxim Dounin http://nginx.org/en/donation.html From mdounin at mdounin.ru Tue Dec 23 15:42:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 23 Dec 2014 18:42:50 +0300 Subject: Ignore content-type while forwarding to backend proxy In-Reply-To: References: Message-ID: <20141223154250.GC79300@mdounin.ru> Hello! On Tue, Dec 23, 2014 at 05:10:09PM +0200, Roland RoLaNd wrote: > one of my apps, specifically those on IOS forwarded a > content-type header..this is causing my backend server to mess > up its security signature check.... > i need to be able to ignore content-type headers... > assigning the header as empty "" does not work, > (proxy_set_header Accept-Encoding "";)and also tried mapping the > content-type and using the variable... though that did not work > either as backend is very strict in that regard... > > i need to be able to ignore it completely... > i tried adding proxy_hide_header Content-Type; > > and tested,it's still sending content-type while directing > traffic to backend.. > > any advice? To control headers sent by nginx to backends you have to use the "proxy_set_header" directive. 
If you want to hide Content-Type, use this: proxy_set_header Content-Type ""; More information can be found in the documentation here: http://nginx.org/r/proxy_set_header -- Maxim Dounin http://nginx.org/ From piotr.sikora at frickle.com Tue Dec 23 18:50:28 2014 From: piotr.sikora at frickle.com (Piotr Sikora) Date: Tue, 23 Dec 2014 19:50:28 +0100 Subject: [ANNOUNCE] ngx_cache_purge-2.3 Message-ID: Version 2.3 is now available at: http://labs.frickle.com/nginx_ngx_cache_purge/ GitHub repository is available at: https://github.com/FRiCKLE/ngx_cache_purge/ Changes: 2014-12-23 VERSION 2.3 * Fix compatibility with nginx-1.7.9+. Best regards, Piotr Sikora From nginx-forum at nginx.us Tue Dec 23 19:13:35 2014 From: nginx-forum at nginx.us (Guest13778) Date: Tue, 23 Dec 2014 14:13:35 -0500 Subject: Big file upload through proxy problem Message-ID: <0b6872511c467a3ffb08bed5fe78c7a1.NginxMailingListEnglish@forum.nginx.org> Hey! I have this interesting issue.. I have a setup that looks like this Nginx < - >Apache.. and when I try to upload files over proxy (nginx) then it gives me: 413 Request Entity Too Large (by nginx). Here is my config: http://pastebin.com/9t02sPtm Also Apache and PHP both have 200mb upload limit (upload works perfect there) It is able to upload tiny files.. like max up to 6kb I think.. but everything bigger than that will get instantly that error. Any ideas what might be wrong? Thanks! 
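[Editor's note] For readers of the 413 thread: that status usually comes from nginx's own client_max_body_size limit, which defaults to 1m, independently of whatever Apache and PHP allow. The poster's actual config is only on pastebin, so this is a sketch of the usual fix with illustrative names and ports, not a confirmed diagnosis.

```nginx
server {
    listen      80;
    server_name example.com;              # hypothetical

    # Raise nginx's request-body cap to match the backend's 200mb
    # limit; with the default 1m, larger uploads are rejected with
    # 413 before they ever reach Apache.
    client_max_body_size 200m;

    location / {
        proxy_pass http://127.0.0.1:8080; # assumed Apache backend
    }
}
```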
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255760,255760#msg-255760 From nginx-forum at nginx.us Tue Dec 23 19:17:38 2014 From: nginx-forum at nginx.us (Guest13778) Date: Tue, 23 Dec 2014 14:17:38 -0500 Subject: Big file upload through proxy problem In-Reply-To: <0b6872511c467a3ffb08bed5fe78c7a1.NginxMailingListEnglish@forum.nginx.org> References: <0b6872511c467a3ffb08bed5fe78c7a1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2b96761718a4f39ed3d551df4f1c0073.NginxMailingListEnglish@forum.nginx.org> Sorry, I forgot to post an example: # curl -v -F file=@test.tar.gz -T http:/mydomain.com * About to connect() to mydomain.com port 80 (#0) * Trying 192.168.15.1... connected * Connected to mydomain.com (192.168.15.1) port 80 (#0) > POST / HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: mydomain.com > Accept: */* > Content-Length: 3337675 > Expect: 100-continue > Content-Type: multipart/form-data; boundary=----------------------------a12017330dd6 > < HTTP/1.1 100 Continue < HTTP/1.1 413 Request Entity Too Large < Server: nginx 1.7.8 < Date: Tue, 23 Dec 2014 19:14:32 GMT < Content-Type: text/html < Content-Length: 204 < Connection: keep-alive < 413 Request Entity Too Large

413 Request Entity Too Large
nginx 1.7.8
* Connection #0 to host mydomain.com left intact
* Closing connection #0

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255760,255761#msg-255761

From r_o_l_a_n_d at hotmail.com Wed Dec 24 06:53:13 2014
From: r_o_l_a_n_d at hotmail.com (Roland RoLaNd)
Date: Wed, 24 Dec 2014 08:53:13 +0200
Subject: Ignore content-type while forwarding to backend proxy
In-Reply-To: <20141223154250.GC79300@mdounin.ru>
References: , <20141223154250.GC79300@mdounin.ru>
Message-ID:

Thank you for the advice! I tried doing that before, though it did not work, so I thought there could be another solution. In any case, I tried it again, set it right before the proxy_pass directive, and it is still passing the type through. May I show you my config, to see what might be overriding that?

> Date: Tue, 23 Dec 2014 18:42:50 +0300
> From: mdounin at mdounin.ru
> To: nginx at nginx.org
> Subject: Re: Ignore content-type while forwarding to backend proxy
>
> Hello!
>
> On Tue, Dec 23, 2014 at 05:10:09PM +0200, Roland RoLaNd wrote:
>
> > one of my apps, specifically those on IOS forwarded a
> > content-type header..this is causing my backend server to mess
> > up its security signature check....
> > i need to be able to ignore content-type headers...
> > assigning the header as empty "" does not work,
> > (proxy_set_header Accept-Encoding "";) and also tried mapping the
> > content-type and using the variable... though that did not work
> > either as backend is very strict in that regard...
> >
> > i need to be able to ignore it completely...
> > i tried adding proxy_hide_header Content-Type;
> >
> > and tested, it's still sending content-type while directing
> > traffic to backend..
> >
> > any advice?
>
> To control headers sent by nginx to backends you have to use the
> "proxy_set_header" directive.
If you want to hide Content-Type, use this:
>
> proxy_set_header Content-Type "";
>
> More information can be found in the documentation here:
>
> http://nginx.org/r/proxy_set_header
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org Wed Dec 24 09:42:45 2014
From: francis at daoine.org (Francis Daly)
Date: Wed, 24 Dec 2014 09:42:45 +0000
Subject: Ignore content-type while forwarding to backend proxy
In-Reply-To:
References: <20141223154250.GC79300@mdounin.ru>
Message-ID: <20141224094245.GS15670@daoine.org>

On Wed, Dec 24, 2014 at 08:53:13AM +0200, Roland RoLaNd wrote:

Hi there,

> I tried doing that before, though it did not work, so I thought there could be another solution. In any case, I tried it again, set it right before the proxy_pass directive, and it is still passing the type through. May I show you my config, to see what might be overriding that?

The following config snippet does for me what you say that you want:

server {
    proxy_set_header Content-Type "";
    location /app {
        proxy_pass http://127.0.0.1:10080;
    }
}

The response from port 10080 shows me that a Content-Type header was received by it when I comment out the proxy_set_header line, and was not received when I leave it in.

I suspect that it will be useful if you can describe what exactly you want nginx to send to upstream. Be specific about "http request header" and "http request body"; and for best chance of help, make it easy for someone else to reproduce the problem that you are reporting.

proxy_set_header only modifies the http request header sent. It does not modify any part of the http request body. In the case of (for example) multipart/form-data, the http request body can contain its own header-like data, including Content-Disposition: and Content-Type:.
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Dec 24 11:48:00 2014 From: nginx-forum at nginx.us (khav) Date: Wed, 24 Dec 2014 06:48:00 -0500 Subject: OCSP_check_validity() status expired Message-ID: I am seeing a lot of these errors in my /var/log/nginx/error.log [error] 11405#0: OCSP_check_validity() failed (SSL: error:2707307D:OCSP routines:OCSP_check_validity:status expired) while requesting certificate status, responder: ocsp2.globalsign.com How can i fix that Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255769,255769#msg-255769 From nginx-forum at nginx.us Wed Dec 24 13:05:38 2014 From: nginx-forum at nginx.us (ionsec) Date: Wed, 24 Dec 2014 08:05:38 -0500 Subject: OCSP_check_validity() status expired In-Reply-To: References: Message-ID: hi khav, try adding the following lines to your nginx website configuration file: ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate /etc/nginx/my_ssl_certs/ca-bundle.pem; # # note the PEM encoded X509 ca-bundle file should contain the ssl certificate # chain bundle (i.e. domain and intermediate CA certs) # resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 5s; wbr, ionsec Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255769,255770#msg-255770 From mdounin at mdounin.ru Wed Dec 24 13:05:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Dec 2014 16:05:48 +0300 Subject: OCSP_check_validity() status expired In-Reply-To: References: Message-ID: <20141224130548.GF79300@mdounin.ru> Hello! On Wed, Dec 24, 2014 at 06:48:00AM -0500, khav wrote: > I am seeing a lot of these errors in my /var/log/nginx/error.log > > [error] 11405#0: OCSP_check_validity() failed (SSL: error:2707307D:OCSP > routines:OCSP_check_validity:status expired) while requesting certificate > status, responder: ocsp2.globalsign.com > > How can i fix that The OCSP response returned by your CA is too old. 
Most likely, the problem is that time on your server is set incorrectly. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed Dec 24 13:53:32 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Dec 2014 16:53:32 +0300 Subject: Big file upload through proxy problem In-Reply-To: <2b96761718a4f39ed3d551df4f1c0073.NginxMailingListEnglish@forum.nginx.org> References: <0b6872511c467a3ffb08bed5fe78c7a1.NginxMailingListEnglish@forum.nginx.org> <2b96761718a4f39ed3d551df4f1c0073.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141224135332.GH79300@mdounin.ru> Hello! On Tue, Dec 23, 2014 at 02:17:38PM -0500, Guest13778 wrote: > Sorry, I forgot to post an example: > > # curl -v -F file=@test.tar.gz -T http:/mydomain.com > * About to connect() to mydomain.com port 80 (#0) > * Trying 192.168.15.1... connected > * Connected to mydomain.com (192.168.15.1) port 80 (#0) > > POST / HTTP/1.1 > > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 > NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > > Host: mydomain.com > > Accept: */* > > Content-Length: 3337675 > > Expect: 100-continue > > Content-Type: multipart/form-data; > boundary=----------------------------a12017330dd6 > > > < HTTP/1.1 100 Continue > < HTTP/1.1 413 Request Entity Too Large The "100 Continue" response before the "413 Request Entity Too Large" suggests that something non-trivial happens in your setup - normally nginx will just return 413, without useless "100 Continue" before it. This may indicate, for example, that double proxying happens, and the error about too large body is returned by second nginx. Try looking into nginx error log, it should have additional information (in particular, it will indicate server block where the error was generated). Note though, that currently in your config logging level is set to "crit", i.e., logging is effectively switched off. 
You'll have to set some reasonable logging level to see what nginx has to say - at least "error" in this particular case. If still in doubt, a debugging log can be used to find out low-level details, see http://wiki.nginx.org/Debugging. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 24 14:01:09 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Wed, 24 Dec 2014 09:01:09 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <392dc0d4cdc08ccab4b568e05620e927.NginxMailingListEnglish@forum.nginx.org> References: <392dc0d4cdc08ccab4b568e05620e927.NginxMailingListEnglish@forum.nginx.org> Message-ID: Anyone please? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255773#msg-255773 From mdounin at mdounin.ru Wed Dec 24 14:32:39 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 24 Dec 2014 17:32:39 +0300 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: References: <392dc0d4cdc08ccab4b568e05620e927.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141224143238.GI79300@mdounin.ru> Hello! On Wed, Dec 24, 2014 at 09:01:09AM -0500, ASTRAPI wrote: > Anyone please? An example of how to whitelist addresses from limit_req can be found in the mailing list archives, for example here: http://mailman.nginx.org/pipermail/nginx/2012-July/034790.html Documentation on directives used can be found here: http://nginx.org/en/docs/http/ngx_http_geo_module.html http://nginx.org/en/docs/http/ngx_http_map_module.html http://nginx.org/en/docs/http/ngx_http_limit_req_module.html -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Dec 24 15:16:28 2014 From: nginx-forum at nginx.us (magal) Date: Wed, 24 Dec 2014 10:16:28 -0500 Subject: Serve different pages for different IP In-Reply-To: <20141223131730.GR15670@daoine.org> References: <20141223131730.GR15670@daoine.org> Message-ID: <6a61e9863204bfeb8be9d2274feb6bf2.NginxMailingListEnglish@forum.nginx.org> Thank you. My configurations were: 1. 
root /path_to_root1; if ($remote_addr = xx.xx.xx.xx) { set $document_root /path_to root2; } 2. map $remote_addr $document_root { default /path_to_root1; xx.xx.xx.xx /path_to root2; } 3. geo $document_root { default /path_to_root1; xx.xx.xx.xx /path_to_root2; } I tryed the three ways you suggested but nginx always answered: nginx: [emerg] the duplicate "document_root" variable. My configuration would be to have one document_root for a specific IP and another for the rest of the world. Did i make any mistake in my configuration? thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255744,255775#msg-255775 From kworthington at gmail.com Wed Dec 24 15:32:15 2014 From: kworthington at gmail.com (Kevin Worthington) Date: Wed, 24 Dec 2014 10:32:15 -0500 Subject: [nginx-announce] nginx-1.7.9 In-Reply-To: <20141223153925.GZ79300@mdounin.ru> References: <20141223153925.GZ79300@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.9 for Windows http://goo.gl/DsVDe5 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Dec 23, 2014 at 10:39 AM, Maxim Dounin wrote: > Changes with nginx 1.7.9 23 Dec > 2014 > > *) Feature: variables support in the "proxy_cache", "fastcgi_cache", > "scgi_cache", and "uwsgi_cache" directives. > > *) Feature: variables support in the "expires" directive. > > *) Feature: loading of secret keys from hardware tokens with OpenSSL > engines. > Thanks to Dmitrii Pichulin. > > *) Feature: the "autoindex_format" directive. 
> > *) Bugfix: cache revalidation is now only used for responses with 200 > and 206 status codes. > Thanks to Piotr Sikora. > > *) Bugfix: the "TE" client request header line was passed to backends > while proxying. > > *) Bugfix: the "proxy_pass", "fastcgi_pass", "scgi_pass", and > "uwsgi_pass" directives might not work correctly inside the "if" and > "limit_except" blocks. > > *) Bugfix: the "proxy_store" directive with the "on" parameter was > ignored if the "proxy_store" directive with an explicitly specified > file path was used on a previous level. > > *) Bugfix: nginx could not be built with BoringSSL. > Thanks to Lukas Tribus. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Dec 24 15:47:14 2014 From: francis at daoine.org (Francis Daly) Date: Wed, 24 Dec 2014 15:47:14 +0000 Subject: Serve different pages for different IP In-Reply-To: <6a61e9863204bfeb8be9d2274feb6bf2.NginxMailingListEnglish@forum.nginx.org> References: <20141223131730.GR15670@daoine.org> <6a61e9863204bfeb8be9d2274feb6bf2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141224154714.GT15670@daoine.org> On Wed, Dec 24, 2014 at 10:16:28AM -0500, magal wrote: Hi there, > My configuration would be to have one document_root for a specific IP and > another for the rest of the world. In each case, do not set $document_root. Instead, set $my_root_var (for example). 
Then separately, do root $my_root_var; f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Dec 24 19:36:50 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Wed, 24 Dec 2014 14:36:50 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <20141224143238.GI79300@mdounin.ru> References: <20141224143238.GI79300@mdounin.ru> Message-ID: <3dcb93230253e3a6eef53ac7d21b4cac.NginxMailingListEnglish@forum.nginx.org> Thanks for your reply Maxim Dounin So something like this ? : Main nginx conf: http { geo $limited { default 1; 192.168.45.56/32 0; 199.27.128.0/21 0; 173.245.48.0/20 0; 103.21.244.0/22 0; 103.22.200.0/22 0; 103.31.4.0/22 0; 141.101.64.0/18 0; 108.162.192.0/18 0; 190.93.240.0/20 0; 188.114.96.0/20 0; 197.234.240.0/22 0; 198.41.128.0/17 0; 162.158.0.0/15 0; 104.16.0.0/12 0; } map $limited $limit { 1 $binary_remote_addr; 0 ""; } And this on the domain config? : server { limit_req_zone $limit zone=foo:1m rate=10r/m; limit_req zone=foo burst=5; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255785#msg-255785 From nginx-forum at nginx.us Thu Dec 25 13:02:44 2014 From: nginx-forum at nginx.us (ThomasLohner) Date: Thu, 25 Dec 2014 08:02:44 -0500 Subject: Multiple nginx instances share same proxy cache storage In-Reply-To: <20140811101831.GI1849@mdounin.ru> References: <20140811101831.GI1849@mdounin.ru> Message-ID: <1cd3b212772cf00adb5f37e3d01c9ca0.NginxMailingListEnglish@forum.nginx.org> Hi, > Hello! > > On Sun, Aug 10, 2014 at 05:24:04PM -0700, Robert Paprocki wrote: > > > Any options then to support an architecture with multiple nginx > > nodes sharing or distributing a proxy cache between them? i.e., > > a HAProxy machine load balances to several nginx nodes (for > > failover reasons), and each of these nodes handles http proxy + > > proxy cache for a remote origin? 
If nginx handles cache info in
> > memory, it seems that multiple instances could not be used to
> > maintain the same cache info (something like rsyncing the cache
> > contents between nodes thus would not work); are there any
> > recommendations to achieve such a solution?
>
> Distinct caches will be best from failover point of view.
>
> To maximize cache efficiency, you may consider using URI-based
> hashing to distribute requests between cache nodes.
>
> --
> Maxim Dounin
> http://nginx.org/

I wonder if it would hurt to make nginx load cache metadata from file as a fallback only if there's no entry in the keys_zone. If this were a param for proxy_cache_path, we could build a distributed cache cluster by simply copying cache files to other nodes. Making this a param would not hurt performance if you don't want this behavior. The functionality is already there, because nginx loads metadata from files on startup.

Is this a valid feature request, or does no one care about clustering nginx caches?

--
Thomas

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252275,255788#msg-255788

From mdounin at mdounin.ru Thu Dec 25 13:12:51 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 25 Dec 2014 16:12:51 +0300
Subject: Exclude ip's from Nginx limit_req zone
In-Reply-To: <3dcb93230253e3a6eef53ac7d21b4cac.NginxMailingListEnglish@forum.nginx.org>
References: <20141224143238.GI79300@mdounin.ru> <3dcb93230253e3a6eef53ac7d21b4cac.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20141225131251.GL79300@mdounin.ru>

Hello!

On Wed, Dec 24, 2014 at 02:36:50PM -0500, ASTRAPI wrote:

> Thanks for your reply Maxim Dounin
>
> So something like this ?
: > > Main nginx conf: > > http { > > geo $limited { > default 1; > 192.168.45.56/32 0; > 199.27.128.0/21 0; > 173.245.48.0/20 0; > 103.21.244.0/22 0; > 103.22.200.0/22 0; > 103.31.4.0/22 0; > 141.101.64.0/18 0; > 108.162.192.0/18 0; > 190.93.240.0/20 0; > 188.114.96.0/20 0; > 197.234.240.0/22 0; > 198.41.128.0/17 0; > 162.158.0.0/15 0; > 104.16.0.0/12 0; > } > > map $limited $limit { > 1 $binary_remote_addr; > 0 ""; > } > > > And this on the domain config? : > > server { > > limit_req_zone $limit zone=foo:1m rate=10r/m; > limit_req zone=foo burst=5; The limit_req_zone can be used only at http{} level, so you'll have to move it to http{} block, see here for docs: http://nginx.org/r/limit_req_zone The limit_req directive can be used at http, server, or location level. It's up to your specific setup requirements where to use it. In many cases it's good idea to protect only expensive resources like proxying to backends. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu Dec 25 18:32:47 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Thu, 25 Dec 2014 13:32:47 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <20141225131251.GL79300@mdounin.ru> References: <20141225131251.GL79300@mdounin.ru> Message-ID: <23f9261997f37352cc0152ed1cfd3bd6.NginxMailingListEnglish@forum.nginx.org> My english is not so good and sometimes is hard for me sorry :( So as the goal is to limit globaly the maximum connections from one ip to 15 and to have 40 requests per ip and burst up to 80 requests per second it should be like this? 
Main nginx conf: http { limit_req_zone $limit zone=foo:1m rate=40r/s; geo $limited { default 1; 192.168.45.56/32 0; 199.27.128.0/21 0; 173.245.48.0/20 0; 103.21.244.0/22 0; 103.22.200.0/22 0; 103.31.4.0/22 0; 141.101.64.0/18 0; 108.162.192.0/18 0; 190.93.240.0/20 0; 188.114.96.0/20 0; 197.234.240.0/22 0; 198.41.128.0/17 0; 162.158.0.0/15 0; 104.16.0.0/12 0; } map $limited $limit { 1 $binary_remote_addr; 0 ""; } And on domain conf: server { limit_req zone=foo burst=80; If not can you please post a configuration that will do that? Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255794#msg-255794 From nginx-forum at nginx.us Fri Dec 26 14:56:58 2014 From: nginx-forum at nginx.us (khav) Date: Fri, 26 Dec 2014 09:56:58 -0500 Subject: OCSP_check_validity() status expired In-Reply-To: References: Message-ID: <8ab0d8e226389427fa449b77811ddc4b.NginxMailingListEnglish@forum.nginx.org> @ionsec These lines are already in my config but i add the "valid=300s" to the resolver line @Maxim how can i fix it Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255769,255802#msg-255802 From nginx-forum at nginx.us Fri Dec 26 15:04:27 2014 From: nginx-forum at nginx.us (khav) Date: Fri, 26 Dec 2014 10:04:27 -0500 Subject: OCSP_check_validity() status expired In-Reply-To: <8ab0d8e226389427fa449b77811ddc4b.NginxMailingListEnglish@forum.nginx.org> References: <8ab0d8e226389427fa449b77811ddc4b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4e65347ac885aff268345d6a8c50290b.NginxMailingListEnglish@forum.nginx.org> the "date" command give the following on my server so i think the date is ok (correct me if i am wrong) Fri Dec 26 14:58:27 MST 2014 In php.ini date.timezone = "US/Mountain" [root at sv1 ~]# cat /etc/sysconfig/clock ZONE="US/Mountain" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255769,255803#msg-255803 From mdounin at mdounin.ru Fri Dec 26 17:52:20 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 26 Dec 2014 20:52:20 +0300 
Subject: OCSP_check_validity() status expired In-Reply-To: <4e65347ac885aff268345d6a8c50290b.NginxMailingListEnglish@forum.nginx.org> References: <8ab0d8e226389427fa449b77811ddc4b.NginxMailingListEnglish@forum.nginx.org> <4e65347ac885aff268345d6a8c50290b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141226175220.GU79300@mdounin.ru> Hello! On Fri, Dec 26, 2014 at 10:04:27AM -0500, khav wrote: > the "date" command give the following on my server so i think the date is ok > (correct me if i am wrong) > > Fri Dec 26 14:58:27 MST 2014 > > In php.ini > date.timezone = "US/Mountain" > > [root at sv1 ~]# cat /etc/sysconfig/clock > ZONE="US/Mountain" Doesn't looks correct for me, current time is 17:20 UTC: $ date Fri Dec 26 17:20:30 UTC 2014 and this corresponds to 10:20 MST: $ env TZ="US/Mountain" date Fri Dec 26 10:20:35 MST 2014 That is, looks like the time on your server is wrong. See your server documentation to find out how to sync time properly. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Sat Dec 27 10:52:32 2014 From: nginx-forum at nginx.us (Claros) Date: Sat, 27 Dec 2014 05:52:32 -0500 Subject: Having multiple Symfony2 apps on same domain Message-ID: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> Hello everybody ! I just switched from Apache2 to Nginx and I met some issues having the same configuration. What I want to do is having multiple Symfony2 apps on the same domain name. Each app will have a subdirectory and a main app will be on the domain name itself. For instance : http://mydomain/ -> main app http://mydomain/subdir1 -> another app http://mydomain/subdir2 -> yet another app One of Symfony2 feature is to have only three php files to be executed, and all the URL are rewritten to those files. You can found basic configuration for Symfony2 at this address if you need more information : http://wiki.nginx.org/Symfony Now after many hours of configuration, with the help of debug logs, I almost did it. 
This is my current configuration : server { listen 80; server_name mydomain; root /server/www/main-app/web; location @rewriteapp { rewrite ^(.*)$ /app.php/$1 last; } location /subdir1/ { # alias /server/www/other-app1/web; set $root "/server/www/other-app1/web"; # try to serve file directly, fallback to app.php try_files $uri @rewriteapp; } location / { index app.php; set $root "/server/www/main-app/web"; # try to serve file directly, fallback to app.php try_files $uri @rewriteapp; } # PROD location ~ ^/app\.php(/|$) { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $root$fastcgi_script_name; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param PATH_INFO $fastcgi_path_info; } } Why did I create a variable "$root" ? Because when I was using the root (or alias) directive in a location block and the variable $document_root, I found out that this variable has as final value (in the location app.php) the first root directive in the server or the default root location. With this configuration, it almost work. The main app works and the subdirectories are correctly sent to their directory. The last problem is that the URI processed by the file app.php also contains the subdirectory in it, so the others apps send 404 for all the URL. I tried to fix that by changing "REQUEST_URI" parameter, but with that the app.php generate wrong URL without the subdirectory. So is their a way to achieve this configuration ? Thanks you ! 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255806,255806#msg-255806 From steve at greengecko.co.nz Sun Dec 28 19:30:56 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 29 Dec 2014 08:30:56 +1300 Subject: Having multiple Symfony2 apps on same domain In-Reply-To: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> References: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1419795056.29156.29.camel@steve-new> On Sat, 2014-12-27 at 05:52 -0500, Claros wrote: > Hello everybody ! > > I just switched from Apache2 to Nginx and I met some issues having the same > configuration. What I want to do is having multiple Symfony2 apps on the > same domain name. Each app will have a subdirectory and a main app will be > on the domain name itself. For instance : > http://mydomain/ -> main app > http://mydomain/subdir1 -> another app > http://mydomain/subdir2 -> yet another app > One of Symfony2 feature is to have only three php files to be executed, and > all the URL are rewritten to those files. You can found basic configuration > for Symfony2 at this address if you need more information : > http://wiki.nginx.org/Symfony > Now after many hours of configuration, with the help of debug logs, I almost > did it. 
This is my current configuration : > > server { > listen 80; > server_name mydomain; > root /server/www/main-app/web; > > location @rewriteapp { > rewrite ^(.*)$ /app.php/$1 last; > } > > location /subdir1/ { > # alias /server/www/other-app1/web; > set $root "/server/www/other-app1/web"; > # try to serve file directly, fallback to app.php > try_files $uri @rewriteapp; > } > > location / { > index app.php; > set $root "/server/www/main-app/web"; > # try to serve file directly, fallback to app.php > try_files $uri @rewriteapp; > } > > # PROD > location ~ ^/app\.php(/|$) { > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_split_path_info ^(.+\.php)(/.*)$; > > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $root$fastcgi_script_name; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_path_info; > } > } > > Why did I create a variable "$root" ? Because when I was using the root (or > alias) directive in a location block and the variable $document_root, I > found out that this variable has as final value (in the location app.php) > the first root directive in the server or the default root location. > With this configuration, it almost work. The main app works and the > subdirectories are correctly sent to their directory. The last problem is > that the URI processed by the file app.php also contains the subdirectory in > it, so the others apps send 404 for all the URL. I tried to fix that by > changing "REQUEST_URI" parameter, but with that the app.php generate wrong > URL without the subdirectory. > > So is their a way to achieve this configuration ? Thanks you ! > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255806,255806#msg-255806 Try using a map to set the $root... 
Steve

--
Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

From naji.demolitionman at gmail.com Sun Dec 28 21:16:31 2014
From: naji.demolitionman at gmail.com (Naji Astier)
Date: Sun, 28 Dec 2014 22:16:31 +0100
Subject: Having multiple Symfony2 apps on same domain
In-Reply-To: <1419795056.29156.29.camel@steve-new>
References: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> <1419795056.29156.29.camel@steve-new>
Message-ID: <54A0732F.2000909@gmail.com>

On 28/12/2014 20:30, Steve Holdoway wrote:
> On Sat, 2014-12-27 at 05:52 -0500, Claros wrote:
>> Hello everybody !
>>
>> I just switched from Apache2 to Nginx and I met some issues having the same
>> configuration. What I want to do is having multiple Symfony2 apps on the
>> same domain name. Each app will have a subdirectory and a main app will be
>> on the domain name itself. For instance :
>> http://mydomain/ -> main app
>> http://mydomain/subdir1 -> another app
>> http://mydomain/subdir2 -> yet another app
>> One of Symfony2 feature is to have only three php files to be executed, and
>> all the URL are rewritten to those files. You can found basic configuration
>> for Symfony2 at this address if you need more information :
>> http://wiki.nginx.org/Symfony
>> Now after many hours of configuration, with the help of debug logs, I almost
>> did it.
This is my current configuration : >> >> server { >> listen 80; >> server_name mydomain; >> root /server/www/main-app/web; >> >> location @rewriteapp { >> rewrite ^(.*)$ /app.php/$1 last; >> } >> >> location /subdir1/ { >> # alias /server/www/other-app1/web; >> set $root "/server/www/other-app1/web"; >> # try to serve file directly, fallback to app.php >> try_files $uri @rewriteapp; >> } >> >> location / { >> index app.php; >> set $root "/server/www/main-app/web"; >> # try to serve file directly, fallback to app.php >> try_files $uri @rewriteapp; >> } >> >> # PROD >> location ~ ^/app\.php(/|$) { >> fastcgi_pass unix:/var/run/php5-fpm.sock; >> fastcgi_split_path_info ^(.+\.php)(/.*)$; >> >> include fastcgi_params; >> fastcgi_param SCRIPT_FILENAME $root$fastcgi_script_name; >> fastcgi_param SCRIPT_NAME $fastcgi_script_name; >> fastcgi_param PATH_INFO $fastcgi_path_info; >> } >> } >> >> Why did I create a variable "$root" ? Because when I was using the root (or >> alias) directive in a location block and the variable $document_root, I >> found out that this variable has as final value (in the location app.php) >> the first root directive in the server or the default root location. >> With this configuration, it almost work. The main app works and the >> subdirectories are correctly sent to their directory. The last problem is >> that the URI processed by the file app.php also contains the subdirectory in >> it, so the others apps send 404 for all the URL. I tried to fix that by >> changing "REQUEST_URI" parameter, but with that the app.php generate wrong >> URL without the subdirectory. >> >> So is their a way to achieve this configuration ? Thanks you ! >> >> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255806,255806#msg-255806 > Try using a map to set the $root... > > Steve > Ok I did not know the map system, it is interesting. But it is only simplifying my configuration, not solving the problem. 
From reallfqq-nginx at yahoo.fr Sun Dec 28 23:03:12 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 29 Dec 2014 00:03:12 +0100
Subject: Having multiple Symfony2 apps on same domain
In-Reply-To: <54A0732F.2000909@gmail.com>
References: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> <1419795056.29156.29.camel@steve-new> <54A0732F.2000909@gmail.com>
Message-ID:

You are using the same named location as the fallback of the try_files directive, although you are dealing with three different paths. Why not use one fallback named location per app location, each rewriting to the correct path?
---
*B. R.*

On Sun, Dec 28, 2014 at 10:16 PM, Naji Astier wrote:

> On 28/12/2014 20:30, Steve Holdoway wrote:
> On Sat, 2014-12-27 at 05:52 -0500, Claros wrote:
>>
>>> Hello everybody !
>>>
>>> I just switched from Apache2 to Nginx and I met some issues having the same
>>> configuration. What I want to do is having multiple Symfony2 apps on the
>>> same domain name. Each app will have a subdirectory and a main app will be
>>> on the domain name itself. For instance :
>>> http://mydomain/ -> main app
>>> http://mydomain/subdir1 -> another app
>>> http://mydomain/subdir2 -> yet another app
>>> One of Symfony2 feature is to have only three php files to be executed, and
>>> all the URL are rewritten to those files. You can found basic configuration
>>> for Symfony2 at this address if you need more information :
>>> http://wiki.nginx.org/Symfony
>>> Now after many hours of configuration, with the help of debug logs, I almost
>>> did it.
This is my current configuration : >>> >>> server { >>> listen 80; >>> server_name mydomain; >>> root /server/www/main-app/web; >>> >>> location @rewriteapp { >>> rewrite ^(.*)$ /app.php/$1 last; >>> } >>> >>> location /subdir1/ { >>> # alias /server/www/other-app1/web; >>> set $root "/server/www/other-app1/web"; >>> # try to serve file directly, fallback to app.php >>> try_files $uri @rewriteapp; >>> } >>> >>> location / { >>> index app.php; >>> set $root "/server/www/main-app/web"; >>> # try to serve file directly, fallback to app.php >>> try_files $uri @rewriteapp; >>> } >>> >>> # PROD >>> location ~ ^/app\.php(/|$) { >>> fastcgi_pass unix:/var/run/php5-fpm.sock; >>> fastcgi_split_path_info ^(.+\.php)(/.*)$; >>> >>> include fastcgi_params; >>> fastcgi_param SCRIPT_FILENAME $root$fastcgi_script_name; >>> fastcgi_param SCRIPT_NAME $fastcgi_script_name; >>> fastcgi_param PATH_INFO $fastcgi_path_info; >>> } >>> } >>> >>> Why did I create a variable "$root" ? Because when I was using the root >>> (or >>> alias) directive in a location block and the variable $document_root, I >>> found out that this variable has as final value (in the location app.php) >>> the first root directive in the server or the default root location. >>> With this configuration, it almost work. The main app works and the >>> subdirectories are correctly sent to their directory. The last problem is >>> that the URI processed by the file app.php also contains the >>> subdirectory in >>> it, so the others apps send 404 for all the URL. I tried to fix that by >>> changing "REQUEST_URI" parameter, but with that the app.php generate >>> wrong >>> URL without the subdirectory. >>> >>> So is their a way to achieve this configuration ? Thanks you ! >>> >>> Posted at Nginx Forum: http://forum.nginx.org/read. >>> php?2,255806,255806#msg-255806 >>> >> Try using a map to set the $root... >> >> Steve >> >> Ok I did not know the map system, it is interesting. 
But it is only > simplifying my configuration, not solving the problem. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Dec 29 05:25:01 2014 From: nginx-forum at nginx.us (khav) Date: Mon, 29 Dec 2014 00:25:01 -0500 Subject: OCSP_check_validity() status expired In-Reply-To: References: Message-ID: I confirm that the issue is resolved and it was indeed the time. It was not properly synced. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255769,255813#msg-255813 From nginx-forum at nginx.us Mon Dec 29 07:36:55 2014 From: nginx-forum at nginx.us (shmulik) Date: Mon, 29 Dec 2014 02:36:55 -0500 Subject: How Nginx behaves with "proxy_bind" and DNS resolver with non matching ip versions between bind ip and resolved ip? Message-ID: <8ba4f8d316f0fb59f718e71c8ba5b5f9.NginxMailingListEnglish@forum.nginx.org> Hello, I'm working with the proxy module, and with a dns resolver configured. The traffic i'm using is both ipv4 and ipv6. I'm trying to understand Nginx behavior when using "proxy_bind" directive and when the resolver returns both ipv4 and ipv6 addresses. In particular i'd like to understand what happens when: 1. "proxy_bind" binds to an ipv6 address, and the resolver returns only ipv4 addresses (and the other way around - binding to ipv4, resolving only to ipv6). 2. "proxy_bind" binds to an ipv6 address, the resolver returns both ipv4 and ipv6 addresses, but the first attempted ip address is an ipv4 address (and the other way around - binding to ipv4, first attempted is ipv6). Can you please shed some light on this?
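[Editorial note: to make scenario 1 above concrete, a configuration along these lines would exercise it. All addresses and hostnames here are illustrative placeholders, not taken from the thread.]

```nginx
# Sketch of scenario 1: the outgoing connection is pinned to an IPv6
# source address, while the upstream hostname is resolved at run time
# and may yield only IPv4 records.
resolver 127.0.0.1;                        # placeholder resolver address

server {
    listen 80;

    location / {
        proxy_bind 2001:db8::1;            # IPv6 source (documentation prefix)
        set $backend "backend.example.com";  # placeholder upstream name
        proxy_pass http://$backend;        # variable forces runtime resolution
    }
}
```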
Thanks, Shmulik Bibi Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255814,255814#msg-255814 From nginx-forum at nginx.us Mon Dec 29 07:46:01 2014 From: nginx-forum at nginx.us (weheartwebsites) Date: Mon, 29 Dec 2014 02:46:01 -0500 Subject: 1.7.9 does not compile anymore with libressl Message-ID: <9c81c0def89cea09b9a0201b2249628d.NginxMailingListEnglish@forum.nginx.org> I am trying to compile nginx 1.7.9 with libressl 2.1.2 the same way I did with 1.7.7 but get the following error: src/http/ngx_http_request.c: In function ?ngx_http_ssl_handshake_handler?: src/http/ngx_http_request.c:775:9: error: implicit declaration of function ?SSL_get0_alpn_selected? [-Werror=implicit-function-declaration] cc1: all warnings being treated as errors make[1]: *** [objs/src/http/ngx_http_request.o] Error 1 make[1]: Leaving directory `/usr/src/nginx-1.7.9' make: *** [build] Error 2 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255815,255815#msg-255815 From luky-37 at hotmail.com Mon Dec 29 08:32:45 2014 From: luky-37 at hotmail.com (Lukas Tribus) Date: Mon, 29 Dec 2014 09:32:45 +0100 Subject: 1.7.9 does not compile anymore with libressl In-Reply-To: <9c81c0def89cea09b9a0201b2249628d.NginxMailingListEnglish@forum.nginx.org> References: <9c81c0def89cea09b9a0201b2249628d.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I am trying to compile nginx 1.7.9 with libressl 2.1.2 the same way I did > with 1.7.7 but get the following error: > > src/http/ngx_http_request.c: In function ?ngx_http_ssl_handshake_handler?: > src/http/ngx_http_request.c:775:9: error: implicit declaration of function > ?SSL_get0_alpn_selected? 
[-Werror=implicit-function-declaration] > cc1: all warnings being treated as errors > make[1]: *** [objs/src/http/ngx_http_request.o] Error 1 > make[1]: Leaving directory `/usr/src/nginx-1.7.9' > make: *** [build] Error 2 Wait for the next libressl release or patch libressl by removing the TLSEXT_TYPE_application_layer_protocol_negotiation symbol definition: http://pastebin.com/raw.php?i=ZQ5peJvL Lukas From nginx-forum at nginx.us Mon Dec 29 10:43:18 2014 From: nginx-forum at nginx.us (weheartwebsites) Date: Mon, 29 Dec 2014 05:43:18 -0500 Subject: 1.7.9 does not compile anymore with libressl In-Reply-To: References: Message-ID: <4a0591ec01734272aa96e453a3dff551.NginxMailingListEnglish@forum.nginx.org> thanks for that! Confirmed this is not an nginx issue but libressl 2.1.2. See also: https://github.com/libressl-portable/portable/issues/50 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255815,255820#msg-255820 From edigarov at qarea.com Mon Dec 29 11:04:06 2014 From: edigarov at qarea.com (Gregory Edigarov) Date: Mon, 29 Dec 2014 13:04:06 +0200 Subject: nginx removes double slashes Message-ID: <54A13526.2050604@qarea.com> Hello everybody, perhaps I am doing something wrong: location /njs/ { rewrite /njs/(.*)$ /$1 break; proxy_redirect off; proxy_set_header Host $http_host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_pass http://localhost:4006; } calling this http://[hostname]/facebook/bitly/http%3A%2F%2F[hostname]%2Fbeth-buczynski%2Fdiy-ways-to-stay-warm-in-winter%2F and I definitely see that to my application it comes like: http:/[hostname]/beth-buczynski/diy-ways-to-stay-warm-in-winter/ note the single '/', when I need '//' Is there any way to handle it? 
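[Editorial note: the answer given further down in the thread is the merge_slashes directive, which controls whether nginx collapses adjacent slashes during URI normalization. Applied to the configuration above, it would look roughly like this; a sketch, not verified against this exact setup.]

```nginx
server {
    listen 80;
    # Keep "//" sequences in request URIs instead of collapsing them
    # to a single "/" during URI normalization (the default is "on").
    merge_slashes off;

    location /njs/ {
        rewrite /njs/(.*)$ /$1 break;
        proxy_pass http://localhost:4006;
    }
}
```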
-- With best regards, Gregory Edigarov From edigarov at qarea.com Mon Dec 29 12:14:21 2014 From: edigarov at qarea.com (Gregory Edigarov) Date: Mon, 29 Dec 2014 14:14:21 +0200 Subject: nginx removes double slashes In-Reply-To: <54A13526.2050604@qarea.com> References: <54A13526.2050604@qarea.com> Message-ID: <54A1459D.9080601@qarea.com> On 12/29/2014 01:04 PM, Gregory Edigarov wrote: > Hello everybody, > > perhaps I am doing something wrong: > > location /njs/ { > rewrite /njs/(.*)$ /$1 break; > proxy_redirect off; > proxy_set_header Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_pass http://localhost:4006; > > } > calling this > http://[hostname]/facebook/bitly/http%3A%2F%2F[hostname]%2Fbeth-buczynski%2Fdiy-ways-to-stay-warm-in-winter%2F > > > and I definitely see that to my application it comes like: > http:/[hostname]/beth-buczynski/diy-ways-to-stay-warm-in-winter/ > note the single '/', when I need '//' > > Is there any way to handle it? Also, if I call my application directly, not via nginx - that works correctly. From nginx-forum at nginx.us Mon Dec 29 12:23:42 2014 From: nginx-forum at nginx.us (rcrahul01) Date: Mon, 29 Dec 2014 07:23:42 -0500 Subject: nginx performance degradation over ssl Message-ID: <6b6175594e276f3f727347db2d67eb79.NginxMailingListEnglish@forum.nginx.org> Hi, I was trying to perform some load test to check performance degradation while switching to https from http, but not really able to conclude anything here. yeah, i have checked the details at http://nginx.org/en/docs/http/configuring_https_servers.html for performance tuning. Could please help me to get an rough idea on performance degradation (like in terms of percentage.) with https and persistence connection enabled. Any help from your side will be appreciated. 
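[Editorial aside: the thread gives no numbers, and any percentage depends heavily on hardware, cipher choice and traffic patterns. With persistent connections the per-request overhead is usually dominated by the TLS handshake, which session reuse mitigates. A minimal sketch of the relevant directives, with placeholder paths:]

```nginx
server {
    listen 443 ssl;
    ssl_certificate     /path/to/cert.pem;   # placeholder paths
    ssl_certificate_key /path/to/key.pem;

    # Cache TLS sessions so returning clients can resume a session
    # instead of performing a full handshake, and keep connections
    # alive between requests.
    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;
    keepalive_timeout   65;
}
```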
Thanks, Rahul Choudhury Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255824,255824#msg-255824 From rcrahul01 at gmail.com Mon Dec 29 12:26:06 2014 From: rcrahul01 at gmail.com (Rahul Choudhary) Date: Mon, 29 Dec 2014 17:56:06 +0530 Subject: Fwd: Delivery Status Notification (Failure) In-Reply-To: <001a11339006139dcf050b58ea06@google.com> References: <001a11339006139dcf050b58ea06@google.com> Message-ID: Hi, I was trying to perform some load test to check performance degradation while switching to https from http, but not really able to conclude anything here. yeah, i have checked the details at http://nginx.org/en/docs/http/configuring_https_servers.html for performance tuning. Could please help me to get an rough idea on performance degradation (like in terms of percentage.) with https and persistence connection enabled. Any help from your side will be appreciated. Thanks, Rahul Choudhury -- regards, Rahul Choudhury -------------- next part -------------- An HTML attachment was scrubbed... URL: From naji.demolitionman at gmail.com Mon Dec 29 14:52:20 2014 From: naji.demolitionman at gmail.com (Naji Astier) Date: Mon, 29 Dec 2014 15:52:20 +0100 Subject: Having multiple Symfony2 apps on same domain In-Reply-To: References: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> <1419795056.29156.29.camel@steve-new> <54A0732F.2000909@gmail.com> Message-ID: <54A16AA4.2080205@gmail.com> Thanks you for your answer, you helped me to make it working. 
;) This is my final configuration : server { listen 80; server_name mydomain; root /server/www; location @rewriteMainApp { rewrite ^(.*)$ /app.php/$1 last; } location @rewriteOtherApp1 { rewrite ^(.*)$ /subdir1/app.php/$1 last; } location /subdir1 { alias /server/www/other-app1/web; index app.php; set $subfolder "other-app1/web"; try_files $uri @rewriteOtherApp1; } location / { root /server/www/main-app/web; index app.php; set $subfolder "main-app/web"; try_files $uri @rewriteMainApp; } # PROD location ~ /app\.php(/|$) { fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/$subfolder/app.php; } } Le 29/12/2014 00:03, B.R. a ?crit : > You are using the same named location as the fallback of the try_files > directive, although you are dealing with three different paths. > > Why do not you use one fallback named location per app location, each > rewriting to the correct path? > --- > *B. R.* > > On Sun, Dec 28, 2014 at 10:16 PM, Naji Astier > > > wrote: > > Le 28/12/2014 20:30, Steve Holdoway a ?crit : > > On Sat, 2014-12-27 at 05:52 -0500, Claros wrote: > > Hello everybody ! > > I just switched from Apache2 to Nginx and I met some > issues having the same > configuration. What I want to do is having multiple > Symfony2 apps on the > same domain name. Each app will have a subdirectory and a > main app will be > on the domain name itself. For instance : > http://mydomain/ -> main app > http://mydomain/subdir1 -> another app > http://mydomain/subdir2 -> yet another app > One of Symfony2 feature is to have only three php files to > be executed, and > all the URL are rewritten to those files. You can found > basic configuration > for Symfony2 at this address if you need more information : > http://wiki.nginx.org/Symfony > Now after many hours of configuration, with the help of > debug logs, I almost > did it. 
This is my current configuration : > > server { > listen 80; > server_name mydomain; > root /server/www/main-app/web; > > location @rewriteapp { > rewrite ^(.*)$ /app.php/$1 last; > } > > location /subdir1/ { > # alias /server/www/other-app1/web; > set $root "/server/www/other-app1/web"; > # try to serve file directly, fallback to app.php > try_files $uri @rewriteapp; > } > > location / { > index app.php; > set $root "/server/www/main-app/web"; > # try to serve file directly, fallback to app.php > try_files $uri @rewriteapp; > } > > # PROD > location ~ ^/app\.php(/|$) { > fastcgi_pass unix:/var/run/php5-fpm.sock; > fastcgi_split_path_info ^(.+\.php)(/.*)$; > > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME > $root$fastcgi_script_name; > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param PATH_INFO $fastcgi_path_info; > } > } > > Why did I create a variable "$root" ? Because when I was > using the root (or > alias) directive in a location block and the variable > $document_root, I > found out that this variable has as final value (in the > location app.php) > the first root directive in the server or the default root > location. > With this configuration, it almost work. The main app > works and the > subdirectories are correctly sent to their directory. The > last problem is > that the URI processed by the file app.php also contains > the subdirectory in > it, so the others apps send 404 for all the URL. I tried > to fix that by > changing "REQUEST_URI" parameter, but with that the > app.php generate wrong > URL without the subdirectory. > > So is their a way to achieve this configuration ? Thanks you ! > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255806,255806#msg-255806 > > Try using a map to set the $root... > > Steve > > Ok I did not know the map system, it is interesting. But it is > only simplifying my configuration, not solving the problem. 
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Dec 29 15:45:34 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Dec 2014 16:45:34 +0100 Subject: nginx removes double slashes In-Reply-To: <54A1459D.9080601@qarea.com> References: <54A13526.2050604@qarea.com> <54A1459D.9080601@qarea.com> Message-ID: The proxy_pass documentation is a bit unclear to me, but I it is a fact that double slashes are replaced by single ones as the result of the normalization of the URI. However, I do not get the following parts: "If proxy_pass is specified without a URI, the request URI is passed to the server in the same form as sent by a client when the original request is processed, or the full normalized request URI is passed when processing the changed URI: " and "Before version 1.1.12, if proxy_pass is specified without a URI, the original request URI might be passed instead of the changed URI in some cases." Calling out to nginx pros: When is the original request processed? When is it not? --- *B. 
R.* On Mon, Dec 29, 2014 at 1:14 PM, Gregory Edigarov wrote: > > On 12/29/2014 01:04 PM, Gregory Edigarov wrote: > >> Hello everybody, >> >> perhaps I am doing something wrong: >> >> location /njs/ { >> rewrite /njs/(.*)$ /$1 break; >> proxy_redirect off; >> proxy_set_header Host $http_host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For >> $proxy_add_x_forwarded_for; >> proxy_set_header X-Forwarded-Proto $scheme; >> proxy_pass http://localhost:4006; >> >> } >> calling this >> http://[hostname]/facebook/bitly/http%3A%2F%2F[hostname]% >> 2Fbeth-buczynski%2Fdiy-ways-to-stay-warm-in-winter%2F >> >> and I definitely see that to my application it comes like: >> http:/[hostname]/beth-buczynski/diy-ways-to-stay-warm-in-winter/ >> note the single '/', when I need '//' >> >> Is there any way to handle it? >> > Also, if I call my application directly, not via nginx - that works > correctly. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Dec 29 15:47:51 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Dec 2014 16:47:51 +0100 Subject: Having multiple Symfony2 apps on same domain In-Reply-To: <54A16AA4.2080205@gmail.com> References: <399659bf5ad7be5ad0ae48d341f3285d.NginxMailingListEnglish@forum.nginx.org> <1419795056.29156.29.camel@steve-new> <54A0732F.2000909@gmail.com> <54A16AA4.2080205@gmail.com> Message-ID: Glad I helped ! :o) --- *B. R.* On Mon, Dec 29, 2014 at 3:52 PM, Naji Astier wrote: > Thanks you for your answer, you helped me to make it working. 
;) > This is my final configuration : > > server { listen 80; > server_name mydomain; > root /server/www; location @rewriteMainApp { rewrite ^(.*)$ /app.php/$1 last; } location @rewriteOtherApp1 { rewrite ^(.*)$ /subdir1/app.php/$1 last; } location /subdir1 { alias /server/www/other-app1/web; index app.php; set $subfolder "other-app1/web"; try_files $uri @rewriteOtherApp1; } location / { root /server/www/main-app/web; index app.php; set $subfolder "main-app/web"; try_files $uri @rewriteMainApp; } # PROD location ~ /app\.php(/|$) { fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/$subfolder/app.php; }} > > > > Le 29/12/2014 00:03, B.R. a ?crit : > > You are using the same named location as the fallback of the try_files > directive, although you are dealing with three different paths. > > Why do not you use one fallback named location per app location, each > rewriting to the correct path? > --- > *B. R.* > > On Sun, Dec 28, 2014 at 10:16 PM, Naji Astier < > naji.demolitionman at gmail.com> wrote: > >> Le 28/12/2014 20:30, Steve Holdoway a ?crit : >> >> On Sat, 2014-12-27 at 05:52 -0500, Claros wrote: >>> >>>> Hello everybody ! >>>> >>>> I just switched from Apache2 to Nginx and I met some issues having the >>>> same >>>> configuration. What I want to do is having multiple Symfony2 apps on the >>>> same domain name. Each app will have a subdirectory and a main app will >>>> be >>>> on the domain name itself. For instance : >>>> http://mydomain/ -> main app >>>> http://mydomain/subdir1 -> another app >>>> http://mydomain/subdir2 -> yet another app >>>> One of Symfony2 feature is to have only three php files to be executed, >>>> and >>>> all the URL are rewritten to those files. 
You can found basic >>>> configuration >>>> for Symfony2 at this address if you need more information : >>>> http://wiki.nginx.org/Symfony >>>> Now after many hours of configuration, with the help of debug logs, I >>>> almost >>>> did it. This is my current configuration : >>>> >>>> server { >>>> listen 80; >>>> server_name mydomain; >>>> root /server/www/main-app/web; >>>> >>>> location @rewriteapp { >>>> rewrite ^(.*)$ /app.php/$1 last; >>>> } >>>> >>>> location /subdir1/ { >>>> # alias /server/www/other-app1/web; >>>> set $root "/server/www/other-app1/web"; >>>> # try to serve file directly, fallback to app.php >>>> try_files $uri @rewriteapp; >>>> } >>>> >>>> location / { >>>> index app.php; >>>> set $root "/server/www/main-app/web"; >>>> # try to serve file directly, fallback to app.php >>>> try_files $uri @rewriteapp; >>>> } >>>> >>>> # PROD >>>> location ~ ^/app\.php(/|$) { >>>> fastcgi_pass unix:/var/run/php5-fpm.sock; >>>> fastcgi_split_path_info ^(.+\.php)(/.*)$; >>>> >>>> include fastcgi_params; >>>> fastcgi_param SCRIPT_FILENAME $root$fastcgi_script_name; >>>> fastcgi_param SCRIPT_NAME $fastcgi_script_name; >>>> fastcgi_param PATH_INFO $fastcgi_path_info; >>>> } >>>> } >>>> >>>> Why did I create a variable "$root" ? Because when I was using the root >>>> (or >>>> alias) directive in a location block and the variable $document_root, I >>>> found out that this variable has as final value (in the location >>>> app.php) >>>> the first root directive in the server or the default root location. >>>> With this configuration, it almost work. The main app works and the >>>> subdirectories are correctly sent to their directory. The last problem >>>> is >>>> that the URI processed by the file app.php also contains the >>>> subdirectory in >>>> it, so the others apps send 404 for all the URL. I tried to fix that by >>>> changing "REQUEST_URI" parameter, but with that the app.php generate >>>> wrong >>>> URL without the subdirectory. 
>>>> >>>> So is their a way to achieve this configuration ? Thanks you ! >>>> >>>> Posted at Nginx Forum: >>>> http://forum.nginx.org/read.php?2,255806,255806#msg-255806 >>>> >>> Try using a map to set the $root... >>> >>> Steve >>> >>> Ok I did not know the map system, it is interesting. But it is only >> simplifying my configuration, not solving the problem. >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > _______________________________________________ > nginx mailing listnginx at nginx.orghttp://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 29 16:00:28 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Dec 2014 19:00:28 +0300 Subject: nginx removes double slashes In-Reply-To: <54A13526.2050604@qarea.com> References: <54A13526.2050604@qarea.com> Message-ID: <20141229160028.GA3656@mdounin.ru> Hello! On Mon, Dec 29, 2014 at 01:04:06PM +0200, Gregory Edigarov wrote: > Hello everybody, > > perhaps I am doing something wrong: > > location /njs/ { > rewrite /njs/(.*)$ /$1 break; > proxy_redirect off; > proxy_set_header Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_pass http://localhost:4006; > > } > calling this > http://[hostname]/facebook/bitly/http%3A%2F%2F[hostname]%2Fbeth-buczynski%2Fdiy-ways-to-stay-warm-in-winter%2F > > and I definitely see that to my application it comes like: > http:/[hostname]/beth-buczynski/diy-ways-to-stay-warm-in-winter/ > note the single '/', when I need '//' > > Is there any way to handle it? 
http://nginx.org/r/merge_slashes -- Maxim Dounin http://nginx.org/ From netplus.root at gmail.com Mon Dec 29 16:26:51 2014 From: netplus.root at gmail.com (Equipe R&S Netplus) Date: Mon, 29 Dec 2014 17:26:51 +0100 Subject: Header SSL client certificate Message-ID: Hello, I use nginx as a reverse-proxy. I would like to set a header, more precisely a header that contain the SSL client certificate. However, the variable '$ssl_client_cert' add some character that I don't want (like tab characters) << proxy_set_header X-SSL-CLI-CERT $ssl_client_cert; >> I test with '$ssl_client_raw_cert', but the webserver in backend (here apache) doesn't understand the certificate and return this : << request failed: error reading the headers >> I see a previous post mentionning a workarount with 'map' ( http://forum.nginx.org/read.php?2,236546,236546) : << map $ssl_client_raw_cert $a { "~^(-.*-\n)(?<1st>[^\n]+)\n((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?
[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?((?[^\n]+)\n)?(-.*-)$" $1st; } >> But in debug log file of nginx, I have an error : << [alert] 19820#0: *21 pcre_exec() failed: -8 on " ... CERTIFICATE CONTENT ... " using "^(-.*- )(?<1st>[^ ... >> I'm using nginx version 1.6.2, do you know another workaround please ? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Dec 29 16:48:44 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 29 Dec 2014 19:48:44 +0300 Subject: How Nginx behaves with "proxy_bind" and DNS resolver with non matching ip versions between bind ip and resolved ip? In-Reply-To: <8ba4f8d316f0fb59f718e71c8ba5b5f9.NginxMailingListEnglish@forum.nginx.org> References: <8ba4f8d316f0fb59f718e71c8ba5b5f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20141229164844.GD3656@mdounin.ru> Hello! On Mon, Dec 29, 2014 at 02:36:55AM -0500, shmulik wrote: > Hello, > I'm working with the proxy module, and with a dns resolver configured. The > traffic i'm using is both ipv4 and ipv6. > > I'm trying to understand Nginx behavior when using "proxy_bind" directive > and when the resolver returns both ipv4 and ipv6 addresses. > > In particular i'd like to understand what happens when: > > 1. "proxy_bind" binds to an ipv6 address, and the resolver returns only ipv4 > addresses (and the other way around - binding to ipv4, resolving only to > ipv6). > > 2. "proxy_bind" binds to an ipv6 address, the resolver returns both ipv4 and > ipv6 addresses, but the first attempted ip address is an ipv4 address (and > the other way around - binding to ipv4, first attempted is ipv6). > > Can you please shed some light on this? In either case nginx will call bind() syscall with the address provided in the proxy_bind directive. 
If address family doesn't match one used in the connection, this is expected to result in an error. The error itself will be logged into error log, and 500 (Internal Server Error) will be returned to the client. -- Maxim Dounin http://nginx.org/ From petros.fraser at gmail.com Mon Dec 29 18:55:50 2014 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 29 Dec 2014 13:55:50 -0500 Subject: How to setup Message-ID: Hi All I am very new to Nginx and am very interested in setting it up as a reverse proxy. Is it possible to have an nginx host behind a firewall with one network card that forwards requests to more than one IIS web server behind the firewall also, or do I need to make the nginx host inline with two nics, one one the public network and one on the private network? Regards Pedro -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Dec 29 19:11:49 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 29 Dec 2014 14:11:49 -0500 Subject: How to setup In-Reply-To: References: Message-ID: Peter Fraser Wrote: ------------------------------------------------------- > proxy. Is it possible to have an nginx host behind a firewall with one > network card that forwards requests to more than one IIS web server > behind > the firewall also Yes. > , or do I need to make the nginx host inline with two > nics, one one the public network and one on the private network? Would be better but not necessary. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255837,255838#msg-255838 From petros.fraser at gmail.com Mon Dec 29 19:36:30 2014 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 29 Dec 2014 14:36:30 -0500 Subject: Use of Certs Message-ID: Hi All I am very new to nginx and am currently doing a lot of reading but would just love to have a nudge in the right direction I want to set up nginx as a reverse proxy for about three IIS servers behind a firewall. 
One of them is a public web server that handles secure logins. It is configured with a certificate signed by a CA. Do I need to import the web server's private key on to the nginx box or is this something I don't need to worry about? Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stl at wiredrive.com Mon Dec 29 19:46:42 2014 From: stl at wiredrive.com (Scott Larson) Date: Mon, 29 Dec 2014 11:46:42 -0800 Subject: Use of Certs In-Reply-To: References: Message-ID: If you're using nginx as a reverse proxy you'll want a cert set up on that node. Without it, worst case is your link between the proxy and the IIS server is secure but your link between the remote client and the proxy will be insecure defeating the whole purpose. Best case is an error will be thrown to the remote client either for a protocol mismatch or being unable to connect to 443 after a forced reconnection. At least in the latter case you wouldn't be leaking data over the wire. If you're using SSL between the proxy and IIS you don't need the IIS server certificate's private key. nginx just needs to be able to verify the certificate chain as legitimate. *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 8238 ext. 1106310 943 2078 faxwww.wiredrive.com www.twitter.com/wiredrive www.facebook.com/wiredrive * On Mon, Dec 29, 2014 at 11:36 AM, Peter Fraser wrote: > Hi All > I am very new to nginx and am currently doing a lot of reading but would > just love to have a nudge in the right direction > > I want to set up nginx as a reverse proxy for about three IIS servers > behind a firewall. > One of them is a public web server that handles secure logins. It is > configured with a certificate signed by a CA. Do I need to import the web > server's private key on to the nginx box or is this something I don't need > to worry about? > > Regards. 
> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Dec 29 19:49:35 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 29 Dec 2014 14:49:35 -0500 Subject: Use of Certs In-Reply-To: References: Message-ID: <5df5551ec6ddb1075df90b0b72f59dc5.NginxMailingListEnglish@forum.nginx.org> nginx will act as an endpoint for ssl so any cert needs to be at nginx's end. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255840,255842#msg-255842 From cesarshlbrn at hotmail.com Mon Dec 29 20:12:03 2014 From: cesarshlbrn at hotmail.com (Julio Cesar dos Santos) Date: Mon, 29 Dec 2014 18:12:03 -0200 Subject: nginx Digest, Vol 62, Issue 43 Message-ID: Gtx -mensag. original- Assunto: nginx Digest, Vol 62, Issue 43 De: nginx-request at nginx.org Data: 29/12/2014 14:48 Send nginx mailing list submissions to nginx at nginx.org To subscribe or unsubscribe via the World Wide Web, visit http://mailman.nginx.org/mailman/listinfo/nginx or, via email, send a message with subject or body 'help' to nginx-request at nginx.org You can reach the person managing the list at nginx-owner at nginx.org When replying, please edit your Subject line so it is more specific than "Re: Contents of nginx digest..." Today's Topics: 1. Re: Having multiple Symfony2 apps on same domain (B.R.) 2. Re: nginx removes double slashes (Maxim Dounin) 3. Header SSL client certificate (Equipe R&S Netplus) 4. Re: How Nginx behaves with "proxy_bind" and DNS resolver with non matching ip versions between bind ip and resolved ip? (Maxim Dounin) ---------------------------------------------------------------------- Message: 1 Date: Mon, 29 Dec 2014 16:47:51 +0100 From: "B.R." To: Nginx ML Subject: Re: Having multiple Symfony2 apps on same domain Message-ID: Content-Type: text/plain; charset="utf-8" Glad I helped ! 
:o) --- *B. R.* On Mon, Dec 29, 2014 at 3:52 PM, Naji Astier wrote: > Thanks you for your answer, you helped me to make it working. ;) > This is my final configuration : > > server { listen 80; > server_name mydomain; > root /server/www; location @rewriteMainApp { rewrite ^(.*)$ /app.php/$1 last; } location @rewriteOtherApp1 { rewrite ^(.*)$ /subdir1/app.php/$1 last; } location /subdir1 { alias /server/www/other-app1/web; index app.php; set $subfolder "other-app1/web"; try_files $uri @rewriteOtherApp1; } location / { root /server/www/main-app/web; index app.php; set $subfolder "main-app/web"; try_files $uri @rewriteMainApp; } # PROD location ~ /app\.php(/|$) { fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/$subfolder/app.php; }} > > > > Le 29/12/2014 00:03, B.R. a ?crit : > > You are using the same named location as the fallback of the try_files > directive, although you are dealing with three different paths. > > Why do not you use one fallback named location per app location, each > rewriting to the correct path? > --- > *B. R.* > > On Sun, Dec 28, 2014 at 10:16 PM, Naji Astier < > naji.demolitionman at gmail.com> wrote: > >> Le 28/12/2014 20:30, Steve Holdoway a ?crit : >> >> On Sat, 2014-12-27 at 05:52 -0500, Claros wrote: >>> >>>> Hello everybody ! >>>> >>>> I just switched from Apache2 to Nginx and I met some issues having the >>>> same >>>> configuration. What I want to do is having multiple Symfony2 apps on the >>>> same domain name. Each app will have a subdirectory and a main app will >>>> be >>>> on the domain name itself. For instance : >>>> http://mydomain/ -> main app >>>> http://mydomain/subdir1 -> another app >>>> http://mydomain/subdir2 -> yet another app >>>> One of Symfony2 feature is to have only three php files to be executed, >>>> and >>>> all the URL are rewritten to those files. 
You can found basic >>>> configuration >>>> for Symfony2 at this address if you need more information : >>>> http://wiki.nginx.org/Symfony >>>> Now after many hours of configuration, with the help of debug logs, I >>>> almost >>>> did it. This is my current configuration : >>>> >>>> server { >>>> listen 80; >>>> server_name mydomain; >>>> root /server/www/main-app/web; >>>> >>>> location @rewriteapp { >>>> rewrite ^(.*)$ /app.php/$1 last; >>>> } >>>> >>>> location /subdir1/ { >>>> # alias /server/www/other-app1/web; >>>> set $root "/server/www/other-app1/web"; >>>> # try to serve file directly, fallback to app.php >>>> try_files $uri @rewriteapp; >>>> } >>>> >>>> location / { >>>> index app.php; >>>> set $root "/server/www/main-app/web"; >>>> # try to serve file directly, fallback to app.php >>>> try_files $uri @rewriteapp; >>>> } >>>> >>>> # PROD >>>> location ~ ^/app\.php(/|$) { >>>> fastcgi_pass unix:/var/run/php5-fpm.sock; >>>> fastcgi_split_path_info ^(.+\.php)(/.*)$; >>>> >>>> include fastcgi_params; >>>> fastcgi_param SCRIPT_FILENAME $root$fastcgi_script_name; >>>> fastcgi_param SCRIPT_NAME $fastcgi_script_name; >>>> fastcgi_param PATH_INFO $fastcgi_path_info; >>>> } >>>> } >>>> >>>> Why did I create a variable "$root" ? Because when I was using the root >>>> (or >>>> alias) directive in a location block and the variable $document_root, I >>>> found out that this variable has as final value (in the location >>>> app.php) >>>> the first root directive in the server or the default root location. >>>> With this configuration, it almost work. The main app works and the >>>> subdirectories are correctly sent to their directory. The last problem >>>> is >>>> that the URI processed by the file app.php also contains the >>>> subdirectory in >>>> it, so the others apps send 404 for all the URL. I tried to fix that by >>>> changing "REQUEST_URI" parameter, but with that the app.php generate >>>> wrong >>>> URL without the subdirectory. 
>>>> >>>> So is there a way to achieve this configuration ? Thank you ! >>>> >>>> Posted at Nginx Forum: >>>> http://forum.nginx.org/read.php?2,255806,255806#msg-255806 >>>> >>> Try using a map to set the $root... >>> >>> Steve >>> >>> OK, I did not know about the map system, it is interesting. But it is only >> simplifying my configuration, not solving the problem. >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 2 Date: Mon, 29 Dec 2014 19:00:28 +0300 From: Maxim Dounin To: nginx at nginx.org Subject: Re: nginx removes double slashes Message-ID: <20141229160028.GA3656 at mdounin.ru> Content-Type: text/plain; charset=us-ascii Hello! On Mon, Dec 29, 2014 at 01:04:06PM +0200, Gregory Edigarov wrote: > Hello everybody, > > perhaps I am doing something wrong: > > location /njs/ { > rewrite /njs/(.*)$ /$1 break; > proxy_redirect off; > proxy_set_header Host $http_host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_pass http://localhost:4006; > > } > calling this > http://[hostname]/facebook/bitly/http%3A%2F%2F[hostname]%2Fbeth-buczynski%2Fdiy-ways-to-stay-warm-in-winter%2F > > and I definitely see that to my application it comes like: > http:/[hostname]/beth-buczynski/diy-ways-to-stay-warm-in-winter/ > note the single '/', when I need '//' > > Is there any way to handle it? 
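The behaviour asked about here comes from nginx's URI normalization: adjacent slashes are collapsed before location matching unless the merge_slashes directive (on by default, settable at http or server level) is switched off. A minimal sketch, with the server name and backend port as placeholders rather than values from the thread:

```nginx
# Sketch only: keep "//" intact after URL-decoding so the proxied
# application receives the original double slash.
server {
    listen 80;
    server_name example.com;   # placeholder

    merge_slashes off;         # default is "on"

    location /njs/ {
        rewrite /njs/(.*)$ /$1 break;
        proxy_pass http://localhost:4006;
    }
}
```

Note that with merge_slashes off, literal double slashes also reach location and rewrite matching, so prefix locations may need adjusting.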
http://nginx.org/r/merge_slashes -- Maxim Dounin http://nginx.org/ ------------------------------ Message: 3 Date: Mon, 29 Dec 2014 17:26:51 +0100 From: "Equipe R&S Netplus" To: nginx at nginx.org Subject: Header SSL client certificate Message-ID: Content-Type: text/plain; charset="utf-8" Hello, I use nginx as a reverse-proxy. I would like to set a header, more precisely a header that contains the SSL client certificate. However, the variable '$ssl_client_cert' adds some characters that I don't want (like tab characters) << proxy_set_header X-SSL-CLI-CERT $ssl_client_cert; >> I tested with '$ssl_client_raw_cert', but the webserver in the backend (here Apache) doesn't understand the certificate and returns this : << request failed: error reading the headers >> I saw a previous post mentioning a workaround with 'map' ( http://forum.nginx.org/read.php?2,236546,236546) : << map $ssl_client_raw_cert $a { "~^(-.*-\n)(?<1st>[^\n]+)\n((?<2nd>[^\n]+)\n)?((?<3rd>[^\n]+)\n)?((?<4th>[^\n]+)\n)?((?<5th>[^\n]+)\n)?((?<6th>[^\n]+)\n)?((?<7th>[^\n]+)\n)?((?<8th>[^\n]+)\n)?((?<9th>[^\n]+)\n)?((?<10th>[^\n]+)\n)?((?<11th>[^\n]+)\n)?((?<12th>[^\n]+)\n)?((?<13th>[^\n]+)\n)?((?<14th>[^\n]+)\n)?((?<15th>[^\n]+)\n)?((?<16th>[^\n]+)\n)?((?<17th>[^\n]+)\n)?((?<18th>[^\n]+)\n)?((?<19th>[^\n]+)\n)?((?<20th>[^\n]+)\n)?((?<21st>[^\n]+)\n)?((?<22nd>[^\n]+)\n)?((?<23rd>[^\n]+)\n)?((?<24th>[^\n]+)\n)?((?<25th>[^\n]+)\n)?((?<26th>[^\n]+)\n)?((?<27th>[^\n]+)\n)?(-.*-)$" $1st; } >> But in the debug log file of nginx, I have an error : << [alert] 19820#0: *21 pcre_exec() failed: -8 on " ... CERTIFICATE CONTENT ... " using "^(-.*- )(?<1st>[^ ... >> I'm using nginx version 1.6.2, do you know another workaround please ? Thank you. -------------- next part -------------- An HTML attachment was scrubbed... URL: ------------------------------ Message: 4 Date: Mon, 29 Dec 2014 19:48:44 +0300 From: Maxim Dounin To: nginx at nginx.org Subject: Re: How Nginx behaves with "proxy_bind" and DNS resolver with non matching ip versions between bind ip and resolved ip? Message-ID: <20141229164844.GD3656 at mdounin.ru> Content-Type: text/plain; charset=us-ascii Hello! On Mon, Dec 29, 2014 at 02:36:55AM -0500, shmulik wrote: > Hello, > I'm working with the proxy module, and with a DNS resolver configured. The > traffic I'm using is both ipv4 and ipv6. > > I'm trying to understand Nginx behavior when using the "proxy_bind" directive > and when the resolver returns both ipv4 and ipv6 addresses. > > In particular I'd like to understand what happens when: > > 1. "proxy_bind" binds to an ipv6 address, and the resolver returns only ipv4 > addresses (and the other way around - binding to ipv4, resolving only to > ipv6). > > 2. "proxy_bind" binds to an ipv6 address, the resolver returns both ipv4 and > ipv6 addresses, but the first attempted ip address is an ipv4 address (and > the other way around - binding to ipv4, first attempted is ipv6). > > Can you please shed some light on this? In either case nginx will call the bind() syscall with the address provided in the proxy_bind directive. If the address family doesn't match the one used in the connection, this is expected to result in an error. The error itself will be logged into the error log, and 500 (Internal Server Error) will be returned to the client. 
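A hedged sketch of one half of the scenario described here, with illustrative addresses: when proxy_bind fixes an IPv4 source address, the resolver can be told not to return IPv6 addresses at all (the ipv6=off parameter has existed since nginx 1.5.8; the reverse, ipv4=off, did not exist in the 1.6/1.7 series discussed in this thread):

```nginx
# Sketch, not from the thread: bind upstream connections to an
# IPv4 source address and keep the resolver from returning AAAA
# records that the bound address could never be used with.
server {
    listen 80;

    resolver 127.0.0.1 ipv6=off;        # placeholder resolver address

    location / {
        proxy_bind 192.0.2.10;          # illustrative IPv4 source address
        proxy_pass http://$host$request_uri;   # variables force runtime resolving
    }
}
```

For the mirror case (binding an IPv6 source address while the resolver may return A records), no equivalent resolver switch existed at the time, so the mismatch error described above could still occur.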
-- Maxim Dounin http://nginx.org/ ------------------------------ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx End of nginx Digest, Vol 62, Issue 43 ************************************* From reallfqq-nginx at yahoo.fr Mon Dec 29 20:14:38 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 29 Dec 2014 21:14:38 +0100 Subject: Use of Certs In-Reply-To: <5df5551ec6ddb1075df90b0b72f59dc5.NginxMailingListEnglish@forum.nginx.org> References: <5df5551ec6ddb1075df90b0b72f59dc5.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Mon, Dec 29, 2014 at 8:49 PM, itpp2012 wrote: > nginx will act as an endpoint for ssl so any cert needs to be at nginx's > end. > That assumption was not part of the initial statement, which was however saying that the backend server acted as the endpoint. You could then guess that nginx acted as an SSL proxy. --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Dec 29 20:20:40 2014 From: nginx-forum at nginx.us (stwissel) Date: Mon, 29 Dec 2014 15:20:40 -0500 Subject: HTTP HEAD timeout Message-ID: I'm using nginx 1.7.7 as a reverse proxy in front of an Apache CouchDB. Access via browser to CouchDB data works like a charm. However I have trouble with replication (which runs via HTTPS). This is what I found out: CouchDB would issue an HTTP HEAD first and then perform GET/POST as per its algorithm. However the HEAD request times out. I then tried to replicate that behavior using curl. This is what I found: curl -v --head http://myserver/couch - works as expected curl -v -X HEAD http://myserver/couch - times out. Now I suspect that CouchDB uses a call similar to the latter and thus runs into the timeout. I verified: the timeout also happens when I do a -X HEAD to a base address (one that is not redirected to CouchDB), so I need to change something (can I?) on the nginx side. 
What are my options? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255845,255845#msg-255845 From me at myconan.net Mon Dec 29 20:35:38 2014 From: me at myconan.net (Edho Arief) Date: Tue, 30 Dec 2014 05:35:38 +0900 Subject: HTTP HEAD timeout In-Reply-To: References: Message-ID: not sure about your original problem but `curl -X HEAD` isn't a proper http request: ``` This option only changes the actual word used in the HTTP request, it does not alter the way curl behaves. So for example if you want to make a proper HEAD request, using -X HEAD will not suffice. You need to use the -I, --head option. ``` On Dec 30, 2014 5:20 AM, "stwissel" wrote: > I'm using nginx 1.7.7 as a reverse proxy in front of a Apache CouchDB. > Access via browser to CouchDB data works like a charm. However I have > trouble with replication (which runs via HTTPs). This is what I found out: > > CouchDB would issue a HTTP HEAD first and then perform GET/POST as per its > algorythm. However the HEAD request times out. I then tried to replicate > that behavior using CURL. This is what I found: > > curl -v --head http://myserver/couch > > - works as expected > > curl -v -X HEAD http://myserver/couch > > - times out. Now I suspect that CouchDB uses a call similar to the later > and > thus runs into the timeout. > > I verified: the timeout also happens when I do a -X HEAD to a base address > (one that is not redirected to CouchDB), so I need to change something (can > I?) on the nginx side. > > What are my options? > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255845,255845#msg-255845 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
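The curl manual excerpt quoted here explains the hang: `-X HEAD` only swaps the method word while curl still waits to read a response body that will never arrive, whereas `-I/--head` uses real HEAD semantics. If the client cannot be fixed, a hedged nginx-side stopgap (path and backend address are placeholders, not values from the thread) is to answer HEAD at the proxy:

```nginx
# Sketch only: short-circuit HEAD requests so clients that send
# HEAD with GET-style expectations get an immediate, body-less
# response. This hides the backend's real headers for HEAD, so
# treat it as a stopgap, not a fix.
location /couch/ {                       # placeholder path
    if ($request_method = HEAD) {
        return 200;                      # headers only, no body
    }
    proxy_pass http://127.0.0.1:5984/;   # illustrative CouchDB address
}
```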
URL: From nginx-forum at nginx.us Mon Dec 29 20:52:05 2014 From: nginx-forum at nginx.us (erankor2) Date: Mon, 29 Dec 2014 15:52:05 -0500 Subject: Serving files from a slow NFS storage Message-ID: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> Hi all, In our production environment, we have several nginx servers running on Ubuntu that serve files from a very large (several PBs) NFS mounted storage. Usually the storage responds pretty fast, but occasionally it can be very slow. Since we are using aio, when a file read operation runs slow, it's not that bad - the specific HTTP request will just take a longer time to complete. However, we have seen cases in which the file open itself can take a long time to complete. Since the open is synchronous, when the open is slow all active requests on the same nginx worker process are delayed. One way to mitigate the problem may be to increase the number of nginx workers, to some number well above the number of CPU cores. This will make each worker handle less requests and therefore less requests will be delayed due to a slow open, but this solution sounds far from ideal. Another possibility (that requires some development) may be to create a thread that will perform the task of opening the file. The main thread will wait on the completion of the open-thread asynchronously, and will be available to handle other requests until the open completes. The child thread can either be created per request (many requests probably won't even need to open a file, thanks to the caching of open file handles), or alternatively some fancier thread pooling mechanism could be developed. 
I'd love to hear any thoughts / ideas on this subject Thanks, Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255847,255847#msg-255847 From petros.fraser at gmail.com Mon Dec 29 21:06:59 2014 From: petros.fraser at gmail.com (Peter Fraser) Date: Mon, 29 Dec 2014 16:06:59 -0500 Subject: proper directive to pass requests Message-ID: Hi All I'm building my configuration slowly. Thanks for all the help so far. My current obstacle is this: As it is now, external users will access an internal IIS web server by using http://my.domain.com. The firewall points to the web server and the web server automatically redirects to https://my.domain.com. I'm trying to fit nginx between the firewall and the web server and figure out how to configure nginx to respond to requests for http://my.domain.com and proxy that to the web server. What do I use with the proxy_pass directive to get this to work? Would "proxy_pass http://my.domain.com; work" ? Regards -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Dec 29 21:17:24 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 29 Dec 2014 16:17:24 -0500 Subject: Serving files from a slow NFS storage In-Reply-To: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> References: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <377bb82f2dc046ff3cec709523a836e8.NginxMailingListEnglish@forum.nginx.org> http://www.debianhelp.co.uk/nfs.htm https://blog.yrden.de/2013/10/13/setting-up-nfs-cache-on-debian.html Or setup a debian VM which mounts your storage and where nginx connects to. 
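Alongside the OS-level NFS caching suggested in the links above, nginx itself can cache open file descriptors, which directly targets the slow open() calls described in the original post. A sketch with illustrative, untuned values:

```nginx
# Sketch only: cache descriptors and stat() results so a slow NFS
# open() is paid once per file within the cache window instead of
# on every request. Values are illustrative, not recommendations.
http {
    open_file_cache          max=10000 inactive=60s;
    open_file_cache_valid    120s;  # revalidate cached entries
    open_file_cache_min_uses 1;     # cache after the first use
    open_file_cache_errors   on;    # also cache open() failures
}
```

This only helps when requests for the same file repeat within the cache window; a cold file still blocks the worker on the synchronous open().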
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255847,255850#msg-255850 From nginx-forum at nginx.us Mon Dec 29 21:23:29 2014 From: nginx-forum at nginx.us (stwissel) Date: Mon, 29 Dec 2014 16:23:29 -0500 Subject: HTTP HEAD timeout In-Reply-To: References: Message-ID: <0b89abf50f88485c31ee1b8518785328.NginxMailingListEnglish@forum.nginx.org> Hi Edho, I know -X HEAD is a hack, but it seems that is the way CouchDB might operate (haven't got a reply back from them). Worked before using an Apache HTTP reverse-proxy, but I like nginx much better. So what options do I have to make nginx behave the same for -X HEAD as it does for --head? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255845,255852#msg-255852 From stl at wiredrive.com Mon Dec 29 21:35:39 2014 From: stl at wiredrive.com (Scott Larson) Date: Mon, 29 Dec 2014 13:35:39 -0800 Subject: Serving files from a slow NFS storage In-Reply-To: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> References: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: Without knowing everything in the mix my first thought would be the NFS head node is being tapped out and can't keep up. Generally you'd solve this with some type of caching, either at a CDN level or you could look at the SlowFS module. I've not checked to see if it still compiles against the current releases but if you're dealing with short-life hot data or a consistent group of commonly accessed files, either solution would make a significant impact in reducing NFS load without having to resort to other potentially dodgy solutions. *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 8238 ext. 
1106310 943 2078 faxwww.wiredrive.com www.twitter.com/wiredrive www.facebook.com/wiredrive * On Mon, Dec 29, 2014 at 12:52 PM, erankor2 wrote: > Hi all, > > In our production environment, we have several nginx servers running on > Ubuntu that serve files from a very large (several PBs) NFS mounted > storage. > > Usually the storage responds pretty fast, but occasionally it can be very > slow. Since we are using aio, when a file read operation runs slow, it's > not > that bad - the specific HTTP request will just take a longer time to > complete. > However, we have seen cases in which the file open itself can take a long > time to complete. Since the open is synchronous, when the open is slow all > active requests on the same nginx worker process are delayed. > One way to mitigate the problem may be to increase the number of nginx > workers, to some number well above the number of CPU cores. This will make > each worker handle less requests and therefore less requests will be > delayed > due to a slow open, but this solution sounds far from ideal. > Another possibility (that requires some development) may be to create a > thread that will perform the task of opening the file. The main thread will > wait on the completion of the open-thread asynchronously, and will be > available to handle other requests until the open completes. The child > thread can either be created per request (many requests probably won't even > need to open a file, thanks to the caching of open file handles), or > alternatively some fancier thread pooling mechanism could be developed. > > I'd love to hear any thoughts / ideas on this subject > > Thanks, > > Eran > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255847,255847#msg-255847 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Mon Dec 29 22:13:21 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Mon, 29 Dec 2014 17:13:21 -0500 Subject: Exclude ip's from Nginx limit_req zone In-Reply-To: <23f9261997f37352cc0152ed1cfd3bd6.NginxMailingListEnglish@forum.nginx.org> References: <20141225131251.GL79300@mdounin.ru> <23f9261997f37352cc0152ed1cfd3bd6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Anyone please? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255697,255857#msg-255857 From steve at greengecko.co.nz Mon Dec 29 23:53:48 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Tue, 30 Dec 2014 12:53:48 +1300 Subject: Serving files from a slow NFS storage In-Reply-To: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> References: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1419897228.22960.28.camel@steve-new> On Mon, 2014-12-29 at 15:52 -0500, erankor2 wrote: > Hi all, > > In our production environment, we have several nginx servers running on > Ubuntu that serve files from a very large (several PBs) NFS mounted storage. > > Usually the storage responds pretty fast, but occasionally it can be very > slow. Since we are using aio, when a file read operation runs slow, it's not > that bad - the specific HTTP request will just take a longer time to > complete. > However, we have seen cases in which the file open itself can take a long > time to complete. Since the open is synchronous, when the open is slow all > active requests on the same nginx worker process are delayed. > One way to mitigate the problem may be to increase the number of nginx > workers, to some number well above the number of CPU cores. This will make > each worker handle less requests and therefore less requests will be delayed > due to a slow open, but this solution sounds far from ideal. 
> Another possibility (that requires some development) may be to create a > thread that will perform the task of opening the file. The main thread will > wait on the completion of the open-thread asynchronously, and will be > available to handle other requests until the open completes. The child > thread can either be created per request (many requests probably won't even > need to open a file, thanks to the caching of open file handles), or > alternatively some fancier thread pooling mechanism could be developed. > > I'd love to hear any thoughts / ideas on this subject > > Thanks, > > Eran As a generic SysAdmin, I would say the first place to start is to look into installing something like cachefs, which will keep local copies of the remote files, so once filled, the problem should go away. There are also options to tune kernel and mount options that can help a bit. I would/do use this approach in preference to the (still too new IMO) alternatives like GlusterFS. In my experience, serving any volume of files over NFS will always be a bottleneck. 
Cheers, Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Tue Dec 30 03:12:57 2014 From: nginx-forum at nginx.us (stwissel) Date: Mon, 29 Dec 2014 22:12:57 -0500 Subject: Workaround In-Reply-To: References: Message-ID: <44a44c9f7716e5f0b30870890bf7df97.NginxMailingListEnglish@forum.nginx.org> This workaround seems to do the trick for the time being: if ($request_method = HEAD) { add_header Content-Length 0; add_header Content-Type text/plain; add_header Vary Accept-Encoding; return 200; } It might interfere with other HEAD requests, so I consider it a stopgap measure Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255845,255865#msg-255865 From nginx-forum at nginx.us Tue Dec 30 04:11:54 2014 From: nginx-forum at nginx.us (xdiaod) Date: Mon, 29 Dec 2014 23:11:54 -0500 Subject: http module handler, chain buffer and output_filter Message-ID: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org> Hey, I wonder why the server is freezing when I make a request to it and do not define the "NO_PROBLEM" macro in the code. In my ngx_html_chain_buffers_init I do an ngx_pcalloc because I thought that the server was freezing because of some missing memory alignment (it was originally static). I am just too tired at this hour to change it (I'll change it tomorrow). I am on Debian jessie, nginx 1.6.2 (source from the Debian repo if I remember correctly) PS: I am sorry if I missed some forum rules. It is my first time here. Just tell me what I did wrong if I did something wrong. Thank you! 
here is my code: //#define NO_PROBLEM #include #include #include #include /* BEGIN html strings */ enum ngx_html_text_e { NGX_HTML_ALL, NGX_HTML_NB_PARTS }; static ngx_str_t ngx_html_strings[NGX_HTML_NB_PARTS] = { ngx_string( "\n" "\n" "\t\n" "\t\n" "\t\n" "\t\n" "" ) }; /* END html strings */ static ngx_buf_t * ngx_html_buffers; static ngx_chain_t ngx_html_chain_buffers[NGX_HTML_NB_PARTS]; static ngx_pool_t * ngx_html_buffers_pool; static char * ngx_html_chain_buffers_init(ngx_log_t *log){ ngx_int_t i; ngx_html_buffers_pool = ngx_create_pool(NGX_HTML_NB_PARTS * sizeof(ngx_buf_t) + sizeof(ngx_pool_t), log); //TODO TMP ALLOC ngx_html_buffers = ngx_pcalloc(ngx_html_buffers_pool, NGX_HTML_NB_PARTS * sizeof(ngx_buf_t)); if(ngx_html_buffers == NULL){ return NGX_CONF_ERROR; } for(i = 0;i < NGX_HTML_NB_PARTS;i++){ ngx_html_buffers[i].pos = ngx_html_strings[i].data; ngx_html_buffers[i].last = ngx_html_strings[i].data + ngx_html_strings[i].len; ngx_html_buffers[i].file_pos = 0; ngx_html_buffers[i].file_last = 0; ngx_html_buffers[i].start = NULL; ngx_html_buffers[i].end = NULL; ngx_html_buffers[i].tag = NULL; ngx_html_buffers[i].file = NULL; ngx_html_buffers[i].shadow = NULL; ngx_html_buffers[i].temporary = 0; ngx_html_buffers[i].memory = 1; ngx_html_buffers[i].mmap = 0; ngx_html_buffers[i].recycled = 0; ngx_html_buffers[i].in_file = 0; ngx_html_buffers[i].flush = 0; ngx_html_buffers[i].sync = 0; ngx_html_buffers[i].last_buf = 0; ngx_html_buffers[i].last_in_chain = 0; ngx_html_buffers[i].last_shadow = 0; ngx_html_buffers[i].temp_file = 0; ngx_html_buffers[i].num = 0; ngx_html_chain_buffers[i].buf = &ngx_html_buffers[i]; } ngx_html_buffers[i].last_buf = 1; ngx_html_buffers[i].last_in_chain = 1; return NGX_CONF_OK; } static char * ngx_http_diceroll_quickcrab_com(ngx_conf_t *cf, ngx_command_t *cmd, void *conf); static ngx_command_t ngx_http_diceroll_quickcrab_com_commands[] = { { ngx_string("diceroll_quickcrab_com"), NGX_HTTP_LOC_CONF|NGX_CONF_NOARGS, 
ngx_http_diceroll_quickcrab_com, 0, 0, NULL }, ngx_null_command }; /* * The module context has hooks , here we have a hook for creating * location configuration */ static ngx_http_module_t ngx_http_diceroll_quickcrab_com_module_ctx = { NULL, /* preconfiguration */ NULL, /* postconfiguration */ NULL, /* create main configuration */ NULL, /* init main configuration */ NULL, /* create server configuration */ NULL, /* merge server configuration */ NULL, /* create location configuration */ NULL /* merge location configuration */ }; /* * The module which binds the context and commands */ ngx_module_t ngx_http_diceroll_quickcrab_com_module = { NGX_MODULE_V1, &ngx_http_diceroll_quickcrab_com_module_ctx, /* module context */ ngx_http_diceroll_quickcrab_com_commands, /* module directives */ NGX_HTTP_MODULE, /* module type */ NULL, /* init master */ NULL, /* init module */ NULL, /* init process */ NULL, /* init thread */ NULL, /* exit thread */ NULL, /* exit process */ NULL, /* exit master */ NGX_MODULE_V1_PADDING }; /* * Main handler function of the module. 
*/ static ngx_int_t ngx_http_diceroll_quickcrab_com_handler(ngx_http_request_t *r){ ngx_int_t rc; size_t content_length_n; ngx_chain_t * out; content_length_n = 0; out = &ngx_html_chain_buffers[NGX_HTML_ALL]; out->next = NULL; #ifdef NO_PROBLEM out->buf = ngx_pcalloc(r->pool,sizeof(ngx_buf_t)); out->buf->pos = ngx_html_strings[NGX_HTML_ALL].data; out->buf->last = ngx_html_strings[NGX_HTML_ALL].data + ngx_html_strings[NGX_HTML_ALL].len; out->buf->memory = 1; out->buf->last_buf = 1; #endif content_length_n += ngx_html_strings[NGX_HTML_ALL].len; /* we response to 'GET' and 'HEAD' requests only */ if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { return NGX_HTTP_NOT_ALLOWED; } /* discard request body, since we don't need it here */ rc = ngx_http_discard_request_body(r); if (rc != NGX_OK) { return rc; } /* set the 'Content-type' header */ r->headers_out.content_type_len = sizeof("text/html") - 1; r->headers_out.content_type.data = (u_char *) "text/html"; /* send the header only, if the request type is http 'HEAD' */ if (r->method == NGX_HTTP_HEAD) { r->headers_out.status = NGX_HTTP_OK; r->headers_out.content_length_n = content_length_n; return ngx_http_send_header(r); } /* set the status line */ r->headers_out.status = NGX_HTTP_OK; r->headers_out.content_length_n = content_length_n; /* send the headers of your response */ rc = ngx_http_send_header(r); if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) { return rc; } ngx_log_debug0(NGX_LOG_DEBUG_HTTP, r->connection->log, 0, "------------------------------------------------"); /* send the buffer chain of your response */ rc = ngx_http_output_filter(r, out); return rc; } /* * Function for the directive diceroll_quickcrab_com , it validates its value * and copies it to a static variable to be printed later */ static char * ngx_http_diceroll_quickcrab_com(ngx_conf_t *cf, ngx_command_t *cmd, void *conf){ char * rc; ngx_http_core_loc_conf_t *clcf; static unsigned already_done = 0; if(!already_done) { rc = 
ngx_html_chain_buffers_init(cf->log); if (rc != NGX_CONF_OK) return rc; already_done = 1; } clcf = ngx_http_conf_get_module_loc_conf(cf, ngx_http_core_module); clcf->handler = ngx_http_diceroll_quickcrab_com_handler; return NGX_CONF_OK; } Here is my nginx.conf: worker_processes 1; error_log logs/debug.log debug; events { worker_connections 4; } http { server_names_hash_max_size 4; server_names_hash_bucket_size 4; server { listen 127.0.0.1:80 default_server; server_name _; return 444; } server { listen 127.0.0.1:80; server_name localhost 127.0.0.1; root /var/www/diceroll.quickcrab.com; location / { diceroll_quickcrab_com; } } } Thank you Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255866,255866#msg-255866 From nginx-forum at nginx.us Tue Dec 30 04:18:48 2014 From: nginx-forum at nginx.us (xdiaod) Date: Mon, 29 Dec 2014 23:18:48 -0500 Subject: http module handler, chain buffer and output_filter In-Reply-To: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org> References: <017735e3e59afbe191ef0b619169fcb7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <38a06267629beb5d2459f040a239d656.NginxMailingListEnglish@forum.nginx.org> Does something is trying to free the buffers directly before freeing the pool in the output filter? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255866,255867#msg-255867 From nginx-forum at nginx.us Tue Dec 30 07:34:45 2014 From: nginx-forum at nginx.us (erankor2) Date: Tue, 30 Dec 2014 02:34:45 -0500 Subject: Serving files from a slow NFS storage In-Reply-To: <1419897228.22960.28.camel@steve-new> References: <1419897228.22960.28.camel@steve-new> Message-ID: Thank you all for your replies. Since all 3 replies suggest some form of caching I'll respond to them together here - The nginx servers that I mentioned in my post do not serve client requests directly, the clients always hit the CDN first (we use mostly Akamai), and the CDN then pulls from these nginx servers. 
In other words, these servers act as the CDN origin. Therefore, hot / popular content is already taken care of - I have no problem there. Since the files we serve are large (video) the CDN isn't caching them for too long (we send caching header of 3 months, and the files usually get cached for a couple of days), so the servers are getting quite a few requests, and these requests hardly repeat themselves. Each server is delivering roughly 1/2TB of data per day, so to get any hits on an NFS cache we'll probably need a very large cache. And even if do that, we'll still have this problem with the non-popular content (e.g. videos that are watched on average once a week) - such a request may hang the process if opening the file takes a long time. Thanks, Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255847,255868#msg-255868 From edward at ehibbert.org.uk Tue Dec 30 09:44:17 2014 From: edward at ehibbert.org.uk (Edward Hibbert) Date: Tue, 30 Dec 2014 09:44:17 +0000 Subject: Setting the SSL protocol used on proxy_pass? Message-ID: I am trying to set up a reverse proxy which handles SSL. This is my first time, so I may be doing something stupid. On the NGINX which is acting as a proxy I get this: SSL_do_handshake() failed (SSL: error:140770FC:SSL routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to upstream, On the NGINX which is upstream I am configured to only accept TLS, because of recent SSL security problems. ssl_protocols TLSv1.2 TLSv1.1 TLSv1; I would guess that the problem here is that NGINX is opening the proxy connection using the wrong SSL protocol. Is there a way to control which protocol it uses for the proxy connection? Thanks for any help, Edward -------------- next part -------------- An HTML attachment was scrubbed... 
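For the upstream-TLS question above: nginx has separate directives controlling the handshake it performs as a TLS client toward the upstream, independent of the ssl_protocols it accepts from browsers. A hedged sketch (the hostname is a placeholder; proxy_ssl_protocols has been available since nginx 1.5.6):

```nginx
# Sketch only: restrict the protocols nginx offers when opening
# the upstream TLS connection, to match a TLS-only backend.
location / {
    proxy_pass          https://backend.example.com;   # placeholder
    proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
}
```

That said, the quoted "SSL23_GET_SERVER_HELLO:unknown protocol" error frequently means the upstream port is not speaking TLS at all (for example, proxying https:// to a plain-HTTP port), which is worth ruling out first.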
URL: From nginx-forum at nginx.us Tue Dec 30 11:58:51 2014 From: nginx-forum at nginx.us (shmulik) Date: Tue, 30 Dec 2014 06:58:51 -0500 Subject: How Nginx behaves with "proxy_bind" and DNS resolver with non matching ip versions between bind ip and resolved ip? In-Reply-To: <20141229164844.GD3656@mdounin.ru> References: <20141229164844.GD3656@mdounin.ru> Message-ID: <5aded18c889ccb93e16281f6a1b259d7.NginxMailingListEnglish@forum.nginx.org> Thank you. So if I understood correctly: When I bind an ipv6 address, and the resolver returns 1 ipv4 address and 1 ipv6 address - if the first attempted address is the ipv4 address, the result will be an error + sending back to the client a "500 Internal Server Error"? In such scenarios, is there any way I can tell Nginx to skip the non-matching ip version? (i.e. in the above example, to skip directly to the resolved ipv6 address). Thanks, Shmulik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255814,255873#msg-255873 From nginx-forum at nginx.us Tue Dec 30 13:17:27 2014 From: nginx-forum at nginx.us (hpatoio) Date: Tue, 30 Dec 2014 08:17:27 -0500 Subject: How to write nginx, NGINX or Nginx ? Message-ID: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> Hello. I'm writing some documentation for a project that uses NGINX. I'm wondering what's the correct way to write nginx. a) NGINX - Always all uppercase b) nginx - Always all lowercase. Even at the beginning of a sentence c) Nginx - Always capitalized Is there an official way? Thanks -- Simone Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255874,255874#msg-255874 From rainer at ultra-secure.de Tue Dec 30 14:22:35 2014 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 30 Dec 2014 15:22:35 +0100 Subject: How to write nginx, NGINX or Nginx ? 
In-Reply-To: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> References: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On 30.12.2014 at 14:17, hpatoio wrote: > > Hello. I'm writing some documentation for a project that uses NGINX. I'm > wondering what's the correct way to write nginx. > > a) NGINX - Always all uppercase > b) nginx - Always all lowercase. Even at the beginning of a sentence > c) Nginx - Always capitalized > > Is there an official way? > > Thanks I wondered the same. Hey, almost to the day two years ago: http://forum.nginx.org/read.php?2,234083,234083#msg-234083 Rainer -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Dec 30 14:39:13 2014 From: nginx-forum at nginx.us (hpatoio) Date: Tue, 30 Dec 2014 09:39:13 -0500 Subject: How to write nginx, NGINX or Nginx ? In-Reply-To: References: Message-ID: <6c9b99263a82181a5840f3943f9af37d.NginxMailingListEnglish@forum.nginx.org> Good. But that question doesn't clarify it. Even if this post makes sense: http://forum.nginx.org/read.php?2,234083,234085#msg-234085 -- Simone Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255874,255878#msg-255878 From patrick at nginx.com Tue Dec 30 14:43:27 2014 From: patrick at nginx.com (Patrick Nommensen) Date: Tue, 30 Dec 2014 06:43:27 -0800 Subject: How to write nginx, NGINX or Nginx ? In-Reply-To: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> References: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> Hi Simone, When it's about the company [1] use "Nginx" and when it's about the software use "NGINX". We'll look to resolve present inconsistencies. [1] http://nginx.com/company/ -- Patrick Nommensen http://nginx.com > On Dec 30, 2014, at 5:17 AM, hpatoio wrote: > > Hello. 
I'm writing some documentation for a project that uses NGINX. I'm > wondering what's the correct way to write nginx. > > a) NGINX - Always all uppercase > b) nginx - Always all lowercase. Even at the beginning of a sentence > c) Nginx - Always capitalized > > Is there an official way? > > Thanks > > -- > Simone > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255874,255874#msg-255874 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From jim at ohlste.in Tue Dec 30 15:01:25 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 30 Dec 2014 10:01:25 -0500 Subject: How to write nginx, NGINX or Nginx ? In-Reply-To: <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> References: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> Message-ID: <54A2BE45.2020708@ohlste.in> Hello, On 12/30/14 9:43 AM, Patrick Nommensen wrote: > Hi Simone, > > When it's about the company [1] use "Nginx" and when it's about the software use "NGINX". > > We'll look to resolve present inconsistencies. > > [1] http://nginx.com/company/ > While I think this is much ado about nothing, I've been using nginx since the 0.6.x line, and subscribing to this list since then as well. Igor has (or had) *always* spelled the software "nginx" [0]. This was the case when he was active on this list as well. [0] http://sysoev.ru/en/ -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From jim at ohlste.in Tue Dec 30 15:04:43 2014 From: jim at ohlste.in (Jim Ohlstein) Date: Tue, 30 Dec 2014 10:04:43 -0500 Subject: How to write nginx, NGINX or Nginx ?
In-Reply-To: <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> References: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> Message-ID: <54A2BF0B.5060606@ohlste.in> Hello, On 12/30/14 9:43 AM, Patrick Nommensen wrote: > Hi Simone, > > When it's about the company [1] use "Nginx" and when it's about the software use "NGINX". > > We'll look to resolve present inconsistencies. > > [1] http://nginx.com/company/ > See also http://nginx.org/ -- Jim Ohlstein "Never argue with a fool, onlookers may not be able to tell the difference." - Mark Twain From lvargas at mifuturofinanciero.com Tue Dec 30 15:33:09 2014 From: lvargas at mifuturofinanciero.com (Luis Vargas) Date: Tue, 30 Dec 2014 12:33:09 -0300 Subject: SSL Message-ID: I have two IIS servers with SSL certificates, and a load balancer with NGINX. I need to know if I must install an SSL certificate on the NGINX machine too? Luis Vargas Catalán SEGURIDAD Y REDES mifuturofinanciero.com Navarra 3720, Las Condes, SCL Tel: *+56 2 22282884* -------------- next part -------------- An HTML attachment was scrubbed... URL: From edigarov at qarea.com Tue Dec 30 17:49:36 2014 From: edigarov at qarea.com (Gregory Edigarov) Date: Tue, 30 Dec 2014 19:49:36 +0200 Subject: nginx removes double slashes In-Reply-To: <20141229160028.GA3656@mdounin.ru> References: <54A13526.2050604@qarea.com> <20141229160028.GA3656@mdounin.ru> Message-ID: <54A2E5B0.1070702@qarea.com> On 12/29/2014 06:00 PM, Maxim Dounin wrote: > Hello!
> > On Mon, Dec 29, 2014 at 01:04:06PM +0200, Gregory Edigarov wrote: > >> Hello everybody, >> >> perhaps I am doing something wrong: >> >> location /njs/ { >> rewrite /njs/(.*)$ /$1 break; >> proxy_redirect off; >> proxy_set_header Host $http_host; >> proxy_set_header X-Real-IP $remote_addr; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header X-Forwarded-Proto $scheme; >> proxy_pass http://localhost:4006; >> >> } >> calling this >> http://[hostname]/facebook/bitly/http%3A%2F%2F[hostname]%2Fbeth-buczynski%2Fdiy-ways-to-stay-warm-in-winter%2F >> >> and I definitely see that to my application it comes like: >> http:/[hostname]/beth-buczynski/diy-ways-to-stay-warm-in-winter/ >> note the single '/', when I need '//' >> >> Is there any way to handle it? > http://nginx.org/r/merge_slashes Thank you. Just an unlucky name for the option ))) I thought it was "preserve_slashes" and didn't find it. From reallfqq-nginx at yahoo.fr Tue Dec 30 18:53:28 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 30 Dec 2014 19:53:28 +0100 Subject: How to write nginx, NGINX or Nginx ? In-Reply-To: <54A2BF0B.5060606@ohlste.in> References: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> <54A2BF0B.5060606@ohlste.in> Message-ID: It seems the original and preferred way to spell it is 'nginx', the one coming from Igor. I am still wondering about capitalizing the name, but since it is to me a personal name, I do not apply rules that would normally affect common names. Thus, IMHO, I would use 'nginx' wherever it is used, with no capital whatsoever. I saw some nginx company-related stuff spelled NGINX, but that is ugly and almost always marketing-related resources. Never trust sales(wo)men to best know the product they sell. ;o) --- *B.
R.* On Tue, Dec 30, 2014 at 4:04 PM, Jim Ohlstein wrote: > Hello, > > On 12/30/14 9:43 AM, Patrick Nommensen wrote: > >> Hi Simone, >> >> When it's about the company [1] use "Nginx" and when it's about the >> software use "NGINX". >> >> We'll look to resolve present inconsistencies. >> >> [1] http://nginx.com/company/ >> >> > See also http://nginx.org/ > > -- > Jim Ohlstein > > > "Never argue with a fool, onlookers may not be able to tell the > difference." - Mark Twain > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Tue Dec 30 19:10:38 2014 From: rainer at ultra-secure.de (Rainer Duffner) Date: Tue, 30 Dec 2014 20:10:38 +0100 Subject: How to write nginx, NGINX or Nginx ? In-Reply-To: References: <9697761116b9b4bbacf0d23ea3134fc2.NginxMailingListEnglish@forum.nginx.org> <06499756-B897-4591-8F21-B58840D6FE07@nginx.com> <54A2BF0B.5060606@ohlste.in> Message-ID: <93AABDB9-2944-484A-BF52-852441CB7BE3@ultra-secure.de> > On 30.12.2014, at 19:53, B.R. wrote: > > It seems the original and preferred way to spell it is 'nginx', the one coming from Igor. I am still wondering about capitalizing the name, but since it is to me a personal name, I do not apply rules that would normally affect common names. > Thus, IMHO, I would use 'nginx' wherever it is used, with no capital whatsoever. > > I saw some nginx company-related stuff spelled NGINX, but that is ugly and almost always marketing-related resources. Never trust sales(wo)men to best know the product they sell. ;o) It probably also has to do with the transcription of the Cyrillic letters to Latin letters. Also, there seem to be different ways of "capitalization" in American English and Russian, if you look around the web a bit.
(My own knowledge of Russian is best described as "extremely limited, bordering on the nonexistent".) I'm actually glad that Igor and his crew thought about the important things first and didn't waste time or money paying a consultant to come up with a "cool" name (and the accompanying dot-io domain...) BTW: not sure if this has been posted, but in a recent marketing email, I was alerted to this very informative timeline of nginx development: http://nginx.com/wp-content/uploads/2014/11/Infographic_History-of-Nginx_FulI_20141101.png -------------- next part -------------- An HTML attachment was scrubbed... URL: From petros.fraser at gmail.com Tue Dec 30 19:34:26 2014 From: petros.fraser at gmail.com (Peter Fraser) Date: Tue, 30 Dec 2014 14:34:26 -0500 Subject: Error: This server's certificate chain is incomplete. Message-ID: Hi All I managed to get the nginx reverse proxy up and forwarding to my https web server. I think I have missed something though, as a user just let me know that when he tried to access the site he gets a message that the certificate is invalid. I just did a test with ssllabs and noticed that it shows this error: "This server's certificate chain is incomplete." Any ideas on what I have missed? Thanks for the assistance. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stl at wiredrive.com Tue Dec 30 19:38:24 2014 From: stl at wiredrive.com (Scott Larson) Date: Tue, 30 Dec 2014 11:38:24 -0800 Subject: Error: This server's certificate chain is incomplete. In-Reply-To: References: Message-ID: That test should point you in some direction but you're probably missing an intermediate certificate which would normally be provided by the issuer and appended to the file containing your server certificate. *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 8238 ext.
1106310 943 2078 faxwww.wiredrive.com www.twitter.com/wiredrive www.facebook.com/wiredrive * On Tue, Dec 30, 2014 at 11:34 AM, Peter Fraser wrote: > Hi All > I managed to get the nginx reverse proxy up and forwarding to my https web > server. > I think I have missed something though as a user just let me know that > when he tried to access the site he gets a message that the certificate is > invalid. > > I just did a test with ssllabs and noticed that it shows this error: "This > server's certificate chain is incomplete. " > > Any ideas on what I have missed? Thanks for the assistance. > > Regards. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Tue Dec 30 20:01:38 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 31 Dec 2014 09:01:38 +1300 Subject: Serving files from a slow NFS storage In-Reply-To: References: <1419897228.22960.28.camel@steve-new> Message-ID: <1419969698.22960.39.camel@steve-new> On Tue, 2014-12-30 at 02:34 -0500, erankor2 wrote: > Thank you all for your replies. > > Since all 3 replies suggest some form of caching I'll respond to them > together here - > The nginx servers that I mentioned in my post do not serve client requests > directly, the clients always hit the CDN first (we use mostly Akamai), and > the CDN then pulls from these nginx servers. In other words, these servers > act as the CDN origin. Therefore, hot / popular content is already taken > care of - I have no problem there. > Since the files we serve are large (video) the CDN isn't caching them for > too long (we send caching header of 3 months, and the files usually get > cached for a couple of days), so the servers are getting quite a few > requests, and these requests hardly repeat themselves. 
Each server is > delivering roughly 1/2TB of data per day, so to get any hits on an NFS cache > we'll probably need a very large cache. And even if do that, we'll still > have this problem with the non-popular content (e.g. videos that are watched > on average once a week) - such a request may hang the process if opening the > file takes a long time. > > Thanks, > > Eran I'm a bit confused here. Are you saying that the CDN is pulling from NFS? If so, then surely the solution is under your control... deliver all this content from a single server. If the web servers never deliver it, then mount this content via NFS on them so they know it exists, but no more. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From mdounin at mdounin.ru Tue Dec 30 20:35:02 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 30 Dec 2014 23:35:02 +0300 Subject: Error: This server's certificate chain is incomplete. In-Reply-To: References: Message-ID: <20141230203502.GB37213@mdounin.ru> Hello! On Tue, Dec 30, 2014 at 02:34:26PM -0500, Peter Fraser wrote: > Hi All > I managed to get the nginx reverse proxy up and forwarding to my https web > server. > I think I have missed something though as a user just let me know that when > he tried to access the site he gets a message that the certificate is > invalid. > > I just did a test with ssllabs and noticed that it shows this error: "This > server's certificate chain is incomplete. " > > Any ideas on what I have missed? Thanks for the assistance. http://nginx.org/en/docs/http/configuring_https_servers.html#chains -- Maxim Dounin http://nginx.org/ From petros.fraser at gmail.com Tue Dec 30 21:18:03 2014 From: petros.fraser at gmail.com (Peter Fraser) Date: Tue, 30 Dec 2014 16:18:03 -0500 Subject: Error: This server's certificate chain is incomplete. In-Reply-To: References: Message-ID: Thanks for that. 
I found out also that I need to export all the intermediate certs. I used this command below to export them all to a text file. openssl pkcs12 -in .pfx -out outputfile.txt -nodes. Then I manually removed the private key and used the result as the cert. It works now except that the same test result says "Contains Anchor" This is not really a problem from what I have read but I will spend a little time trying to figure how to correct this also. On Tue, Dec 30, 2014 at 2:38 PM, Scott Larson wrote: > That test should point you in some direction but you're probably > missing an intermediate certificate which would normally be provided by the > issuer and appended to the file containing your server certificate. > > > > *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 > 8238 ext. 1106310 943 2078 faxwww.wiredrive.com > www.twitter.com/wiredrive > www.facebook.com/wiredrive > * > > On Tue, Dec 30, 2014 at 11:34 AM, Peter Fraser > wrote: > >> Hi All >> I managed to get the nginx reverse proxy up and forwarding to my https >> web server. >> I think I have missed something though as a user just let me know that >> when he tried to access the site he gets a message that the certificate is >> invalid. >> >> I just did a test with ssllabs and noticed that it shows this error: >> "This server's certificate chain is incomplete. " >> >> Any ideas on what I have missed? Thanks for the assistance. >> >> Regards. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
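[Editor's note: the extraction Peter describes can be done without removing the private key by hand, by letting openssl drop the key itself with `-nokeys`. A minimal sketch — all filenames are hypothetical, and for illustration it first builds a throwaway single-certificate .pfx; a real CA-issued bundle would also contain the intermediates:]

```shell
# Build a throwaway PKCS#12 bundle so the sketch is self-contained; with a
# real CA-issued .pfx you would start at the extraction step below.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demo.example" \
    -keyout demo.key -out demo.crt -days 1 2>/dev/null
openssl pkcs12 -export -inkey demo.key -in demo.crt \
    -passout pass: -out bundle.pfx

# Extraction: -nokeys writes only the certificates (leaf plus any
# intermediates), so no private key has to be stripped out by hand.
openssl pkcs12 -in bundle.pfx -passin pass: -nokeys -out chained.crt
```

nginx's `ssl_certificate` can then point at the resulting file, with the server certificate first and the intermediates after it; the root ("anchor") certificate can be left out.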
URL: From stl at wiredrive.com Tue Dec 30 21:28:12 2014 From: stl at wiredrive.com (Scott Larson) Date: Tue, 30 Dec 2014 13:28:12 -0800 Subject: Error: This server's certificate chain is incomplete. In-Reply-To: References: Message-ID: Contains anchor means that you're sending the CA's root cert along with the intermediates and your own, and that it's generally unnecessary to do so. *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 8238 ext. 1106310 943 2078 faxwww.wiredrive.com www.twitter.com/wiredrive www.facebook.com/wiredrive * On Tue, Dec 30, 2014 at 1:18 PM, Peter Fraser wrote: > Thanks for that. I found out also that I need to export all the > intermediate certs. I used this command below to export them all to a text > file. > openssl pkcs12 -in .pfx -out outputfile.txt -nodes. > > Then I manually removed the private key and used the result as the cert. > It works now except that the same test result says "Contains Anchor" This > is not really a problem from what I have read but I will spend a little > time trying to figure how to correct this also. > > On Tue, Dec 30, 2014 at 2:38 PM, Scott Larson wrote: > >> That test should point you in some direction but you're probably >> missing an intermediate certificate which would normally be provided by the >> issuer and appended to the file containing your server certificate. >> >> >> >> *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 >> 8238 ext. 1106310 943 2078 faxwww.wiredrive.com >> www.twitter.com/wiredrive >> www.facebook.com/wiredrive >> * >> >> On Tue, Dec 30, 2014 at 11:34 AM, Peter Fraser >> wrote: >> >>> Hi All >>> I managed to get the nginx reverse proxy up and forwarding to my https >>> web server. >>> I think I have missed something though as a user just let me know that >>> when he tried to access the site he gets a message that the certificate is >>> invalid. 
>>> >>> I just did a test with ssllabs and noticed that it shows this error: >>> "This server's certificate chain is incomplete. " >>> >>> Any ideas on what I have missed? Thanks for the assistance. >>> >>> Regards. >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From petros.fraser at gmail.com Tue Dec 30 22:13:00 2014 From: petros.fraser at gmail.com (Peter Fraser) Date: Tue, 30 Dec 2014 17:13:00 -0500 Subject: Error: This server's certificate chain is incomplete. In-Reply-To: References: Message-ID: Ok I'm going to try to figure out which it is so I can manually remove it. However I must say that I am a new believer in nginx. Good job on an excellent piece of software. On Tue, Dec 30, 2014 at 4:28 PM, Scott Larson wrote: > Contains anchor means that you're sending the CA's root cert along > with the intermediates and your own, and that it's generally unnecessary to > do so. > > > > *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 > 8238 ext. 1106310 943 2078 faxwww.wiredrive.com > www.twitter.com/wiredrive > www.facebook.com/wiredrive > * > > On Tue, Dec 30, 2014 at 1:18 PM, Peter Fraser > wrote: > >> Thanks for that. I found out also that I need to export all the >> intermediate certs. I used this command below to export them all to a text >> file. >> openssl pkcs12 -in .pfx -out outputfile.txt -nodes. >> >> Then I manually removed the private key and used the result as the cert. 
>> It works now except that the same test result says "Contains Anchor" This >> is not really a problem from what I have read but I will spend a little >> time trying to figure how to correct this also. >> >> On Tue, Dec 30, 2014 at 2:38 PM, Scott Larson wrote: >> >>> That test should point you in some direction but you're probably >>> missing an intermediate certificate which would normally be provided by the >>> issuer and appended to the file containing your server certificate. >>> >>> >>> >>> *__________________Scott LarsonSystems AdministratorWiredrive/LA310 823 >>> 8238 ext. 1106310 943 2078 faxwww.wiredrive.com >>> www.twitter.com/wiredrive >>> www.facebook.com/wiredrive >>> * >>> >>> On Tue, Dec 30, 2014 at 11:34 AM, Peter Fraser >>> wrote: >>> >>>> Hi All >>>> I managed to get the nginx reverse proxy up and forwarding to my https >>>> web server. >>>> I think I have missed something though as a user just let me know that >>>> when he tried to access the site he gets a message that the certificate is >>>> invalid. >>>> >>>> I just did a test with ssllabs and noticed that it shows this error: >>>> "This server's certificate chain is incomplete. " >>>> >>>> Any ideas on what I have missed? Thanks for the assistance. >>>> >>>> Regards. >>>> >>>> _______________________________________________ >>>> nginx mailing list >>>> nginx at nginx.org >>>> http://mailman.nginx.org/mailman/listinfo/nginx >>>> >>> >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Tue Dec 30 23:27:17 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 31 Dec 2014 02:27:17 +0300 Subject: Setting the SSL protocol used on proxy_pass? In-Reply-To: References: Message-ID: <20141230232717.GC37213@mdounin.ru> Hello! On Tue, Dec 30, 2014 at 09:44:17AM +0000, Edward Hibbert wrote: > I am trying to set up a reverse proxy which handles SSL. This is my first > time, so I may be doing something stupid. > > On the NGINX which is acting as a proxy I get this: > > SSL_do_handshake() failed (SSL: error:140770FC:SSL > routines:SSL23_GET_SERVER_HELLO:unknown protocol) while SSL handshaking to > upstream, > > On the NGINX which is upstream I am configured to only accept TLS, because > of recent SSL security problems. > > ssl_protocols TLSv1.2 TLSv1.1 TLSv1; > > I would guess that the problem here is that NGINX is opening the proxy > connection using the wrong SSL protocol. Is there a way to control which > protocol it uses for the proxy connection? There is the "proxy_ssl_protocols" directive to control which protocols are allowed while connecting to upstream HTTPS servers, see http://nginx.org/r/proxy_ssl_protocols for details. By default it allows SSLv3 and above, so it should be fine with the ssl_protocols you configured. The message you are seeing may appear if you've accidentally set "proxy_ssl_protocols SSLv3" though. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Dec 30 23:34:07 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Tue, 30 Dec 2014 18:34:07 -0500 Subject: Redirect to domain TTFB very slow Message-ID: <1b4e9260fc71e68d41707988172da758.NginxMailingListEnglish@forum.nginx.org> Hi On my Nginx server I use a domain "domain.com" and I have all files here: /home/nginxs/domains/mydomain.com/public There I have a folder named "gadgets" with some files in it, and I use a redirect to another domain for this folder.
So if a user types seconddomain.com it goes to the folder gadgets in the first domains folder and get al results..... The problem is that is very slow (first domain is loading super fast !) and then checking i found this : So Time to first byte is very slow 6 seconds :( No idea how can fix this :( And i can't move that folder to the new created account that i redirect as the files inside gadgets are interacting with other files there from main account..... second domain config: server { listen 80; server_name mydomain.com; return 301 $scheme://www.mydomain.com$request_uri; } server { listen 80; server_name blog.mydomain.com; root /home/nginx/domains/firstdomain/public/blog; index index.php; access_log /var/log/nginx/blog.gogadget.gr_access.log; error_log /var/log/nginx/blog.gogadget.gr_error.log; location / { try_files $uri $uri/ /index.html /index.php?$args; } } server { listen 80; server_name www.mydomain.com dev.mydomain.com; root /home/nginx/domains/firstdomain.com/public; index index.php; access_log /var/log/nginx/mydomain.com_access.log; error_log /var/log/nginx/mydomain.com_error.log; location /go { return 301 http://www.mydomain.com/; } location / { try_files $uri $uri/ /index.html /index.php?$args; } location /blog/ { deny all; } error_page 500 502 504 /500.html; location ~* ^.+\.(?:css|cur|js|jpg|jpeg|gif|ico|png|html|xml|zip|rar|mp4|3gp|flv|webm|f4v|ogm)$ { access_log off; expires 30d; tcp_nodelay off; open_file_cache max=3000 inactive=120s; open_file_cache_valid 45s; open_file_cache_min_uses 2; open_file_cache_errors off; } location /api2/ { rewrite ^/api2/(.*)$ /api/public/index.php?route=$1 last; } location ~* /(uploads|public)/ { access_log off; expires 30d; } location ~ /\.ht { deny all; } include /usr/local/nginx/conf/staticfiles.conf; include /usr/local/nginx/conf/php.conf; include /usr/local/nginx/conf/drop.conf; include /usr/local/nginx/conf/block.conf; #include /usr/local/nginx/conf/errorpage.conf; } Any ideas? 
I am using ZendOpcache and Memcached... Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255913,255913#msg-255913 From steve at greengecko.co.nz Tue Dec 30 23:59:59 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 31 Dec 2014 12:59:59 +1300 Subject: Redirect to domain TTFB very slow In-Reply-To: <1b4e9260fc71e68d41707988172da758.NginxMailingListEnglish@forum.nginx.org> References: <1b4e9260fc71e68d41707988172da758.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1419983999.22960.56.camel@steve-new> On Tue, 2014-12-30 at 18:34 -0500, ASTRAPI wrote: > Hi > > On my Nginx server i use a domain "domain.com" and i have all files here: > > /home/nginxs/domains/mydomain.com/public > > > There i have a folder named "gadgets" and i have some files there and i use > a redirect to another domain for this folder. > > So if a user types seconddomain.com it goes to the folder gadgets in the > first domains folder and get al results..... > > The problem is that is very slow (first domain is loading super fast !) and > then checking i found this : > > So Time to first byte is very slow 6 seconds :( > > No idea how can fix this :( > > And i can't move that folder to the new created account that i redirect as > the files inside gadgets are interacting with other files there from main > account..... 
> > second domain config: > > server { > listen 80; > server_name mydomain.com; > return 301 $scheme://www.mydomain.com$request_uri; > } > server { > listen 80; > server_name blog.mydomain.com; > root /home/nginx/domains/firstdomain/public/blog; > index index.php; > access_log /var/log/nginx/blog.gogadget.gr_access.log; > error_log /var/log/nginx/blog.gogadget.gr_error.log; > location / { > try_files $uri $uri/ /index.html /index.php?$args; > } > } > server { > listen 80; > server_name www.mydomain.com dev.mydomain.com; > root /home/nginx/domains/firstdomain.com/public; > index index.php; > access_log /var/log/nginx/mydomain.com_access.log; > error_log /var/log/nginx/mydomain.com_error.log; > location /go { > return 301 http://www.mydomain.com/; > } > location / { > try_files $uri $uri/ /index.html /index.php?$args; > } > location /blog/ { > deny all; > } > error_page 500 502 504 /500.html; > location ~* > ^.+\.(?:css|cur|js|jpg|jpeg|gif|ico|png|html|xml|zip|rar|mp4|3gp|flv|webm|f4v|ogm)$ > { > access_log off; > expires 30d; > tcp_nodelay off; > open_file_cache max=3000 inactive=120s; > open_file_cache_valid 45s; > open_file_cache_min_uses 2; > open_file_cache_errors off; > } > location /api2/ { > rewrite ^/api2/(.*)$ /api/public/index.php?route=$1 last; > } > location ~* /(uploads|public)/ { > access_log off; > expires 30d; > } > location ~ /\.ht { > deny all; > } > include /usr/local/nginx/conf/staticfiles.conf; > include /usr/local/nginx/conf/php.conf; > include /usr/local/nginx/conf/drop.conf; > include /usr/local/nginx/conf/block.conf; > #include /usr/local/nginx/conf/errorpage.conf; > } > > Any ideas? > > I am using ZendOpcache and Memcached... > do you mean mydomain.com forwarding to www.mydomain.com (your anonymising is rather confusing)? This is the real TTFB, rather than the nginx redirect. To address that, you need to tune the server resources, allocating enough to the database and php-fpm. 
Assuming a virtual server, the most important thing to do is to avoid using disks wherever possible, which means allocating plenty of mem to the database ( also look at maybe using Percona and innodb to improve performance ), and caching whatever you can... If not using the latest versions of PHP, look at using APC as well ( start simple! ) and I find that redis performs better than memcached, but a tmpfs backed file system often is better than both ( on a single server platform ). Temporarily using something like new relic may also help you pinpoint your bottlenecks. If I've misunderstood, then maybe it's a DNS problem? Try ensuring all domain names are set in /etc/hosts ( and maybe set them to something in the 127.0.0.0/8 subnet to ensure no extra external traffic ). But it's unlikely to be an nginx problem. hth Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Wed Dec 31 00:29:04 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Tue, 30 Dec 2014 19:29:04 -0500 Subject: Redirect to domain TTFB very slow In-Reply-To: <1419983999.22960.56.camel@steve-new> References: <1419983999.22960.56.camel@steve-new> Message-ID: Hi Thanks for your reply :) "do you mean mydomain.com forwarding to www.mydomain.com" No! I have two domains, let's say domain1.com and domain2.com. Both domains' files are in the public folder of domain1, and domain2 is using a folder there named "gadgets". I have set up one account for each domain on nginx, but on domain2 I use a redirect to the "gadgets" folder.... Yes, it seems that there is a problem with HTTP request or DNS overhead, but I have no idea how to solve it :( In /etc/hosts I have only the first domain. Do you think this is the problem, and what exactly must I add there for the second domain? 127.0.0.1 localhost localhost.localdomain 37.187.xxx.xxx server.domain1.com server Both domains are using the same IP!
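[Editor's note: Steve's /etc/hosts suggestion, sketched with this thread's placeholder names — the masked IP is the one from the post above and purely illustrative:]

```
# /etc/hosts on the web server: list every vhost name nginx/PHP might
# resolve, so lookups stay local (names and IP are thread placeholders)
127.0.0.1       localhost localhost.localdomain
37.187.xxx.xxx  server.domain1.com server domain1.com www.domain1.com domain2.com www.domain2.com
```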
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255913,255915#msg-255915 From nginx-forum at nginx.us Wed Dec 31 01:20:55 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Tue, 30 Dec 2014 20:20:55 -0500 Subject: Redirect to domain TTFB very slow In-Reply-To: References: <1419983999.22960.56.camel@steve-new> Message-ID: <6c98b5eb92c5151e90647bcb62dbb548.NginxMailingListEnglish@forum.nginx.org> http://i59.tinypic.com/20jlrpv.jpg Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255913,255916#msg-255916 From steve at greengecko.co.nz Wed Dec 31 01:32:02 2014 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 31 Dec 2014 14:32:02 +1300 Subject: Redirect to domain TTFB very slow In-Reply-To: <6c98b5eb92c5151e90647bcb62dbb548.NginxMailingListEnglish@forum.nginx.org> References: <1419983999.22960.56.camel@steve-new> <6c98b5eb92c5151e90647bcb62dbb548.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1419989522.22960.57.camel@steve-new> On Tue, 2014-12-30 at 20:20 -0500, ASTRAPI wrote: > http://i59.tinypic.com/20jlrpv.jpg > Looks like it's server side processing problems then. -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From oscaretu at gmail.com Wed Dec 31 06:46:06 2014 From: oscaretu at gmail.com (oscaretu .) Date: Wed, 31 Dec 2014 07:46:06 +0100 Subject: Redirect to domain TTFB very slow In-Reply-To: <1b4e9260fc71e68d41707988172da758.NginxMailingListEnglish@forum.nginx.org> References: <1b4e9260fc71e68d41707988172da758.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello Can you use tools like *strace* or *sysdig* to see where the time is being lost? 
Greetings, Oscar On Wed, Dec 31, 2014 at 12:34 AM, ASTRAPI wrote: > Hi > > On my Nginx server i use a domain "domain.com" and i have all files here: > > /home/nginxs/domains/mydomain.com/public > > > There i have a folder named "gadgets" and i have some files there and i use > a redirect to another domain for this folder. > > So if a user types seconddomain.com it goes to the folder gadgets in the > first domains folder and get al results..... > > The problem is that is very slow (first domain is loading super fast !) and > then checking i found this : > > So Time to first byte is very slow 6 seconds :( > > No idea how can fix this :( > > And i can't move that folder to the new created account that i redirect as > the files inside gadgets are interacting with other files there from main > account..... > > second domain config: > > server { > listen 80; > server_name mydomain.com; > return 301 $scheme://www.mydomain.com$request_uri; > } > server { > listen 80; > server_name blog.mydomain.com; > root /home/nginx/domains/firstdomain/public/blog; > index index.php; > access_log /var/log/nginx/blog.gogadget.gr_access.log; > error_log /var/log/nginx/blog.gogadget.gr_error.log; > location / { > try_files $uri $uri/ /index.html /index.php?$args; > } > } > server { > listen 80; > server_name www.mydomain.com dev.mydomain.com; > root /home/nginx/domains/firstdomain.com/public; > index index.php; > access_log /var/log/nginx/mydomain.com_access.log; > error_log /var/log/nginx/mydomain.com_error.log; > location /go { > return 301 http://www.mydomain.com/; > } > location / { > try_files $uri $uri/ /index.html /index.php?$args; > } > location /blog/ { > deny all; > } > error_page 500 502 504 /500.html; > location ~* > > ^.+\.(?:css|cur|js|jpg|jpeg|gif|ico|png|html|xml|zip|rar|mp4|3gp|flv|webm|f4v|ogm)$ > { > access_log off; > expires 30d; > tcp_nodelay off; > open_file_cache max=3000 inactive=120s; > open_file_cache_valid 45s; > open_file_cache_min_uses 2; > 
open_file_cache_errors off; > } > location /api2/ { > rewrite ^/api2/(.*)$ /api/public/index.php?route=$1 last; > } > location ~* /(uploads|public)/ { > access_log off; > expires 30d; > } > location ~ /\.ht { > deny all; > } > include /usr/local/nginx/conf/staticfiles.conf; > include /usr/local/nginx/conf/php.conf; > include /usr/local/nginx/conf/drop.conf; > include /usr/local/nginx/conf/block.conf; > #include /usr/local/nginx/conf/errorpage.conf; > } > > Any ideas? > > I am using ZendOpcache and Memcached... > > Thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,255913,255913#msg-255913 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Dec 31 13:02:33 2014 From: nginx-forum at nginx.us (ASTRAPI) Date: Wed, 31 Dec 2014 08:02:33 -0500 Subject: Redirect to domain TTFB very slow In-Reply-To: References: Message-ID: <5b9c8c081e40b6e3a47f352848f4412b.NginxMailingListEnglish@forum.nginx.org> You can see it I think here: http://i59.tinypic.com/20jlrpv.jpg Or not? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255913,255921#msg-255921
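[Editor's note: since the timing screenshots in this thread point at the redirect hop rather than nginx itself, one option is to serve the second domain straight out of the shared folder with its own server block instead of redirecting. A hypothetical sketch reusing the docroot path from the config posted earlier in the thread; domain names are the thread's placeholders:]

```nginx
# Sketch only - names and paths follow the thread's placeholders.
# Serve domain2.com directly from the "gadgets" folder inside domain1's
# docroot instead of issuing a redirect, removing one client round trip.
server {
    listen 80;
    server_name domain2.com www.domain2.com;
    root /home/nginx/domains/firstdomain.com/public/gadgets;
    index index.php;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }
}
```

A PHP handler include (like the thread's /usr/local/nginx/conf/php.conf) would still be needed for index.php to be executed rather than downloaded.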