From lists-nginx at swsystem.co.uk Sun Feb 1 00:29:20 2015 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Sun, 01 Feb 2015 00:29:20 +0000 Subject: Will this work, is it the best way? In-Reply-To: References: Message-ID: <54CD7360.5020907@swsystem.co.uk> To add a bit more information. Server1 is hosted and has both IPv4 and IPv6. Server2 is on the end of an ADSL line within an RFC 1918 network where the external port 80 points to another server, which cannot be changed. Another complication is that internet access from some client locations is heavily restricted, so using an alternative port may not work. The LAN has full IPv6 capability and there's some video streaming done on server2, so direct LAN access is preferred. I managed to get it working with the below config on server2. server { listen [::]:80; server_name ~^(?<subdomain>\w+)\.example\.com$; set $upstream "http://$subdomain"; location / { proxy_set_header Host $host; proxy_pass $upstream; } } upstream netflow { server 127.0.0.1:8080; } upstream ntop { server 127.0.0.1:3000; } upstream wifi { server 127.0.0.1:8443; } I was initially tripped up by the proxy_pass line: I did not get the desired effect when using 'proxy_pass http://$subdomain;' Steve. On 31/01/15 16:21, Lloyd Chang wrote: > Hello Steve, > > Best answer is to try and see if it meets your expectations; thanks > > While reading your snippet, my initial questions are: Why 2 > servers? Why not simplify? > > In your proposal: server1 listens on TCP port(s) on public IPv4 > and IPv6 to proxy_pass to server2, then server2 listens > on public IPv6 and IPv4 to proxy_pass to the subdomain, with upstream > (perhaps for load balance and/or failover?) As you agree, this is > slightly complicated > > Why not simplify? Reconfigure DNS for cname-server1 to > server2, for IPv4 and IPv6 > > In your snippet, server2 supports IPv4 and IPv6 if you expect it to > upstream via private IPv4 127.0.0.1:[?] > >
I don't fully understand why server2 upstream isn't IPv6 ::1:[?] > considering your primary intent for server2 is IPv6 usage > > Perhaps you meant upstream localhost:[?] to try both IPv4 and IPv6? > Thanks > > Cheers, > Lloyd > > On Friday, January 30, 2015, Steve Wilson > wrote: > > Hi, > > Slightly complicated setup with 2 nginx servers. > > server1 has a public IPv4 address using proxy_pass to server2 over > IPv6; server2 only has a public IPv6 address, and this then has various upstreams > for each subdomain. > > IPv6-capable browsers connect directly to server2; those with only > IPv4 will connect via server1. > > I'm currently considering something like the below config. > > > server1 - proxy all subdomain requests to upstream ipv6 server: > > http { > server_name *.example.com; > location / { > proxy_pass http://[fe80::1337]; > } > } > > server2: > > http { > server_name ~^(?<subdomain>\w+)\.example\.com$; > location / { > proxy_pass http://$subdomain; > } > > upstream subdomain1 { > server 127.0.0.1:1234; > } > } > > The theory here is that each subdomain and upstream would match, > meaning that when adding another upstream it would just need the > upstream{} block configuring and would automatically work. > > I realise there's DNS stuff etc. but that's out of scope for this > list and I can deal with that. > > Does this seem sound? It's not going to see major usage but > hopefully this will reduce work when adding new upstreams. > > If you've a better way to achieve this please let me know. > > Steve. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
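The subdomain-to-upstream pattern Steve settled on in this thread can be sketched in one place. This is a minimal sketch only: the "grafana" upstream name and its port are hypothetical, and it also illustrates Lloyd's question about pointing an upstream at the IPv6 loopback ([::1]) instead of 127.0.0.1.

```nginx
# Sketch of the subdomain -> upstream pattern from the thread above.
# The first label of the Host name is captured as $subdomain and used
# to select the upstream{} group of the same name. Per Steve's report,
# the two-step "set" + variable proxy_pass form is what worked for him.
server {
    listen [::]:80;
    server_name ~^(?<subdomain>\w+)\.example\.com$;

    set $upstream "http://$subdomain";

    location / {
        proxy_set_header Host $host;
        proxy_pass $upstream;
    }
}

# Adding a service is then just one more upstream{} block; an upstream
# server may also be an IPv6 loopback address, written in brackets.
upstream grafana {
    server [::1]:3030;   # hypothetical service on the IPv6 loopback
}
```

With this in place, a request for grafana.example.com would be proxied to [::1]:3030 without touching the server{} block.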
URL: From nginx-forum at nginx.us Sun Feb 1 04:25:15 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Sat, 31 Jan 2015 23:25:15 -0500 Subject: buffering / uploading large files Message-ID: <8af2bb945788506581c665875d829b5c.NginxMailingListEnglish@forum.nginx.org> Hi, how can I tell nginx not to buffer clients' requests? I need this capability to upload files larger than nginx's max buffering size. I got an nginx unknown directive error when I tried the fastcgi_request_buffering directive. Is the directive supported, or am I missing a module in my nginx build? I am running nginx 1.7.9. Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256378#msg-256378 From kurt at x64architecture.com Sun Feb 1 06:01:20 2015 From: kurt at x64architecture.com (Kurt Cancemi) Date: Sun, 1 Feb 2015 01:01:20 -0500 Subject: buffering / uploading large files In-Reply-To: <8af2bb945788506581c665875d829b5c.NginxMailingListEnglish@forum.nginx.org> References: <8af2bb945788506581c665875d829b5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: It's a planned feature (see http://trac.nginx.org/nginx/roadmap), but it has no ETA. Kurt Cancemi https://www.x64architecture.com > On Jan 31, 2015, at 11:25 PM, nginxuser100 wrote: > > Hi, how can I tell nginx not to buffer clients' requests? I need this > capability to upload files larger than nginx's max buffering size. I got > an nginx unknown directive error when I tried the fastcgi_request_buffering > directive. Is the directive supported, or am I missing a module in my nginx > build? I am running nginx 1.7.9. Thank you!
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256378#msg-256378 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Feb 1 09:54:23 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Sun, 01 Feb 2015 04:54:23 -0500 Subject: buffering / uploading large files In-Reply-To: References: Message-ID: Thanks Kurt. In the meantime, is there a way to access the patch? I was not able to access the link to a patch mentioned in this email thread http://trac.nginx.org/nginx/ticket/251 Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256380#msg-256380 From nginx-forum at nginx.us Sun Feb 1 15:56:37 2015 From: nginx-forum at nginx.us (173279834462) Date: Sun, 01 Feb 2015 10:56:37 -0500 Subject: SSL3_CTX_CTRL:called a function you should not call Message-ID: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> nginx 1.6.2 + libressl 2.1.3 >tail -f [...]/port-443/*.log ==> stderr.log <== 2015/02/01 01:35:34 [alert] 15134#0: worker process 15139 exited on signal 11 2015/02/01 01:35:34 [alert] 15134#0: shared memory zone "SSL" was locked by 15139 2015/02/01 01:35:42 [alert] 15134#0: worker process 15138 exited on signal 11 2015/02/01 01:35:42 [alert] 15134#0: shared memory zone "SSL" was locked by 15138 2015/02/01 01:35:49 [alert] 15134#0: worker process 15140 exited on signal 11 2015/02/01 01:35:49 [alert] 15134#0: shared memory zone "SSL" was locked by 15140 2015/02/01 01:36:20 [alert] 15134#0: worker process 15584 exited on signal 11 2015/02/01 01:36:20 [alert] 15134#0: shared memory zone "SSL" was locked by 15584 2015/02/01 01:36:27 [alert] 15134#0: worker process 15586 exited on signal 11 2015/02/01 01:36:27 [alert] 15134#0: shared memory zone "SSL" was locked by 15586 2015/02/01 01:36:34 [alert] 15134#0: worker process 15585 exited on signal 11 2015/02/01 01:36:34 [alert] 15134#0: 
shared memory zone "SSL" was locked by 15585 >tail -f [...]/vhost_123/port-443/*.log ==> stderr.log <== 2015/02/01 01:36:13 [alert] 15584#0: *54 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443 2015/02/01 01:36:20 [alert] 15586#0: *55 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443 2015/02/01 01:36:27 [alert] 15585#0: *56 ignoring stale global SSL error (SSL: error:14085042:SSL routines:SSL3_CTX_CTRL:called a function you should not call) while SSL handshaking, client: 127.0.0.1, server: 0.0.0.0:443 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,256381#msg-256381 From nginx-forum at nginx.us Sun Feb 1 16:05:49 2015 From: nginx-forum at nginx.us (173279834462) Date: Sun, 01 Feb 2015 11:05:49 -0500 Subject: patch to src/event/ngx_event_openssl.c (nginx 1.6.2) Message-ID: <8869a6a9b36c1882b1c0e791c9246ca6.NginxMailingListEnglish@forum.nginx.org> nginx-1.6.2 >make [...] src/event/ngx_event_openssl.c:2520:9: error: implicit declaration of function 'RAND_pseudo_bytes' is invalid in C99 [-Werror,-Wimplicit-function-declaration] RAND_pseudo_bytes(iv, 16); ^ 1 error generated. patch: perl -i.bak -0p -e 's|(^#include ).*(typedef struct)|$1\n#include \n\nint RAND_bytes\(unsigned char \*buf, int num\);\nint RAND_pseudo_bytes\(unsigned char \*buf, int num\);\n\n$2|ms' ./src/event/ngx_event_openssl.c; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256382,256382#msg-256382 From nginx-forum at nginx.us Sun Feb 1 16:10:10 2015 From: nginx-forum at nginx.us (173279834462) Date: Sun, 01 Feb 2015 11:10:10 -0500 Subject: EC_GOST_2012_Test (warning) Message-ID: <4d3a365e9d9079ff482ad66c00fd633b.NginxMailingListEnglish@forum.nginx.org> nginx-1.6.2 >make [...] 
ec/ec_curve.c:2918:2: warning: unused variable '_EC_GOST_2012_Test' [-Wunused-const-variable] _EC_GOST_2012_Test = { ^ 1 warning generated. Perhaps its defining block is best moved to the 1.7 branch. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256383,256383#msg-256383 From nginx-forum at nginx.us Sun Feb 1 16:32:51 2015 From: nginx-forum at nginx.us (173279834462) Date: Sun, 01 Feb 2015 11:32:51 -0500 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> References: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> Message-ID: <35913bf3c1b160d635004fd020ff3ff7.NginxMailingListEnglish@forum.nginx.org> "no OpenSSL types or functions are exposed." http://www.openbsd.org/papers/eurobsdcon2014-libressl.html Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,256384#msg-256384 From kurt at x64architecture.com Sun Feb 1 17:48:57 2015 From: kurt at x64architecture.com (Kurt Cancemi) Date: Sun, 1 Feb 2015 12:48:57 -0500 Subject: buffering / uploading large files In-Reply-To: References: Message-ID: I have put together a patch from the changes in tengine. I haven't tested it, but it should work. The patch applies against nginx 1.7.9. Here are the docs on the options: http://tengine.taobao.org/document/http_core.html $ wget http://nginx.org/download/nginx-1.7.9.tar.gz $ tar -xzvf nginx-1.7.9.tar.gz $ wget https://raw.githubusercontent.com/x64architecture/ngx_nonbuffered/master/nginx-1.7.9.patch $ patch -p0 < nginx-1.7.9.patch --- Kurt Cancemi https://www.x64architecture.com On Sun, Feb 1, 2015 at 4:54 AM, nginxuser100 wrote: > Thanks Kurt. > In the meantime, is there a way to access the patch? I was not able to > access the link to a patch mentioned in this email thread > http://trac.nginx.org/nginx/ticket/251 > Thanks.
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256380#msg-256380 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sun Feb 1 19:04:21 2015 From: nginx-forum at nginx.us (rafaelr) Date: Sun, 01 Feb 2015 14:04:21 -0500 Subject: Slow downloads over SSL Message-ID: <1c14d29058f047441bb3d80b38d2b3f2.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to find answers to a problem that I'm currently experiencing on all my servers. Downloads offered over HTTPS are at least 4 times slower than those delivered over HTTP. All these servers are running nginx/1.6.2. Here is my nginx.conf in case someone has experienced something similar and could give me a hint. By the way, when I say 4x slower I'm being optimistic... I can download 4-5MB/s over HTTP, while HTTPS downloads are 600-700KB/s at the fastest I've seen. user www-data; worker_processes 2; pid /run/nginx.pid; worker_rlimit_nofile 4096; events { worker_connections 1024; multi_accept on; use epoll; } http { # SSL Configuration ################### ssl_buffer_size 8k; ssl_session_cache shared:SSL_CACHE:20m; ssl_session_timeout 4h; ssl_session_tickets on; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM; ssl_prefer_server_ciphers on; # Custom Settings ################# open_file_cache max=10000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; charset UTF-8; client_body_buffer_size 128K; client_header_buffer_size 1k; client_max_body_size 25m; large_client_header_buffers 4 8k; fastcgi_buffers 16 16k; fastcgi_buffer_size 32k; fastcgi_read_timeout 120s; client_body_timeout 20; client_header_timeout 20; keepalive_timeout 25; send_timeout 20; reset_timedout_connection on; # Basic Settings ################ sendfile on; tcp_nopush on; tcp_nodelay on; types_hash_max_size 2048; server_tokens
off; server_names_hash_bucket_size 64; server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; # Logging Settings ################## access_log off; error_log /var/log/nginx/error.log; # Gzip Settings ############### gzip on; #gzip_disable "msie6"; gzip_disable "MSIE [1-6]\.(?!.*SV1)"; gzip_vary on; gzip_proxied any; gzip_comp_level 5; gzip_buffers 16 8k; gzip_http_version 1.1; gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/javascript; # Virtual Host Configs ###################### include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256386,256386#msg-256386 From reallfqq-nginx at yahoo.fr Sun Feb 1 20:44:15 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 1 Feb 2015 21:44:15 +0100 Subject: Slow downloads over SSL In-Reply-To: <1c14d29058f047441bb3d80b38d2b3f2.NginxMailingListEnglish@forum.nginx.org> References: <1c14d29058f047441bb3d80b38d2b3f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Your config snippet does not say a thing about how your server(s) handle HTTP and HTTPS. Do they serve the same content the same way? Where are performance details (including network trace)? --- *B. R.* On Sun, Feb 1, 2015 at 8:04 PM, rafaelr wrote: > Hi, > > I'm trying to find answers to a problem that I'm currently experiencing in > all my servers. Downloads offered over HTTPS are at least 4 times slower > than those delivered over HTTP. All these servers are running nginx/1.6.2. > Here is my nginx.conf in case someone have experienced something similar > and > could give me a hint. By the way, when I say 4 x slower I'm being > optimistic... I can download 4-5MB/s over HTTP while https download are > 600-700kb/s the fastest I've seen. 
> > user www-data; > worker_processes 2; > pid /run/nginx.pid; > worker_rlimit_nofile 4096; > > events { > worker_connections 1024; > multi_accept on; > use epoll; > } > > http { > > # SSL Configuration > ################### > ssl_buffer_size 8k; > ssl_session_cache shared:SSL_CACHE:20m; > ssl_session_timeout 4h; > ssl_session_tickets on; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM; > ssl_prefer_server_ciphers on; > > > # Custom Settings > ################# > > open_file_cache max=10000 inactive=20s; > open_file_cache_valid 30s; > open_file_cache_min_uses 2; > open_file_cache_errors on; > charset UTF-8; > > client_body_buffer_size 128K; > client_header_buffer_size 1k; > client_max_body_size 25m; > large_client_header_buffers 4 8k; > > fastcgi_buffers 16 16k; > fastcgi_buffer_size 32k; > fastcgi_read_timeout 120s; > > client_body_timeout 20; > client_header_timeout 20; > keepalive_timeout 25; > send_timeout 20; > reset_timedout_connection on; > > > # Basic Settings > ################ > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > types_hash_max_size 2048; > server_tokens off; > > server_names_hash_bucket_size 64; > server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > > # Logging Settings > ################## > > access_log off; > error_log /var/log/nginx/error.log; > > > # Gzip Settings > ############### > > gzip on; > #gzip_disable "msie6"; > gzip_disable "MSIE [1-6]\.(?!.*SV1)"; > gzip_vary on; > gzip_proxied any; > gzip_comp_level 5; > gzip_buffers 16 8k; > gzip_http_version 1.1; > gzip_types text/plain text/css application/json > application/x-javascript > text/xml application/xml application/xml+rss text/javascript > application/javascript; > > > # Virtual Host Configs > ###################### > > include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; > } > > Posted at Nginx Forum: > 
http://forum.nginx.org/read.php?2,256386,256386#msg-256386 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wplatnick at gmail.com Sun Feb 1 23:47:15 2015 From: wplatnick at gmail.com (Will Platnick) Date: Sun, 1 Feb 2015 18:47:15 -0500 Subject: Weird nginx SSL connection issues Message-ID: OK, I have an incredibly weird nginx connection issue. I have a cluster of boxes that are responsible for terminating SSL requests and passing them to a local haproxy instance for further routing. I have corosync/pacemaker set up to manage the IP addresses and fail over instances if there's an issue. This server had been running fine for a long time, but we recently had to reboot because of the GHOST stuff. Before we did that, we did an apt-get upgrade to get to the latest Debian Wheezy packages, including a new nginx (1.6.2), openssl, kernel, and just about everything else. After that happened, we started seeing connection issues to the nginx that does SSL termination. When it was happening, about 50% of our requests were timing out (iOS/Android clients). I was testing manually using curl when it was happening, and we were seeing huge fluctuations in the time it takes to connect. I saw a lot of connections just timing out completely, in combination with connections taking 1s, 3s, 15s, 30s, etc. When this issue was happening to nginx, haproxy on the same box was unaffected, tested by curling every second from a box close to it, logging the results and verifying them. So, it seemed to just be SSL with nginx. Now that our peak load is down, it's not as big an issue, but we are still seeing connection issues when I curl, just more like 1-3s typically, just not as many. Since we've had some time to experiment, I've gathered more information that makes no sense to me.
Almost all the traffic was set up to go to the address managed by corosync. When I set up my curl tests to run every second, I see the timeouts. So, I tried something. I bound the main IP address of the NIC to nginx, reloaded, and redid the same test, but pointed curl at the main IP address. As soon as I did that, my curl tests never saw a single issue: the connect phase never took more than 2ms, with no timeouts. So, I started thinking it was the corosync IP, so I sent all our traffic to the main NIC IP address that had just tested fine, and once the normal traffic levels switched over to the main NIC, I started seeing curl timeouts now that it had traffic. So, I then started curling the IP from corosync that used to be primary, and now IT has no connection issues. So, I have connection issues to nginx, but only on the IP address that takes the traffic. nginx on a different IP on the same NIC is fine. haproxy on the same NIC is fine. What the heck? Struggling to think of anything I could tweak. This doesn't make sense, but I have triple-checked my info, and it's legit. -------------- next part -------------- An HTML attachment was scrubbed... URL: From admin at grails.asia Mon Feb 2 01:50:57 2015 From: admin at grails.asia (jtan) Date: Mon, 2 Feb 2015 09:50:57 +0800 Subject: Slow downloads over SSL In-Reply-To: <1c14d29058f047441bb3d80b38d2b3f2.NginxMailingListEnglish@forum.nginx.org> References: <1c14d29058f047441bb3d80b38d2b3f2.NginxMailingListEnglish@forum.nginx.org> Message-ID: Which algorithm do you use? On Mon, Feb 2, 2015 at 3:04 AM, rafaelr wrote: > Hi, > > I'm trying to find answers to a problem that I'm currently experiencing in > all my servers. Downloads offered over HTTPS are at least 4 times slower > than those delivered over HTTP. All these servers are running nginx/1.6.2. > Here is my nginx.conf in case someone have experienced something similar > and > could give me a hint. By the way, when I say 4 x slower I'm being > optimistic...
I can download 4-5MB/s over HTTP while https download are > 600-700kb/s the fastest I've seen. > > user www-data; > worker_processes 2; > pid /run/nginx.pid; > worker_rlimit_nofile 4096; > > events { > worker_connections 1024; > multi_accept on; > use epoll; > } > > http { > > # SSL Configuration > ################### > ssl_buffer_size 8k; > ssl_session_cache shared:SSL_CACHE:20m; > ssl_session_timeout 4h; > ssl_session_tickets on; > ssl_protocols TLSv1 TLSv1.1 TLSv1.2; > ssl_ciphers ALL:!ADH:!EXP:!LOW:!RC2:!3DES:!SEED:!RC4:+HIGH:+MEDIUM; > ssl_prefer_server_ciphers on; > > > # Custom Settings > ################# > > open_file_cache max=10000 inactive=20s; > open_file_cache_valid 30s; > open_file_cache_min_uses 2; > open_file_cache_errors on; > charset UTF-8; > > client_body_buffer_size 128K; > client_header_buffer_size 1k; > client_max_body_size 25m; > large_client_header_buffers 4 8k; > > fastcgi_buffers 16 16k; > fastcgi_buffer_size 32k; > fastcgi_read_timeout 120s; > > client_body_timeout 20; > client_header_timeout 20; > keepalive_timeout 25; > send_timeout 20; > reset_timedout_connection on; > > > # Basic Settings > ################ > > sendfile on; > tcp_nopush on; > tcp_nodelay on; > types_hash_max_size 2048; > server_tokens off; > > server_names_hash_bucket_size 64; > server_name_in_redirect off; > > include /etc/nginx/mime.types; > default_type application/octet-stream; > > > # Logging Settings > ################## > > access_log off; > error_log /var/log/nginx/error.log; > > > # Gzip Settings > ############### > > gzip on; > #gzip_disable "msie6"; > gzip_disable "MSIE [1-6]\.(?!.*SV1)"; > gzip_vary on; > gzip_proxied any; > gzip_comp_level 5; > gzip_buffers 16 8k; > gzip_http_version 1.1; > gzip_types text/plain text/css application/json > application/x-javascript > text/xml application/xml application/xml+rss text/javascript > application/javascript; > > > # Virtual Host Configs > ###################### > > include /etc/nginx/conf.d/*.conf; > include 
/etc/nginx/sites-enabled/*; > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256386,256386#msg-256386 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Freelance Grails and Java developer -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 2 06:35:58 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Mon, 02 Feb 2015 01:35:58 -0500 Subject: buffering / uploading large files In-Reply-To: References: Message-ID: Thanks Kurt. The patch compiled and got installed fine. I no longer get an unknown directive error msg. However, the client's POST request of 1.5M of data still gives me this error "413 Request Entity Too Large" even though I added "fastcgi_request_buffering off;" location / { include fastcgi_params; fastcgi_request_buffering off; fastcgi_pass 127.0.0.1:9000; } Here was my test: curl -X POST -T testfile -v 'http://localhost:80/' > POST /testfile HTTP/1.1 > User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Host: localhost > Accept: */* > Content-Length: 1474560 > Expect: 100-continue > < HTTP/1.1 413 Request Entity Too Large < Server: nginx/1.7.9 < Date: Sun, 01 Feb 2015 20:04:25 GMT < Content-Type: text/html < Content-Length: 198 < Connection: close Has anyone tried the fastcgi_request_buffering ... or am I missing something? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256390#msg-256390 From nginx-forum at nginx.us Mon Feb 2 13:08:09 2015 From: nginx-forum at nginx.us (tigran.bayburtsyan) Date: Mon, 02 Feb 2015 08:08:09 -0500 Subject: How to Handle "data sent" or "connection closed" event in Module ? Message-ID: <104d786d1d06993e2bc165381f529002.NginxMailingListEnglish@forum.nginx.org> Hi. 
I'm developing an Nginx module where I need to run a function when all data has been sent to the client, or when the client has closed the connection. For example, I have ngx_http_finalize_request(r, ngx_http_output_filter(r, out_chain)); where out_chain contains over 700KB of data. I can't find where to add a function to handle the event that all 700KB have been sent to the client, or that the client closed the connection. As I understand it, Nginx does not send all that 700KB of data at once; it will take several event-loop iterations to be sent. So is there any function or event for handling a "data sent" event? Thanks. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256393,256393#msg-256393 From reallfqq-nginx at yahoo.fr Mon Feb 2 17:46:44 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 2 Feb 2015 18:46:44 +0100 Subject: buffering / uploading large files In-Reply-To: References: Message-ID: http://nginx.org/en/docs/http/ngx_http_core_module.html#client_max_body_size --- *B. R.* On Mon, Feb 2, 2015 at 7:35 AM, nginxuser100 wrote: > Thanks Kurt. > > The patch compiled and got installed fine. I no longer get an unknown > directive error msg. However, the client's POST request of 1.5M of data > still gives me this error "413 Request Entity Too Large" > > even though I added "fastcgi_request_buffering off;" > > location / { > include fastcgi_params; > fastcgi_request_buffering off; > fastcgi_pass 127.0.0.1:9000; > } > > Here was my test: > curl -X POST -T testfile -v 'http://localhost:80/' > > > POST /testfile HTTP/1.1 > > User-Agent: curl/7.19.7 (i386-redhat-linux-gnu) libcurl/7.19.7 > NSS/3.16.2.3 Basic ECC zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > > Host: localhost > > Accept: */* > > Content-Length: 1474560 > > Expect: 100-continue > > > < HTTP/1.1 413 Request Entity Too Large > < Server: nginx/1.7.9 > < Date: Sun, 01 Feb 2015 20:04:25 GMT > < Content-Type: text/html > < Content-Length: 198 > < Connection: close > > Has anyone tried the fastcgi_request_buffering ... or am I missing > something?
Thanks. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256390#msg-256390 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon Feb 2 19:56:34 2015 From: nginx-forum at nginx.us (ericr) Date: Mon, 02 Feb 2015 14:56:34 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: Message-ID: <686a176ae0ea3dd6006b91861842fb8c.NginxMailingListEnglish@forum.nginx.org> Prior to this issue starting, we had not changed our ciphers in several months. I have tried changing them once since. We have also tried restarting nginx several times on each server to clear the cache, but it has not helped. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256406#msg-256406 From nginx-forum at nginx.us Mon Feb 2 20:26:29 2015 From: nginx-forum at nginx.us (tempspace) Date: Mon, 02 Feb 2015 15:26:29 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: <686a176ae0ea3dd6006b91861842fb8c.NginxMailingListEnglish@forum.nginx.org> References: <686a176ae0ea3dd6006b91861842fb8c.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have been fighting a similar issue with SSL handshakes for the past few days. After reboots and upgrades for GHOST, we started seeing errors like this in our error logs constantly: *579 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, in conjunction with an elevated error rate in client requests to nginx in the initial connection phase. I'm not completely sure if the two issues are correlated, to be honest; I'm still in the troubleshooting process. I am on a Debian Wheezy system and it started happening with the libssl package 1.0.1e-2+deb7u13 and continues with u14.
As soon as I rolled back libssl to u12 and restarted nginx, the logging of errors went away. I then tested SSL to make sure we weren't vulnerable to POODLE or Heartbleed, and it's all clear. I would recommend trying to go back a few versions in libssl, restarting nginx, and seeing if that helps, while making sure you're not leaving yourself open to the major vulnerabilities. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256407#msg-256407 From nginx-forum at nginx.us Tue Feb 3 03:38:24 2015 From: nginx-forum at nginx.us (xinghua_hi) Date: Mon, 02 Feb 2015 22:38:24 -0500 Subject: subrequest cycle cause cpu 100% Message-ID: <3e6091b4c142b13599d9aae64839eca7.NginxMailingListEnglish@forum.nginx.org> hello, I use "error_page 500 = /500.html" to show my own 500 page, and for some reason I need an SSI include in 500.html. If the fastcgi upstream returns a 500 response code, it causes a subrequest cycle and a bounded number of error logs such as "subrequests cycle while processing "xxx" while sending response to client". But if I add the "wait" parameter to the SSI include in 500.html, I find an endless stream of "subrequest cycle" errors in the error_log, and CPU goes up to 100%. I wonder why the wait parameter causes such a strange case, thanks best regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256410,256410#msg-256410 From nginx-forum at nginx.us Tue Feb 3 06:26:48 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 03 Feb 2015 01:26:48 -0500 Subject: buffering / uploading large files In-Reply-To: References: Message-ID: <7cd7b925e982e1e2a1742e1c95115e06.NginxMailingListEnglish@forum.nginx.org> Hi, the situation that I am trying to solve is what happens if the client's request is larger than the configured client_max_body_size. Turning off buffering by nginx should resolve the problem, as nginx would forward every packet to the back-end server as it comes in. Did I misunderstand the purpose of "fastcgi_request_buffering off;"? Thanks.
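Regarding the recurring 413 in this thread: B.R.'s link points at client_max_body_size, which nginx checks against the request's Content-Length header before the body is read or forwarded, so it applies whether or not request buffering is disabled. A minimal sketch of the poster's location block with the limit raised; the 25m value is illustrative, not from the thread:

```nginx
# Hypothetical variant of the location block quoted above. The 1.5M
# POST exceeds the default client_max_body_size of 1m, which is what
# produces the 413. Raising the limit is independent of the (patched)
# fastcgi_request_buffering directive, which only controls whether the
# body is spooled before being passed to the FastCGI backend.
location / {
    include fastcgi_params;
    client_max_body_size 25m;        # default is 1m
    fastcgi_request_buffering off;   # from the tengine-derived patch
    fastcgi_pass 127.0.0.1:9000;
}
```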
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256411#msg-256411 From nginx-forum at nginx.us Tue Feb 3 06:31:18 2015 From: nginx-forum at nginx.us (malintha) Date: Tue, 03 Feb 2015 01:31:18 -0500 Subject: 13: Permission denied while connect server using https through nginx Message-ID: <26d7dec846529bcaca5c6d7f081e8e22.NginxMailingListEnglish@forum.nginx.org> Hi, My nginx configuration is as follows. I am going to connect to a running server's UI from my machine through nginx. server { listen 443; server_name mgt.wso2bps.malintha.com; ssl on; ssl_certificate /etc/nginx/ssl/wso2bps.crt; ssl_certificate_key /etc/nginx/ssl/wso2bps.key; location / { proxy_pass https://10.1.1.1:9443/carbon; } } The server is running and I can access the UI directly from my computer. But when I access it through nginx, it gives me a 502 Bad Gateway in the browser and the following error in the nginx error log: 2015/02/03 13:15:43 [crit] 5721#0: *7 connect() to 10.1.1.1:9443 failed (13: Permission denied) while connecting to upstream, client: 10.174.14.28, server: mgt.wso2bps.malintha.com, request: "GET / HTTP/1.1", upstream: "https://10.1.1.1:9443/carbon", host: "mgt.wso2bps.malintha.com" I tried starting and stopping the server, but both give the same error on the nginx side. What may be the reason for this error? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256412,256412#msg-256412 From igor at sysoev.ru Tue Feb 3 06:50:22 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 3 Feb 2015 09:50:22 +0300 Subject: 13: Permission denied while connect server using https through nginx In-Reply-To: <26d7dec846529bcaca5c6d7f081e8e22.NginxMailingListEnglish@forum.nginx.org> References: <26d7dec846529bcaca5c6d7f081e8e22.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 03 Feb 2015, at 09:31, malintha wrote: > My nginx configurations are as follows. I am going to connect running server > UI from my machine through nginx.
> server {
>     listen 443;
>     server_name mgt.wso2bps.malintha.com;
>     ssl on;
>     ssl_certificate /etc/nginx/ssl/wso2bps.crt;
>     ssl_certificate_key /etc/nginx/ssl/wso2bps.key;
>
>     location / {
>         proxy_pass https://10.1.1.1:9443/carbon;
>     }
> }
>
> The server is running and I can access the UI directly from my computer. But when I access it through nginx it gives me 502 Bad Gateway in the browser and the following error in the nginx error log:
>
> 2015/02/03 13:15:43 [crit] 5721#0: *7 connect() to 10.1.1.1:9443 failed (13: Permission denied) while connecting to upstream, client: 10.174.14.28, server: mgt.wso2bps.malintha.com, request: "GET / HTTP/1.1", upstream: "https://10.1.1.1:9443/carbon", host: "mgt.wso2bps.malintha.com"
>
> I tried stopping and restarting the server, but both cases give the same error on the nginx side. What may be the reason for this error?

It may be SELinux: http://nginx.com/blog/nginx-se-linux-changes-upgrading-rhel-6-6/

-- Igor Sysoev http://nginx.com

From nginx-forum at nginx.us Tue Feb 3 08:31:38 2015 From: nginx-forum at nginx.us (malintha) Date: Tue, 03 Feb 2015 03:31:38 -0500 Subject: 13: Permission denied while connect server using https through nginx In-Reply-To: <26d7dec846529bcaca5c6d7f081e8e22.NginxMailingListEnglish@forum.nginx.org> References: <26d7dec846529bcaca5c6d7f081e8e22.NginxMailingListEnglish@forum.nginx.org> Message-ID: <10d96d3347d710d7c578bc6b2c35d946.NginxMailingListEnglish@forum.nginx.org>

Hi, Yes, it was because of SELinux. I configured it and it is working fine now. Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256412,256415#msg-256415

From nginx-forum at nginx.us Tue Feb 3 09:04:34 2015 From: nginx-forum at nginx.us (malintha) Date: Tue, 03 Feb 2015 04:04:34 -0500 Subject: "unable to find valid certification path to requested target" error in java based server while accessing it through Nginx Message-ID: <95159e87fd43cf88c7d8c043c5081ce3.NginxMailingListEnglish@forum.nginx.org>

Hi, I have two Java-based servers.
Let's call them server A and server B. I can call server B directly from server A. Then I configured server A to call server B through an nginx instance, and I get the following error.

TID: [0] [AM] [2015-02-03 15:17:05,736] INFO {org.apache.axis2.transport.http.HTTPSender} - Unable to sendViaPost to url[https://wso2is.malintha.com/services/AuthenticationAdmin] {org.apache.axis2.transport.http.HTTPSender} org.apache.axis2.AxisFault: javax.net.ssl.SSLException: Connection has been shutdown: javax.net.ssl.SSLHandshakeException: sun.security.validator.ValidatorException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target
at org.apache.axis2.AxisFault.makeFault(AxisFault.java:430)
at org.apache.axis2.transport.http.SOAPMessageFormatter.writeTo(SOAPMessageFormatter.java:78)
at org.apache.axis2.transport.http.AxisRequestEntity.writeRequest(AxisRequestEntity.java:84)
at org.apache.commons.httpclient.methods.EntityEnclosingMethod.writeRequestBody(EntityEnclosingMethod.java:499)
at org.apache.commons.httpclient.HttpMethodBase.writeRequest(HttpMethodBase.java:2114)
at org.apache.commons.httpclient.HttpMethodBase.execute(HttpMethodBase.java:1096)
at org.apache.commons.httpclient.HttpMethodDirector.executeWithRetry(HttpMethodDirector.java:398)
at org.apache.commons.httpclient.HttpMethodDirector.executeMethod(HttpMethodDirector.java:171)
at org.apache.commons.httpclient.HttpClient.executeMethod(HttpClient.java:397)
at org.apache.axis2.transport.http.AbstractHTTPSender.executeMethod(AbstractHTTPSender.java:622)
at org.apache.axis2.transport.http.HTTPSender.sendViaPost(HTTPSender.java:193)
at org.apache.axis2.transport.http.HTTPSender.send(HTTPSender.java:75)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.writeMessageWithCommons(CommonsHTTPTransportSender.java:451)
at org.apache.axis2.transport.http.CommonsHTTPTransportSender.invoke(CommonsHTTPTransportSender.java:278)
at org.apache.axis2.engine.AxisEngine.send(AxisEngine.java:442)
at org.apache.axis2.description.OutInAxisOperationClient.send(OutInAxisOperation.java:430)
at org.apache.axis2.description.OutInAxisOperationClient.executeImpl(OutInAxisOperation.java:225)
at org.apache.axis2.client.OperationClient.execute(OperationClient.java:149)
at org.wso2.carbon.authenticator.stub.AuthenticationAdminStub.login(AuthenticationAdminStub.java:659)
at org.wso2.carbon.apimgt.hostobjects.APIProviderHostObject.jsFunction_login(APIProviderHostObject.java:172)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.mozilla.javascript.MemberBox.invoke(MemberBox.java:126)
at org.mozilla.javascript.FunctionObject.call(FunctionObject.java:386)
at org.mozilla.javascript.optimizer.OptRuntime.call2(OptRuntime.java:42)
at org.jaggeryjs.rhino.publisher.modules.user.c1._c_anonymous_1(/publisher/modules/user/login.jag:18)
at org.jaggeryjs.rhino.publisher.modules.user.c1.call(/publisher/modules/user/login.jag)

According to my understanding, I have to put the nginx certificate into server B's truststore. This blog post mentions the solution when using an external Java client: http://evanthika.blogspot.com/2014/01/how-to-solve-pkix-path-building-failed.html. But here the call comes from nginx instead of a Java client. What could be the reason for this?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256416,256416#msg-256416

From nginxuser at abv.bg Tue Feb 3 09:11:34 2015 From: nginxuser at abv.bg (peter petrov) Date: Tue, 3 Feb 2015 11:11:34 +0200 (EET) Subject: help Message-ID: <1765074635.57548.1422954694073.JavaMail.apache@nm3.abv.bg>

Hi, I have been trying to persuade nginx into serving an html.gz file, with no success, for the past three days. I have tried probably everything written on the web.
I am using CentOS 7 minimal and nginx 1.7.9 mainline, configured with the http_gzip_static_module. I tried chmod 755 on the installation folder, I copied and pasted everything I could find about this into the conf file, and I chowned the installation folder. I compressed index.html with tar and zip, producing index.html.tar.gz, index.tar.gz, index.gz, index, and also an index.zip file. All I received was 403 Forbidden. The best I achieved was when I compressed it and named it just "index"; then I received some gibberish output in the browser. I am attaching the nginx.conf file. Any assistance will be much appreciated. Best regards, ilian

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: nginx.conf Type: application/octet-stream Size: 3969 bytes Desc: not available URL:

From vbart at nginx.com Tue Feb 3 12:47:05 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 03 Feb 2015 15:47:05 +0300 Subject: buffering / uploading large files In-Reply-To: <7cd7b925e982e1e2a1742e1c95115e06.NginxMailingListEnglish@forum.nginx.org> References: <7cd7b925e982e1e2a1742e1c95115e06.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1477381.LLRrWjNp9l@vbart-workstation>

On Tuesday 03 February 2015 01:26:48 nginxuser100 wrote:
> Hi, the situation that I am trying to solve is what happens if the client's request is larger than the configured client_max_body_size. Turning off buffering by nginx should resolve the problem as nginx would forward every packet to the back-end server as it comes in. Did I misunderstand the purpose of "fastcgi_request_buffering off;"? Thanks.

Then all you need is to configure a reasonable client_max_body_size. The main purpose of this directive is to protect your server from clients uploading an unlimited amount of data.

wbr, Valentin V.
Bartenev From mdounin at mdounin.ru Tue Feb 3 13:59:14 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Feb 2015 16:59:14 +0300 Subject: subrequest cycle cause cpu 100% In-Reply-To: <3e6091b4c142b13599d9aae64839eca7.NginxMailingListEnglish@forum.nginx.org> References: <3e6091b4c142b13599d9aae64839eca7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150203135914.GZ99511@mdounin.ru> Hello! On Mon, Feb 02, 2015 at 10:38:24PM -0500, xinghua_hi wrote: > hello, > > I use error_page = /500.html to show myself 500 page, and for some > reason, I need ssi include in 500.html, for example: > > 500.html > > > > > > if fastcgi upstream return 500 response code, it will cause > subrequest cycle, a set number of error logs such as "subrequests cycle > while processing "xxx" while sending response to client". > > but if I add "wait" parameter in ssi > > 500.html > > > > > > I find endless error log such as "subrequest cycle" in error_log, > and cpu go up to 100% > > I wonder why wait parameter cause so strange case, thanks With the "wait" parameter you'll eventually reach the limit as well, but it will take longer as there will be no other subrequests executed in parallel. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Tue Feb 3 15:24:45 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Feb 2015 18:24:45 +0300 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> References: <8a8d26bd95d82c0bcaf887ef0fde0020.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150203152444.GA99511@mdounin.ru> Hello! On Sun, Feb 01, 2015 at 10:56:37AM -0500, 173279834462 wrote: > nginx 1.6.2 + libressl 2.1.3 If you want to use nginx with LibreSSL, consider using nginx 1.7.x (1.7.4 at least). Also make sure to actually compile nginx with LibreSSL, not just loading LibreSSL library instead of OpenSSL. 
There is no binary compatibility between the two, and segmentation faults are expected if you'll just switch one for another. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Feb 3 16:34:05 2015 From: nginx-forum at nginx.us (173279834462) Date: Tue, 03 Feb 2015 11:34:05 -0500 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <20150203152444.GA99511@mdounin.ru> References: <20150203152444.GA99511@mdounin.ru> Message-ID: <1f689141963c3558c5fc7b91a9cdd505.NginxMailingListEnglish@forum.nginx.org> I am coming precisely from nginx 1.7.9 + libressl 2.1.3, configured as you mentioned. As 1.7.9 kept crashing, we downgraded to "stable" 1.6.4. Chapter closed then. We are back to 1.7.9... P.S. Did anybody note that the login to the forum does not use https? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256381,256423#msg-256423 From mdounin at mdounin.ru Tue Feb 3 17:20:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 3 Feb 2015 20:20:44 +0300 Subject: SSL3_CTX_CTRL:called a function you should not call In-Reply-To: <1f689141963c3558c5fc7b91a9cdd505.NginxMailingListEnglish@forum.nginx.org> References: <20150203152444.GA99511@mdounin.ru> <1f689141963c3558c5fc7b91a9cdd505.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150203172044.GB99511@mdounin.ru> Hello! On Tue, Feb 03, 2015 at 11:34:05AM -0500, 173279834462 wrote: > I am coming precisely from nginx 1.7.9 + libressl 2.1.3, configured as you > mentioned. > > As 1.7.9 kept crashing, we downgraded to "stable" 1.6.4. > > Chapter closed then. We are back to 1.7.9... If you see problems with nginx 1.7.9, consider following hints at http://wiki.nginx.org/Debugging. > P.S. Did anybody note that the login to the forum does not use https? Consider using mailing list instead, see http://nginx.org/en/support.html. 
-- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue Feb 3 18:03:24 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 3 Feb 2015 18:03:24 +0000 Subject: help In-Reply-To: <1765074635.57548.1422954694073.JavaMail.apache@nm3.abv.bg> References: <1765074635.57548.1422954694073.JavaMail.apache@nm3.abv.bg> Message-ID: <20150203180324.GD3125@daoine.org> On Tue, Feb 03, 2015 at 11:11:34AM +0200, peter petrov wrote: Hi there, > I have been trying to persuade nginx into serving html.gz > I configured it with http-gzip_static_module. http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html What request do you make? What response do you want? What response do you get? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue Feb 3 18:18:25 2015 From: nginx-forum at nginx.us (ericr) Date: Tue, 03 Feb 2015 13:18:25 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: Message-ID: I just finished running an experiment that has shed some light on the issue. It has not yet been solved though. I setup another nginx server with the same configuration with an upstream app that always responds with HTTP 200. I included JS on each page load in production to make a single request to this server. I ran tcpdump on the test server and what I found was very interesting. 
Client connections producing the above "inappropriate fallback" on the test server all appear to do some form of the following:

(Client and Server successfully complete 3-way handshake)
Client: Client Hello TLSv1.2
Server: RST
Client: ACK
Server: RST
(Client and Server successfully complete 3-way handshake)
Client: Client Hello TLSv1.1
Server: RST
Client: ACK
Server: RST
(Client and Server successfully complete 3-way handshake)
Client: Client Hello TLSv1.0
Server: Encrypted Alert (Content Type: Alert (21))
(Client sends RST, which the server acknowledges, and the connection ends)

I don't know what the alert is, but I can only assume it's related to TLS_FALLBACK_SCSV since the client closes the connection right after. What's interesting here is that there is little consistency to these RSTs. Sometimes a client downgrades to TLSv1.1 before getting the Encrypted Alert (Content Type: Alert (21)). Sometimes a client tries the same version over and over again, each time getting an RST from the server, and eventually gives up. Later, many of these IP addresses are observed establishing successful connections.

Am I correct to assume nginx is sending these RST packets?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256427#msg-256427

From nginx-forum at nginx.us Tue Feb 3 19:04:39 2015 From: nginx-forum at nginx.us (tempspace) Date: Tue, 03 Feb 2015 14:04:39 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: Message-ID: <1e9edb77d082dc4e6774fb05d03aad8d.NginxMailingListEnglish@forum.nginx.org>

Eric, Did you try to downgrade your libssl to the previous version I mentioned earlier? Would love to hear if your issues go away.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256428#msg-256428 From luky-37 at hotmail.com Tue Feb 3 20:41:03 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Tue, 3 Feb 2015 21:41:03 +0100 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: , Message-ID: > I just finished running an experiment that has shed some light on the issue. > It has not yet been solved though. > > I setup another nginx server with the same configuration with an upstream > app that always responds with HTTP 200. I included JS on each page load in > production to make a single request to this server. > > I ran tcpdump on the test server and what I found was very interesting. > Client connections producing the above "inappropriate fallback" on the test > server all appear to do some form of the following: > > (Client and Server successfully complete 3-way handshake) > Client: Client Hello TLSv1.2 > Server: RST > Client: ACK > Server: RST > (Client and Server successfully complete 3-way handshake) > Client: Client Hello TLSv1.1 > Server: RST > Client: ACK > Server: RST > (Client and Server successfully complete 3-way handshake) > Client: Client Hello TLSv1.0 > Server: Encrypted Alert (Content Type: Alert (21)) > (Client sends RST, which the server acknowledges, and the connection ends) Can you reliably reproduce this with specific client software or networks? Can you upload a pcap file this failed handshake somewhere for further inspection? From lists at ruby-forum.com Tue Feb 3 22:26:30 2015 From: lists at ruby-forum.com (Rajesh Gangula) Date: Tue, 03 Feb 2015 23:26:30 +0100 Subject: Need help to configure nginx-varnish-php s Message-ID: <5ae89191c19481bd6e6fa2533aa43906@ruby-forum.com> Hi All, I am newbie to nginx and would need help configuring nginx-varnish-php-fpm setup to allow Our current setup. Nginx Varnish running in same server Php-fpm running on different server. Varnish is configured to load balance between php-nodes. 
We would like to find a way to access specific php-nodes on demand instead of getting through the load balancer, is there a way we can do that with the following setup. This is more of troubleshooting purpose so we dont want to remove varnish load balancing. I am thinking something like when the request to nginx comes as a specific url it hits specific php node. $ cat /etc/nginx/conf.d/upstreams.conf upstream php-be { server php-vip.prod..com:9001; } ================== location ~* \.php$ { add_header Access-Control-Allow-Origin "*"; include fastcgi_params; fastcgi_index index.php; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param SERVER_PORT $forwarded_server_port; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; fastcgi_param REMOTE_ADDR X.X.X.X; fastcgi_param HTTPS $using_https; fastcgi_pass php-be; } ====================== Any help would be greatly appreciated. Thanks -Rajesh -- Posted via http://www.ruby-forum.com/. From steve at greengecko.co.nz Tue Feb 3 23:19:46 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Wed, 04 Feb 2015 12:19:46 +1300 Subject: Need help to configure nginx-varnish-php s In-Reply-To: <5ae89191c19481bd6e6fa2533aa43906@ruby-forum.com> References: <5ae89191c19481bd6e6fa2533aa43906@ruby-forum.com> Message-ID: <1423005586.14176.60.camel@steve-new> On Tue, 2015-02-03 at 23:26 +0100, Rajesh Gangula wrote: > Hi All, > > I am newbie to nginx and would need help configuring > nginx-varnish-php-fpm setup to allow > > Our current setup. > > Nginx Varnish running in same server > Php-fpm running on different server. > > Varnish is configured to load balance between php-nodes. > > We would like to find a way to access specific php-nodes on demand > instead of getting through the load balancer, is there a way we can do > that with the following setup. This is more of troubleshooting purpose > so we dont want to remove varnish load balancing. 
> > I am thinking something like when the request to nginx comes as a > specific url it hits specific php node. > > > > > > $ cat /etc/nginx/conf.d/upstreams.conf > upstream php-be { > > server php-vip.prod..com:9001; > > } > > > > > > ================== > location ~* \.php$ { > add_header Access-Control-Allow-Origin "*"; > > include fastcgi_params; > fastcgi_index index.php; > > fastcgi_split_path_info ^(.+\.php)(/.+)$; > fastcgi_param SERVER_PORT $forwarded_server_port; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > fastcgi_param REMOTE_ADDR X.X.X.X; > fastcgi_param HTTPS $using_https; > > fastcgi_pass php-be; > } > > ====================== > > Any help would be greatly appreciated. > > Thanks > -Rajesh > Untested but off the top of my head... If you want to use a specific backend for a specific URL, then I'd configure upstream definitions for all of them separately, and then set a map up to define a variable containing the name of which one to fastcgi_pass to. ( You could define the default to a separate upstream definition that connects to them all in a load balanced manner? ) I would also put as many of those fastcgi_params into an include file as possible - makes the config easier to understand. Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Wed Feb 4 02:42:48 2015 From: nginx-forum at nginx.us (ericr) Date: Tue, 03 Feb 2015 21:42:48 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: <1e9edb77d082dc4e6774fb05d03aad8d.NginxMailingListEnglish@forum.nginx.org> References: <1e9edb77d082dc4e6774fb05d03aad8d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <956417419eb38f9b3f3ace44e2bd22f9.NginxMailingListEnglish@forum.nginx.org> The errors went away, and now the only errors I see in our logs relating to SSL are handshake timeouts when I turn debug logs on. 
Now that I think about it, though, isn't this to be expected? The errors immediately went away as soon as I downgraded far enough back to a version of OpenSSL that didn't support TLS_FALLBACK_SCSV. That doesn't address why the connections are getting reset and clients are downgrading in the first place, though.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256434#msg-256434

From nginx-forum at nginx.us Wed Feb 4 02:48:19 2015 From: nginx-forum at nginx.us (tempspace) Date: Tue, 03 Feb 2015 21:48:19 -0500 Subject: Intermittent SSL Handshake Errors In-Reply-To: <956417419eb38f9b3f3ace44e2bd22f9.NginxMailingListEnglish@forum.nginx.org> References: <1e9edb77d082dc4e6774fb05d03aad8d.NginxMailingListEnglish@forum.nginx.org> <956417419eb38f9b3f3ace44e2bd22f9.NginxMailingListEnglish@forum.nginx.org> Message-ID:

You are absolutely correct, but I figured you would want a working environment while we work with nginx/openssl on figuring out how to fix this bug. Knowing that it worked for you also increases my own comfort that the issue is mitigated on my side and that I won't have performance issues at my next peak time. Thank you so much for the pcap stuff; I'm sure the information you will provide to Lukas will be invaluable! Way to lead the charge!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256435#msg-256435

From nginx-forum at nginx.us Wed Feb 4 09:11:01 2015 From: nginx-forum at nginx.us (justink101) Date: Wed, 04 Feb 2015 04:11:01 -0500 Subject: Multiple proxy_pass destinations from a single location block Message-ID: <4c78b559b271bc8b74d53549706447c1.NginxMailingListEnglish@forum.nginx.org>

Is it possible to specify multiple proxy_pass destinations from a single location block? Currently we have:

location ~ ^/v1/?(?<url>.+)?
{ resolver 208.67.222.222 208.67.220.220 valid=300s; resolver_timeout 10s; proxy_intercept_errors off; proxy_hide_header Vary; proxy_set_header Host "foo.mydomain.io"; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass https://foo.mydomain.io/api/$url; proxy_connect_timeout 10s; proxy_read_timeout 60s; proxy_ssl_session_reuse on; proxy_ssl_trusted_certificate /etc/pki/tls/certs/ca-bundle.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK !SRP !DSS"; } I'd like to keep all the above logic, but ALSO set up another proxy_pass from this location block to say bar.mydomain.com. What is the best way to do this? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256437,256437#msg-256437 From igor at sysoev.ru Wed Feb 4 09:17:33 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Wed, 4 Feb 2015 12:17:33 +0300 Subject: Multiple proxy_pass destinations from a single location block In-Reply-To: <4c78b559b271bc8b74d53549706447c1.NginxMailingListEnglish@forum.nginx.org> References: <4c78b559b271bc8b74d53549706447c1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <432B3FA5-F05C-43A1-ADE9-418DE9A2116F@sysoev.ru> On 04 Feb 2015, at 12:11, justink101 wrote: > Is it possible to specify multiple proxy_pass destinations from a single > location block? Currently we have: > > location ~ ^/v1/?(?.+)? 
{ > resolver 208.67.222.222 208.67.220.220 valid=300s; > resolver_timeout 10s; > proxy_intercept_errors off; > proxy_hide_header Vary; > proxy_set_header Host "foo.mydomain.io"; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_pass https://foo.mydomain.io/api/$url; > proxy_connect_timeout 10s; > proxy_read_timeout 60s; > proxy_ssl_session_reuse on; > proxy_ssl_trusted_certificate /etc/pki/tls/certs/ca-bundle.crt; > proxy_ssl_verify on; > proxy_ssl_verify_depth 2; > proxy_ssl_ciphers "EECDH+ECDSA+AESGCM EECDH+aRSA+AESGCM > EECDH+ECDSA+SHA384 EECDH+ECDSA+SHA256 EECDH+aRSA+SHA384 EECDH+aRSA+SHA256 > EECDH+aRSA+RC4 EECDH EDH+aRSA RC4 !aNULL !eNULL !LOW !3DES !MD5 !EXP !PSK > !SRP !DSS"; > } > > I'd like to keep all the above logic, but ALSO set up another proxy_pass > from this location block to say bar.mydomain.com. What is the best way to do > this? The first thing you do not need regex here: location /v1/ { proxy_pass http://foo.mydomain.io/api/; ... } is enough. To add another server to proxy_pass you can use upstream: upstream mydomain { server foo.mydomain.io; server bar.mydomain.com; } server { ... location /v1/ { proxy_pass http://mydomain/api/; ... } -- Igor Sysoev http://nginx.com From nginx-forum at nginx.us Wed Feb 4 13:23:33 2015 From: nginx-forum at nginx.us (rafaelr) Date: Wed, 04 Feb 2015 08:23:33 -0500 Subject: Slow downloads over SSL In-Reply-To: References: Message-ID: B.R. They are serving exactly the same resources at the same time... My vhost points to the same folders for each domain. Files are accessible over HTTP and HTTPS. The slow down comes when downloading (the same resource) from HTTPS. For example: http://webmail.domain.tld/test.zip (30MB file can be downloaded at 4-5MB/s) https://webmail.domain.tld/test.zip (30MB file can be downloaded at 300-700kb/s) I enabled SPDY to see if that would've made a difference but it didn't. I do have SPDY currently enabled. 
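A configuration sketch of knobs commonly examined when HTTPS throughput lags far behind plain HTTP; the server name, certificate paths and cipher list below are illustrative assumptions, not the poster's actual settings, and this is a checklist rather than a diagnosis:

```nginx
server {
    listen 443 ssl spdy;
    server_name webmail.domain.tld;

    # Placeholder certificate paths.
    ssl_certificate     /etc/nginx/ssl/webmail.crt;
    ssl_certificate_key /etc/nginx/ssl/webmail.key;

    # TLS record buffer: 16k (the default) favors bulk-download
    # throughput; smaller values trade throughput for lower latency.
    ssl_buffer_size 16k;

    # Prefer AES-GCM where the CPU supports AES-NI; the AES_256_CBC/SHA1
    # suite negotiated in this thread costs noticeably more per byte.
    ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    ssl_session_cache shared:SSL:10m;
}
```

If throughput remains an order of magnitude lower over TLS, profiling the nginx worker's CPU usage during a download usually settles whether encryption itself is the bottleneck.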
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256386,256440#msg-256440

From nginx-forum at nginx.us Wed Feb 4 13:29:45 2015 From: nginx-forum at nginx.us (rafaelr) Date: Wed, 04 Feb 2015 08:29:45 -0500 Subject: Slow downloads over SSL In-Reply-To: References: Message-ID: <02cd68736ef317a9f9370156f0a3d350.NginxMailingListEnglish@forum.nginx.org>

jtan, The connection is encrypted using AES_256_CBC, with SHA1 for message authentication and ECDHE_RSA as the key exchange mechanism.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256386,256442#msg-256442

From reallfqq-nginx at yahoo.fr Wed Feb 4 17:54:49 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 4 Feb 2015 18:54:49 +0100 Subject: Slow downloads over SSL In-Reply-To: References: Message-ID:

Nothing in the configuration part you provided rings any bells for me as to why this is going on. I suggest you take a deeper look at the server level and see if there is not something that might have an impact there.

Also, the usual recommended process for finding the source of the trouble is to identify what triggers it, either by:
- Starting from the minimal configuration serving your files over both protocols and, provided the problem disappears, progressively adding directives again until it triggers
- Starting from your current configuration, progressively removing tweaking directives until you reach the minimal configuration or the problem disappears

If you have a minimal working example still affected by the problem, I suggest you provide it to us (after having anonymized what appears sensitive to you). If it can be reproduced, then it might be something we missed, or a bug.

Happy digging! :o)
--- *B. R.*

On Wed, Feb 4, 2015 at 2:23 PM, rafaelr wrote:
> B.R.
>
> They are serving exactly the same resources at the same time... My vhost points to the same folders for each domain. Files are accessible over HTTP and HTTPS. The slow down comes when downloading (the same resource) from HTTPS.
For example: > > http://webmail.domain.tld/test.zip (30MB file can be downloaded at > 4-5MB/s) > https://webmail.domain.tld/test.zip (30MB file can be downloaded at > 300-700kb/s) > > I enabled SPDY to see if that would've made a difference but it didn't. I > do > have SPDY currently enabled. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256386,256440#msg-256440 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igal at getrailo.org Wed Feb 4 19:07:13 2015 From: igal at getrailo.org (Igal @ getRailo.org) Date: Wed, 04 Feb 2015 11:07:13 -0800 Subject: Request Id for logging purposes Message-ID: <54D26DE1.1020907@getrailo.org> hi, I want to be able to identify requests by a unique id. I'm able to do this in the location section: proxy_set_header X-Request-Id $pid-$msec-$remote_addr-$request_length; but I want instead to set it to a variable, e.g. in the http section: set $request_id $pid-$msec-$remote_addr-$request_length; ## does this work? add_header X-Request-Id $request_id; ## I don't see it in the response headers can it work? TIA! From francis at daoine.org Wed Feb 4 19:15:32 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Feb 2015 19:15:32 +0000 Subject: Request Id for logging purposes In-Reply-To: <54D26DE1.1020907@getrailo.org> References: <54D26DE1.1020907@getrailo.org> Message-ID: <20150204191532.GE3125@daoine.org> On Wed, Feb 04, 2015 at 11:07:13AM -0800, Igal @ getRailo.org wrote: Hi there, > I want to be able to identify requests by a unique id. > > I'm able to do this in the location section: > > proxy_set_header X-Request-Id > $pid-$msec-$remote_addr-$request_length; > > but I want instead to set it to a variable, e.g. in the http section: > > set $request_id $pid-$msec-$remote_addr-$request_length; ## does > this work? 
http://nginx.org/r/set No. It can be at "server" level, though. > add_header X-Request-Id $request_id; ## I don't see it in > the response headers You probably were not running with this configuration. > can it work? With "set" at server{} level, yes. All of the normal nginx configuration directive rules apply. f -- Francis Daly francis at daoine.org From igal at getrailo.org Wed Feb 4 19:22:48 2015 From: igal at getrailo.org (Igal @ getRailo.org) Date: Wed, 04 Feb 2015 11:22:48 -0800 Subject: Request Id for logging purposes In-Reply-To: <20150204191532.GE3125@daoine.org> References: <54D26DE1.1020907@getrailo.org> <20150204191532.GE3125@daoine.org> Message-ID: <54D27188.7050006@getrailo.org> thank you for your prompt reply. yes, it works in the server section, but my access_log is in the http section, and I want to use it in the access_log should I just add *access_log* |off|; in the http section and add the access_log to the server section instead? thanks On 2/4/2015 11:15 AM, Francis Daly wrote: > On Wed, Feb 04, 2015 at 11:07:13AM -0800, Igal @ getRailo.org wrote: > > Hi there, > >> I want to be able to identify requests by a unique id. >> >> I'm able to do this in the location section: >> >> proxy_set_header X-Request-Id >> $pid-$msec-$remote_addr-$request_length; >> >> but I want instead to set it to a variable, e.g. in the http section: >> >> set $request_id $pid-$msec-$remote_addr-$request_length; ## does >> this work? > http://nginx.org/r/set > > No. > > It can be at "server" level, though. > >> add_header X-Request-Id $request_id; ## I don't see it in >> the response headers > You probably were not running with this configuration. > >> can it work? > With "set" at server{} level, yes. > > All of the normal nginx configuration directive rules apply. > > f -- Igal Sapir Railo Core Developer http://getRailo.org/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From igal at getrailo.org Wed Feb 4 19:33:32 2015 From: igal at getrailo.org (Igal @ getRailo.org) Date: Wed, 04 Feb 2015 11:33:32 -0800 Subject: Request Id for logging purposes In-Reply-To: <54D27188.7050006@getrailo.org> References: <54D26DE1.1020907@getrailo.org> <20150204191532.GE3125@daoine.org> <54D27188.7050006@getrailo.org> Message-ID: <54D2740C.9030301@getrailo.org> it's ok. I guess it works like that without moving the access_log directive. so I have the set $request_id directive in the server section, and then it is used properly afterwards in the http section. thank you! On 2/4/2015 11:22 AM, Igal @ getRailo.org wrote: > thank you for your prompt reply. > > yes, it works in the server section, but my access_log is in the http > section, and I want to use it in the access_log > > should I just add > > *access_log* |off|; > > in the http section and add the access_log to the server section instead? > > thanks > > > On 2/4/2015 11:15 AM, Francis Daly wrote: >> On Wed, Feb 04, 2015 at 11:07:13AM -0800, Igal @ getRailo.org wrote: >> >> Hi there, >> >>> I want to be able to identify requests by a unique id. >>> >>> I'm able to do this in the location section: >>> >>> proxy_set_header X-Request-Id >>> $pid-$msec-$remote_addr-$request_length; >>> >>> but I want instead to set it to a variable, e.g. in the http section: >>> >>> set $request_id $pid-$msec-$remote_addr-$request_length; ## does >>> this work? >> http://nginx.org/r/set >> >> No. >> >> It can be at "server" level, though. >> >>> add_header X-Request-Id $request_id; ## I don't see it in >>> the response headers >> You probably were not running with this configuration. >> >>> can it work? >> With "set" at server{} level, yes. >> >> All of the normal nginx configuration directive rules apply. >> >> f > > -- > Igal Sapir > Railo Core Developer > http://getRailo.org/ -- Igal Sapir Railo Core Developer http://getRailo.org/ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From nginxuser at abv.bg Wed Feb 4 21:36:04 2015
From: nginxuser at abv.bg (peter petrov)
Date: Wed, 4 Feb 2015 23:36:04 +0200 (EET)
Subject: nginx doesn't serve compressed static .html files. is it a bug?
Message-ID: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg>

In my previous post I explained what I tried and didn't manage to do. I attached my nginx.conf file. I'll be very grateful if somebody shows me a successful way to do this.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From agentzh at gmail.com Wed Feb 4 22:22:58 2015
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Wed, 4 Feb 2015 14:22:58 -0800
Subject: [ANN] OpenResty 1.7.7.2 released
Message-ID: 

Hi folks!

I am happy to announce the new formal release, 1.7.7.2, of the OpenResty bundle:

http://openresty.org/#Download

The highlights of this release are

1. the SSL/TLS support in the websocket client of lua-resty-websocket.

2. an enhanced version of "resty" command-line utility supporting user command-line arguments and some more handy options.

Special thanks go to all our contributors and users for making this happen!

Below is the complete change log for this release, as compared to the last formal release (1.7.7.1):

* bundled the "resty" command-line utility (version 0.01) from the resty-cli project: https://github.com/openresty/resty-cli

* bugfix: the resty utility could not start when the nginx was built with "./configure --conf-path=PATH" where "PATH" was not "conf/nginx.conf". thanks Zhengyi Lai for the report.

* feature: added support for user-supplied arguments which the user Lua scripts can access via the global Lua table "arg", just as in the "lua" and "luajit" command-line utilities. thanks Guanlan Dai for the patch.

* feature: added new command-line option "--nginx=PATH" to allow the user to explicitly specify the underlying nginx executable being invoked by this script. thanks Guanlan Dai for the patch.
* feature: added support for multiple "-I" options to specify more than one user library search paths. thanks Guanlan Dai for the patch. * feature: print out resty's own version number when the -V option is specified. * feature: resty: added new options "--valgrind" and "--valgrind-opts=OPTS". * upgraded the ngx_set_misc module to 0.28. * feature: added the set_base32_alphabet config directive to allow the user to specify the alphabet used for base32 encoding/decoding. thanks Vladislav Manchev for the patch. * bugfix: set_quote_sql_str: we incorrectly escaped 0x1a to "\z" instead of "\Z". * change: the old set_misc_base32_padding directive is now deprecated; use set_base32_padding instead. * upgraded the ngx_lua module to 0.9.14. * bugfix: ngx.re.gsub/ngx.re.sub incorrectly swallowed the character right after a 0-width match that happens to be the last match. thanks Guanlan Dai for the patch. * bugfix: tcpsock:setkeepalive(): we did not check "NULL" connection pointers properly, which might lead to segmentation faults. thanks Yang Yue for the report. * bugfix: ngx.quote_str_str() incorrectly escaped "\026" to "\z" while "\Z" is expected. thanks laodouya for the original patch. * bugfix: ngx.timer.at: fixed a small memory leak in case of running out of memory (which should be extremely rare though). * optimize: minor optimizations in timers. * feature: added the Lua global variable "__ngx_cycle" which is a lightuserdata holding the current "ngx_cycle_t" pointer, which may simplify some FFI-based Lua extensions. * doc: added a warning for the "share_all_vars" option for ngx.location.capture*. * upgraded the lua-resty-core library to 0.1.0. * bugfix: resty.core.regex: data corruptions might happen when recursively calling ngx.re.gsub via the user replacement function argument because of incorrect reusing a globally shared captures Lua table. thanks James Hurst for the report. 
* bugfix: ngx.re.gsub: garbage might get injected into gsub result when "ngx.*" API functions are called inside the user callback function for the replacement. thanks James Hurst for the report. * feature: resty.core.base: added the "FFI_BUSY" constant for "NGX_BUSY". * upgraded the lua-resty-lrucache library to 0.04. * bugfix: resty.lrucache.pureffi: set(): it did not update to the new value at all if the key had an existing value (either stale or not). thanks Shuxin Yang for the patch. * upgraded the lua-resty-websocket library to 0.05. * feature: resty.websocket.client: added support for SSL/TLS connections (i.e., the "wss://" scheme). thanks Vladislav Manchev for the patch. * doc: mentioned the bitop library dependency when using the standard Lua 5.1 interpreter (this is not needed for LuaJIT because it is already built in). thanks Laurent Arnoud for the patch. * upgraded LuaJIT to v2.1-20150120: https://github.com/openresty/luajit2/tags * imported Mike Pall's latest changes: * bugfix: don't compile "IR_RETF" after "CALLT" to ff with side effects. * bugfix: fix "BC_UCLO"/"BC_JMP" join optimization in Lua parser. * bugfix: fix corner case in string to number conversion. * bugfix: x86: fix argument checks for "ipairs()" iterator. * bugfix: gracefully handle "lua_error()" for a suspended coroutine. * x86/x64: Drop internal x87 math functions. Use libm functions. * x86/x64: Call external symbols directly from interpreter code. (except for ELF/x86 PIC, where it's easier to use wrappers.) * ARM: Minor interpreter optimization. * x86: Minor interpreter optimization. * PPC/e500: Drop support for this architecture. * MIPS: Fix excess stack growth in interpreter. * PPC: Fix excess stack growth in interpreter. * ARM: Fix excess stack growth in interpreter. * ARM: Fix write barrier check in "BC_USETS". * ARM64: Add build infrastructure and initial port of interpreter. * OpenBSD/x86: Better executable memory allocation for W^X mode. 
* bugfix: the "ngx_http_redis" module failed to compile when the "ngx_gzip" module was disabled. thanks anod221 for the report. The HTML version of the change log with lots of helpful hyper-links can be browsed here: http://openresty.org/#ChangeLog1007007 OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/ We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the Nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org And we have always been running the latest OpenResty in CloudFlare's global CDN network for years. Enjoy! -agentzh From francis at daoine.org Wed Feb 4 22:53:20 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 4 Feb 2015 22:53:20 +0000 Subject: nginx dows't serve compressed static .html files. is it a bug? In-Reply-To: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> References: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> Message-ID: <20150204225320.GF3125@daoine.org> On Wed, Feb 04, 2015 at 11:36:04PM +0200, peter petrov wrote: > In my previous post I explained what tried and didn't manage to it. I > attached nginx.conf file. I'll > be very grateful is somebody show me a successful way to do this. http://nginx.org/r/gzip_static gzip_static on; curl -i --compressed http://localhost/file.html will get file.html.gz if it exists, with "Content-Encoding: gzip"; or file.html if that exists, or 404. Check the "bytes sent" field in your access_log to see which was sent. Omit the "--compressed" in the curl command to see the difference. 
-- 
Francis Daly francis at daoine.org

From nginx-forum at nginx.us Thu Feb 5 05:08:10 2015
From: nginx-forum at nginx.us (exilemirror)
Date: Thu, 05 Feb 2015 00:08:10 -0500
Subject: Ports monitoring
Message-ID: 

Hi guys,

I have a site configured to run on port 443 only. Where can we see if the requests coming in are using port 443 instead of port 80? Can we capture the traffic live?

Thank you.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256458,256458#msg-256458

From ar at xlrs.de Thu Feb 5 07:25:07 2015
From: ar at xlrs.de (Axel)
Date: Thu, 05 Feb 2015 08:25:07 +0100
Subject: Ports monitoring
In-Reply-To: 
References: 
Message-ID: <7302237.KBXKqltcVk@lxrs>

Hi,

if you use Unix/Linux servers with root access you can use

    tcpdump -i $interface port 443

Regards,
Axel

On Thursday, 5 February 2015, 00:08:10, exilemirror wrote:
> Hi guys,
>
> I have a site configured to run on port 443 only, where can we see if the
> request coming in are using port 443 instead of port 80?
> Can we capture the traffic live?
>
> Thank you.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,256458,256458#msg-256458
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Thu Feb 5 08:26:08 2015
From: nginx-forum at nginx.us (justink101)
Date: Thu, 05 Feb 2015 03:26:08 -0500
Subject: Multiple proxy_pass destinations from a single location block
In-Reply-To: <432B3FA5-F05C-43A1-ADE9-418DE9A2116F@sysoev.ru>
References: <432B3FA5-F05C-43A1-ADE9-418DE9A2116F@sysoev.ru>
Message-ID: 

Thanks Igor.

What if one of the servers listed in the upstream block should be over https and the other over http? How is this done using

    upstream proxies {
        server foo.mydomain.io;
        server bar.mydomain.com;
    }

    proxy_pass https://proxies/api/;

Notice the proxy_pass defines only a single scheme (https).
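One possible (untested) workaround for mixing schemes in a single upstream group, offered here as an assumption rather than a documented feature: keep the group single-scheme by fronting the HTTPS-only peer with a local plain-HTTP bridge server. The ports below are illustrative; foo and bar are the names from the question:

```nginx
upstream proxies {
    server 127.0.0.1:8081;       # local bridge that re-encrypts to foo
    server bar.mydomain.com;     # backend that already speaks plain HTTP
}

server {
    # Internal bridge: plain HTTP in from the upstream group,
    # HTTPS out to the one TLS-only backend.
    listen 127.0.0.1:8081;
    location / {
        proxy_pass https://foo.mydomain.io;
        proxy_set_header Host foo.mydomain.io;
    }
}

server {
    listen 80;
    location / {
        proxy_pass http://proxies/api/;
    }
}
```

The extra local hop costs a second proxy pass per request to foo, but it leaves the upstream{} block untouched when backends are added.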
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256437,256461#msg-256461

From batuhangoksu at gmail.com Thu Feb 5 08:47:49 2015
From: batuhangoksu at gmail.com (=?UTF-8?Q?Batuhan_G=C3=B6ksu?=)
Date: Thu, 5 Feb 2015 10:47:49 +0200
Subject: [ANN] OpenResty 1.7.7.2 released
In-Reply-To: 
References: 
Message-ID: 

There are many great new features. Why has "lua" not been updated? Why is only "lua 5.1" supported, and not "lua 5.3"?

2015-02-05 0:22 GMT+02:00 Yichun Zhang (agentzh):
> Hi folks!
>
> I am happy to announce the new formal release, 1.7.7.2, of the OpenResty
> bundle:
>
> http://openresty.org/#Download
>
> The highlights of this release are
>
> 1. the SSL/TLS support in the websocket client of lua-resty-websocket.
>
> 2. an enhanced version of "resty" command-line utility supporting user
> command-line arguments and some more handy options.
>
> Special thanks go to all our contributors and users for making this happen!
>
> Below is the complete change log for this release, as compared to the
> last formal release (1.7.7.1):
>
> * bundled the "resty" command-line utility (version 0.01) from the
> resty-cli project: https://github.com/openresty/resty-cli
>
> * bugfix: the resty utility could not start when the nginx was
> built with "./configure --conf-path=PATH" where "PATH" was
> not "conf/nginx.conf". thanks Zhengyi Lai for the report.
>
> * feature: added support for user-supplied arguments which the
> user Lua scripts can access via the global Lua table "arg",
> just as in the "lua" and "luajit" command-line utilities.
> thanks Guanlan Dai for the patch.
>
> * feature: added new command-line option "--nginx=PATH" to
> allow the user to explicitly specify the underlying nginx
> executable being invoked by this script. thanks Guanlan Dai
> for the patch.
>
> * feature: added support for multiple "-I" options to specify
> more than one user library search paths. thanks Guanlan Dai
> for the patch.
> > * feature: print out resty's own version number when the -V > option is specified. > > * feature: resty: added new options "--valgrind" and > "--valgrind-opts=OPTS". > > * upgraded the ngx_set_misc module to 0.28. > > * feature: added the set_base32_alphabet config directive to > allow the user to specify the alphabet used for base32 > encoding/decoding. thanks Vladislav Manchev for the patch. > > * bugfix: set_quote_sql_str: we incorrectly escaped 0x1a to > "\z" instead of "\Z". > > * change: the old set_misc_base32_padding directive is now > deprecated; use set_base32_padding instead. > > * upgraded the ngx_lua module to 0.9.14. > > * bugfix: ngx.re.gsub/ngx.re.sub incorrectly swallowed the > character right after a 0-width match that happens to be the > last match. thanks Guanlan Dai for the patch. > > * bugfix: tcpsock:setkeepalive(): we did not check "NULL" > connection pointers properly, which might lead to > segmentation faults. thanks Yang Yue for the report. > > * bugfix: ngx.quote_str_str() incorrectly escaped "\026" to > "\z" while "\Z" is expected. thanks laodouya for the > original patch. > > * bugfix: ngx.timer.at: fixed a small memory leak in case of > running out of memory (which should be extremely rare > though). > > * optimize: minor optimizations in timers. > > * feature: added the Lua global variable "__ngx_cycle" which > is a lightuserdata holding the current "ngx_cycle_t" > pointer, which may simplify some FFI-based Lua extensions. > > * doc: added a warning for the "share_all_vars" option for > ngx.location.capture*. > > * upgraded the lua-resty-core library to 0.1.0. > > * bugfix: resty.core.regex: data corruptions might happen when > recursively calling ngx.re.gsub via the user replacement > function argument because of incorrect reusing a globally > shared captures Lua table. thanks James Hurst for the > report. 
> > * bugfix: ngx.re.gsub: garbage might get injected into gsub > result when "ngx.*" API functions are called inside the user > callback function for the replacement. thanks James Hurst > for the report. > > * feature: resty.core.base: added the "FFI_BUSY" constant for > "NGX_BUSY". > > * upgraded the lua-resty-lrucache library to 0.04. > > * bugfix: resty.lrucache.pureffi: set(): it did not update to > the new value at all if the key had an existing value > (either stale or not). thanks Shuxin Yang for the patch. > > * upgraded the lua-resty-websocket library to 0.05. > > * feature: resty.websocket.client: added support for SSL/TLS > connections (i.e., the "wss://" scheme). thanks Vladislav > Manchev for the patch. > > * doc: mentioned the bitop library dependency when using the > standard Lua 5.1 interpreter (this is not needed for LuaJIT > because it is already built in). thanks Laurent Arnoud for > the patch. > > * upgraded LuaJIT to v2.1-20150120: > https://github.com/openresty/luajit2/tags > > * imported Mike Pall's latest changes: > > * bugfix: don't compile "IR_RETF" after "CALLT" to ff with > side effects. > > * bugfix: fix "BC_UCLO"/"BC_JMP" join optimization in Lua > parser. > > * bugfix: fix corner case in string to number conversion. > > * bugfix: x86: fix argument checks for "ipairs()" > iterator. > > * bugfix: gracefully handle "lua_error()" for a suspended > coroutine. > > * x86/x64: Drop internal x87 math functions. Use libm > functions. > > * x86/x64: Call external symbols directly from interpreter > code. (except for ELF/x86 PIC, where it's easier to use > wrappers.) > > * ARM: Minor interpreter optimization. > > * x86: Minor interpreter optimization. > > * PPC/e500: Drop support for this architecture. > > * MIPS: Fix excess stack growth in interpreter. > > * PPC: Fix excess stack growth in interpreter. > > * ARM: Fix excess stack growth in interpreter. > > * ARM: Fix write barrier check in "BC_USETS". 
> > * ARM64: Add build infrastructure and initial port of > interpreter. > > * OpenBSD/x86: Better executable memory allocation for > W^X mode. > > * bugfix: the "ngx_http_redis" module failed to compile when the > "ngx_gzip" module was disabled. thanks anod221 for the report. > > The HTML version of the change log with lots of helpful hyper-links > can be browsed here: > > http://openresty.org/#ChangeLog1007007 > > OpenResty (aka. ngx_openresty) is a full-fledged web application > server by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party > Nginx > modules and Lua libraries, as well as most of their external > dependencies. See OpenResty's homepage for details: > > http://openresty.org/ > > We have run extensive testing on our Amazon EC2 test cluster and > ensured that all the components (including the Nginx core) play well > together. The latest test report can always be found here: > > http://qa.openresty.org > > And we have always been running the latest OpenResty in CloudFlare's > global CDN network for years. > > Enjoy! > -agentzh > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Sincerely, Batuhan G?ksu -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 5 08:57:13 2015 From: nginx-forum at nginx.us (abhi2528) Date: Thu, 05 Feb 2015 03:57:13 -0500 Subject: Nginx as a proxy with Blocking operations Message-ID: <3d9293e75f36d67c4717f92ff36c0839.NginxMailingListEnglish@forum.nginx.org> Hi All, We have an existing TCP/TLS based server application 'A' in production. Around 10K users can connect to this application. We now have a requirement as follows: 1). Intercept the traffic between Client and A 2). 'Inspect' the packet for a certain logic 3). 
If the packet matches, call a processing logic (THIS IS A BLOCKING OPERATION, AS UNFORTUNATELY THE PROCESSING REQUIRES SOME SORT OF HUMAN INTERVENTION AND MAY TAKE BETWEEN 30 - 60 SECONDS). The frequency of a match might be just 5-10% of traffic.

4). Based on the result of processing, either send the packet as is to 'A' or modify the packet content and then send it to 'A'.

I understand that this is most definitely not an ideal scenario, as a blocking operation is involved, but the requirements are pretty stringent.

I just wanted to understand if Nginx can help me in this context. Does Nginx support such blocking operations? Basically the idea is that if one request matches for the Blocking Operation processing, the other parallel/concurrent requests should not be BLOCKED (or wait). In layman's terms, a scenario where every request has an independent thread and processing.

Can anyone suggest a solution for this problem?

Again, I acknowledge that this might not be the best way forward, but somehow we are constrained.

Looking forward to some great advice.

Many Thanks,
Abhi

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256463,256463#msg-256463

From root at xtremenitro.org Thu Feb 5 11:31:14 2015
From: root at xtremenitro.org (NitrouZ)
Date: Thu, 5 Feb 2015 18:31:14 +0700
Subject: Nginx as a proxy with Blocking operations
In-Reply-To: <3d9293e75f36d67c4717f92ff36c0839.NginxMailingListEnglish@forum.nginx.org>
References: <3d9293e75f36d67c4717f92ff36c0839.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

I think you need an IPS/IDS in front of your nginx server :) Nginx can't capture packets, please read up on the OSI layers. CMIIW

On Thursday, February 5, 2015, abhi2528 wrote:
> Hi All,
>
> We have an existing TCP/TLS based server application 'A' in production.
> Around 10K users can connect to this application. We now have a requirement
> as follows:
>
> 1). Intercept the traffic between Client and A
> 2). 'Inspect' the packet for a certain logic
> 3).
If the packet matches, call a processing logic (THIS IS A BLOCKING > OPERATION AS UNFORTUNATELY THE PROCESSING REQUIRES SOME SORT OF HUMAN > INTERVENTION AND MAY TAKE BETWEEN 30 - 60 Seconds). The frequency for match > might be just 5-10% of traffic > 4). Based on the result of processing either send the packet as is to 'A' > or > modify the packet content and then send to 'A'. > > I understand that this is most definitely an ideal scenario as the blocking > operation is involved, but the requirements are pretty stringent. > > I just wanted to understand if Nginx can help me in this context. Does > Nginx > support such blocking operations? Basically the idea is that if one request > matches for the Blocking Operation processing, the other > parallel/concurrent > requests should not be BLOCKED(or wait). In layman terms a scenario where > every request has an independent thread and processing. > > Can anyone suggest a solution for this problem. > > Again I acknowledge that this might not be the best way forward but somehow > we are constrained. > > Looking forward to some great advice. > > Many Thanks, > Abhi > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256463,256463#msg-256463 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Sent from iDewangga Device -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 5 12:25:54 2015 From: nginx-forum at nginx.us (abhi2528) Date: Thu, 05 Feb 2015 07:25:54 -0500 Subject: Nginx as a proxy with Blocking operations In-Reply-To: References: Message-ID: <4ca03ae0ea8179714ade31e34c2ae51b.NginxMailingListEnglish@forum.nginx.org> Well, I intend to use Nginx as a proxy server. In doing so all my traffic will flow through the proxy. I then intend to write a module to do what I intend as the data will then inherently flow via my module. 
I did a small proof of concept using HAProxy (which is also not an IPS/IDS) and was able to achieve what I intend, but unfortunately couldn't go ahead with it as it does not support blocking IO.

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256463,256467#msg-256467

From igor at sysoev.ru Thu Feb 5 12:38:22 2015
From: igor at sysoev.ru (Igor Sysoev)
Date: Thu, 5 Feb 2015 15:38:22 +0300
Subject: Multiple proxy_pass destinations from a single location block
In-Reply-To: 
References: <432B3FA5-F05C-43A1-ADE9-418DE9A2116F@sysoev.ru>
Message-ID: 

On 05 Feb 2015, at 11:26, justink101 wrote:
> Thanks Igor.
>
> What if one of the servers listed in the upstream block should be over https
> and the other over http? How is this done using
>
> upstream proxies {
> server foo.mydomain.io;
> server bar.mydomain.com;
> }
>
> proxy_pass https://proxies/api/;
>
> Notice the proxy pass defines only a single scheme (https).

No easy way.

-- 
Igor Sysoev
http://nginx.com

From kyprizel at gmail.com Thu Feb 5 13:25:02 2015
From: kyprizel at gmail.com (kyprizel)
Date: Thu, 5 Feb 2015 16:25:02 +0300
Subject: Slow downloads over SSL
In-Reply-To: 
References: 
Message-ID: 

Make a pcap, check packet loss/mtu/window size.

On Wed, Feb 4, 2015 at 8:54 PM, B.R. wrote:
> Nothing in the configuration part you provided rings any bell to me on why
> this is going on.
> I suggest you take a deeper look at the server level, see if there is not
> something that might have an impact there.
>
> Also, the usual recommended process to seek the source of the trouble
> is to find what triggers it, either by:
> - Starting from the minimal configuration serving your files over both
> protocols and, provided the problem disappears, progressively adding
> directives again until it triggers
> - Starting from your current configuration, progressively removing tweaking
> directives until you reach the minimal configuration or the problem
> disappears
>
> If you have a minimal working example still affected by that problem, I
> suggest you provide us with it (after having anonymized what appears
> sensitive to you). If it can be reproduced, then it might be something we
> missed or a bug.
>
> Happy digging! :o)
> ---
> B. R.
>
> On Wed, Feb 4, 2015 at 2:23 PM, rafaelr wrote:
>> B.R.
>>
>> They are serving exactly the same resources at the same time... My vhost
>> points to the same folders for each domain. Files are accessible over HTTP
>> and HTTPS. The slow down comes when downloading (the same resource) from
>> HTTPS. For example:
>>
>> http://webmail.domain.tld/test.zip (30MB file can be downloaded at
>> 4-5MB/s)
>> https://webmail.domain.tld/test.zip (30MB file can be downloaded at
>> 300-700kb/s)
>>
>> I enabled SPDY to see if that would've made a difference but it didn't. I do
>> have SPDY currently enabled.
>>
>> Posted at Nginx Forum:
>> http://forum.nginx.org/read.php?2,256386,256440#msg-256440
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
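Since only the HTTPS path is slow in this thread, one server-side knob worth checking is the TLS record buffer. This fragment is a guess at a possible cause, not a diagnosis; the hostname and certificate paths are illustrative:

```nginx
server {
    listen 443 ssl spdy;
    server_name webmail.domain.tld;

    ssl_certificate     /etc/nginx/ssl/webmail.crt;  # illustrative paths
    ssl_certificate_key /etc/nginx/ssl/webmail.key;

    # Size of the buffer nginx encrypts into per write (default 16k,
    # directive available since nginx 1.5.9). A very small value costs
    # throughput on large downloads; small values instead favor
    # time-to-first-byte for interactive traffic.
    ssl_buffer_size 16k;
}
```

Comparing transfer rates after changing only this directive keeps the test minimal, in the spirit of the bisection approach suggested above.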
URL: 

From daniel at mostertman.org Thu Feb 5 15:29:14 2015
From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=)
Date: Thu, 05 Feb 2015 16:29:14 +0100
Subject: Debian repository Package list mismatch?
Message-ID: <54D38C4A.5020509@mostertman.org>

Hi!

We're using Debian 7.x (Wheezy), and because I'd rather have the latest version of nginx than the stock or backports version, I decided to add your repository. I grabbed the lines from http://nginx.org/en/linux_packages.html and added them to /etc/apt/sources.list.d/nginx.list as follows:

deb http://nginx.org/packages/debian/ wheezy nginx
deb-src http://nginx.org/packages/debian/ wheezy nginx

This works fine after adding the key, however, after running apt-get update, it displays:

Package: nginx
Version: 1.6.2-1~wheezy
Architecture: amd64
Maintainer: Sergey Budnevitch 

Even though the pool of the repository only contains 1.7.9:

http://nginx.org/packages/mainline/debian/pool/nginx/n/nginx/

And, when I look in the Packages list obtained from your repo, I see the following:

root at webtv4:/var/lib/apt/lists# grep -A1 'Package:' nginx*
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: nginx
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Version: 1.6.2-1~wheezy
--
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: nginx-debug
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Source: nginx
--
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: nginx-nr-agent
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Version: 2.0.0-4
--
nginx.org_packages_debian_dists_wheezy_nginx_source_Sources:Package: nginx
nginx.org_packages_debian_dists_wheezy_nginx_source_Sources-Format: 3.0 (quilt)
root at webtv4:/var/lib/apt/lists#

Could you please update the package listing of the Debian repository so that we may enjoy the latest version with all of its wonderful features?
:)

Kind regards,

Daniël Mostertman

From daniel at mostertman.org Thu Feb 5 15:34:43 2015
From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=)
Date: Thu, 05 Feb 2015 16:34:43 +0100
Subject: Debian repository Package list mismatch?
In-Reply-To: <54D38C4A.5020509@mostertman.org>
References: <54D38C4A.5020509@mostertman.org>
Message-ID: <54D38D93.2040400@mostertman.org>

Hello again,

With shame on my face I have to admit that I was looking at http://nginx.org/packages/mainline/debian/ and used the stable one instead in the sources.list. My sincerest apologies for bothering you all :)

Daniël

Daniël Mostertman wrote on 5-2-2015 at 16:29:
> Hi!
>
> We're using Debian 7.x (Wheezy), and because I rather have the latest
> version of nginx than the stock or backports version, I decided to add
> your repository.
> I grabbed the lines from http://nginx.org/en/linux_packages.html and
> added them to /etc/apt/sources.list.d/nginx.list as follows:
>
> deb http://nginx.org/packages/debian/ wheezy nginx
> deb-src http://nginx.org/packages/debian/ wheezy nginx
>
> This works fine after adding the key, however, after running apt-get
> update, it displays:
>
> Package: nginx
> Version: 1.6.2-1~wheezy
> Architecture: amd64
> Maintainer: Sergey Budnevitch
>
> Even though the pool of the repository only contains 1.7.9:
>
> http://nginx.org/packages/mainline/debian/pool/nginx/n/nginx/
>
> And, when I look in the Packages list obtained from your repo, I see
> the following:
>
> root at webtv4:/var/lib/apt/lists# grep -A1 'Package:' nginx*
> nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package:
> nginx
> nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Version:
> 1.6.2-1~wheezy
> --
> nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package:
> nginx-debug
> nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Source:
> nginx
> --
> 
nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: > nginx-nr-agent > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Version: > 2.0.0-4 > -- > nginx.org_packages_debian_dists_wheezy_nginx_source_Sources:Package: > nginx > nginx.org_packages_debian_dists_wheezy_nginx_source_Sources-Format: > 3.0 (quilt) > root at webtv4:/var/lib/apt/lists# > > Could you please update the package listing of the Debian repository > so that we may enjoy the latest version with all of its wonderful > features? :) > > Kind regards, > > Dani?l Mostertman From patrick at nommensen.us Thu Feb 5 15:35:29 2015 From: patrick at nommensen.us (Patrick Nommensen) Date: Thu, 5 Feb 2015 07:35:29 -0800 Subject: Debian repository Package list mismatch? In-Reply-To: <54D38C4A.5020509@mostertman.org> References: <54D38C4A.5020509@mostertman.org> Message-ID: <5A399AED-B159-43DF-822D-7C6C53EBB992@nommensen.us> Hello, You should use the following in /etc/apt/sources.list if you wish to use the latest (mainline) version of nginx. deb http://nginx.org/packages/mainline/debian/ codename nginx deb-src http://nginx.org/packages/mainline/debian/ codename nginx http://nginx.org/en/linux_packages.html#mainline Patrick Nommensen @pnommensen > On Feb 5, 2015, at 7:29 AM, Dani?l Mostertman wrote: > > Hi! > > We're using Debian 7.x (Wheezy), and because I rather have the latest version of nginx than the stock or backports version, I decided to add your repository. 
> I grabbed the lines from http://nginx.org/en/linux_packages.html and added them to /etc/apt/sources.list.d/nginx.list as follows: > > deb http://nginx.org/packages/debian/ wheezy nginx > deb-src http://nginx.org/packages/debian/ wheezy nginx > > > This works fine after adding the key, however, after running apt-get update, it displays: > > Package: nginx > Version: 1.6.2-1~wheezy > Architecture: amd64 > Maintainer: Sergey Budnevitch > > Even though the pool of the repository only contains 1.7.9: > > http://nginx.org/packages/mainline/debian/pool/nginx/n/nginx/ > > And, when I look in the Packages list obtained from your repo, I see the following: > > root at webtv4:/var/lib/apt/lists# grep -A1 'Package:' nginx* > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: nginx > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Version: 1.6.2-1~wheezy > -- > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: nginx-debug > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Source: nginx > -- > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages:Package: nginx-nr-agent > nginx.org_packages_debian_dists_wheezy_nginx_binary-amd64_Packages-Version: 2.0.0-4 > -- > nginx.org_packages_debian_dists_wheezy_nginx_source_Sources:Package: nginx > nginx.org_packages_debian_dists_wheezy_nginx_source_Sources-Format: 3.0 (quilt) > root at webtv4:/var/lib/apt/lists# > > Could you please update the package listing of the Debian repository so that we may enjoy the latest version with all of its wonderful features? 
:)
> Kind regards,
> Daniël Mostertman
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Thu Feb 5 17:16:27 2015
From: nginx-forum at nginx.us (ntamblyn)
Date: Thu, 05 Feb 2015 12:16:27 -0500
Subject: Nginx Rewrite
Message-ID: <6460443686112c86d4d1dcf4f4fd6f1c.NginxMailingListEnglish@forum.nginx.org>

How can I turn this into a rewrite? I have included below what I have tried.

For example, you have www.example.com/test/c_index.shtml. You click the change-language button and the URL changes to www.example.com/test/c_index.shtml?change_lang, so I would like to rewrite that to www.example.com/test/e_index.shtml.

I have tried

rewrite /test/c_(.*)\?change_lang /test/e_$1 redirect;

but that doesn't work.

Any help would be much appreciated.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256479,256479#msg-256479

From reallfqq-nginx at yahoo.fr Thu Feb 5 18:12:13 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Thu, 5 Feb 2015 19:12:13 +0100
Subject: Nginx Rewrite
In-Reply-To: <6460443686112c86d4d1dcf4f4fd6f1c.NginxMailingListEnglish@forum.nginx.org>
References: <6460443686112c86d4d1dcf4f4fd6f1c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Why did you escape the question mark?
---
B. R.

On Thu, Feb 5, 2015 at 6:16 PM, ntamblyn wrote:
> How can i turn this into a rewrite i have include what i have tried
>
> for example you have www.example.com/test/c_index.shtml
>
> you click a the change language button and the url changes to
> www.example/com/test/c_index.shtml?change_lang
>
> so i would like to rewrite that to www.example.com/test/e_index.shtml
>
> I have tried
>
> rewrite /test/c_(.*)\?change_lang /test/e_$1 redirect;
>
> but that doesnt work
>
> Any help would be much appreciated
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,256479,256479#msg-256479
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From francis at daoine.org Thu Feb 5 18:16:15 2015
From: francis at daoine.org (Francis Daly)
Date: Thu, 5 Feb 2015 18:16:15 +0000
Subject: Nginx Rewrite
In-Reply-To: <6460443686112c86d4d1dcf4f4fd6f1c.NginxMailingListEnglish@forum.nginx.org>
References: <6460443686112c86d4d1dcf4f4fd6f1c.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150205181615.GA13461@daoine.org>

On Thu, Feb 05, 2015 at 12:16:27PM -0500, ntamblyn wrote:

Hi there,

> How can i turn this into a rewrite i have include what i have tried

"rewrite" matches a request uri, the same as "location" does. This does not include the query string.

> I have tried
>
> rewrite /test/c_(.*)\?change_lang /test/e_$1 redirect;
>
> but that doesnt work

If "rewrite" is the right tool, a way to use it here is

    if ($args = "change_lang") {
        rewrite ^/test/c_(.*) /test/e_$1? redirect;
    }

f
-- 
Francis Daly francis at daoine.org

From nginxuser at abv.bg Thu Feb 5 20:28:33 2015
From: nginxuser at abv.bg (peter petrov)
Date: Thu, 5 Feb 2015 22:28:33 +0200 (EET)
Subject: nginx doesn't serve compressed static .html files. is it a bug?
In-Reply-To: <20150204225320.GF3125@daoine.org>
References: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> <20150204225320.GF3125@daoine.org>
Message-ID: <1309789760.133155.1423168113267.JavaMail.apache@nm3.abv.bg>

-------- Original message --------
From: Francis Daly francis at daoine.org
Subject: Re: nginx does't serve compressed static .html files. is it a bug?
To: nginx at nginx.org
Sent: 05.02.2015 00:53

On Wed, Feb 04, 2015 at 11:36:04PM +0200, peter petrov wrote:

> In my previous post I explained what I tried and didn't manage to do. I
> attached the nginx.conf file. I'll
> be very grateful if somebody shows me a successful way to do this.

http://nginx.org/r/gzip_static

gzip_static on;

curl -i --compressed http://localhost/file.html

will get file.html.gz if it exists, with "Content-Encoding: gzip"; or file.html if that exists, or 404.

Check the "bytes sent" field in your access_log to see which was sent. Omit the "--compressed" in the curl command to see the difference.

-- Francis Daly francis at daoine.org

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

Hi again, thanks for your help, Francis Daly. I did what you suggested but it doesn't work. The access.log says "200 162" both times, with --compressed or without it, for the nginx welcome screen. It is very weird.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From agentzh at gmail.com Thu Feb 5 20:38:01 2015
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Thu, 5 Feb 2015 12:38:01 -0800
Subject: [ANN] OpenResty 1.7.7.2 released
In-Reply-To:
References:
Message-ID:

Hello!

On Thu, Feb 5, 2015 at 12:47 AM, Batuhan Göksu wrote:
> There are many great new features.
> Why "lua" has not been updated ????

By default, OpenResty uses LuaJIT, which is actively updated upon almost every new OpenResty release.
The bundled standard Lua interpreter is only used when you explicitly disable LuaJIT by configuring with the --with-lua51 option (default off).

And yeah, we can only use the standard Lua interpreter in its official 5.1 release series, because Lua 5.2 and 5.3 are languages incompatible with 5.1, and have incompatible ABI and API as well. See the following comment for more details (it was about Lua 5.2 but applies equally well to Lua 5.3 and beyond):

https://github.com/openresty/lua-nginx-module/issues/343#issuecomment-36442169

FWIW, LuaJIT implements the Lua 5.1 language as well as some *compatible* Lua 5.2 features, which can be enabled by the user by specifying the --with-luajit-xcflags='-DLUAJIT_ENABLE_LUA52COMPAT' ./configure option while building OpenResty.

Regards,
-agentzh

From francis at daoine.org Thu Feb 5 21:14:44 2015
From: francis at daoine.org (Francis Daly)
Date: Thu, 5 Feb 2015 21:14:44 +0000
Subject: nginx does't serve compressed static .html files. is it a bug?
In-Reply-To: <1309789760.133155.1423168113267.JavaMail.apache@nm3.abv.bg>
References: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> <20150204225320.GF3125@daoine.org> <1309789760.133155.1423168113267.JavaMail.apache@nm3.abv.bg>
Message-ID: <20150205211444.GB13461@daoine.org>

On Thu, Feb 05, 2015 at 10:28:33PM +0200, peter petrov wrote:

Hi there,

> Francis Daly I did what you suggested but it doesn't work. Access.log says both times with --compressed or without it "200 162" for the nginx welcome screen. It is very weird.
It works for me:

# ls -l html/index.html*
-rw-r--r-- 1 root root 612 Jul 23 2013 html/index.html
-rw-r--r-- 1 root root 392 Feb 5 21:06 html/index.html.gz
# cat conf/nginx.conf
events {}
http {
    gzip_static on;
    server {
        listen 8080;
    }
}
# curl -i http://localhost:8080/
[http 200, I see the content]
# curl -i --compressed http://localhost:8080/
[http 200, I see the content]
# tail -n 2 logs/access.log
127.0.0.1 - - [05/Feb/2015:21:08:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.15.5 (i386-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5"
127.0.0.1 - - [05/Feb/2015:21:08:57 +0000] "GET / HTTP/1.1" 200 392 "-" "curl/7.15.5 (i386-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5"

Repeat the test with exactly this configuration to see if it fails for you. Or show exactly what you are doing so someone can see if it works for them.

Good luck with it,

f
-- Francis Daly francis at daoine.org

From nginx-forum at nginx.us Fri Feb 6 18:49:22 2015
From: nginx-forum at nginx.us (ericr)
Date: Fri, 06 Feb 2015 13:49:22 -0500
Subject: Intermittent SSL Handshake Errors
In-Reply-To:
References:
Message-ID:

We've been unable to reproduce it with any one browser or IP address. It really is very intermittent. Fortunately, I believe we've gotten to the bottom of this. It looks like our data center switched us over to an anti-DDoS route. This means all of our traffic has been passing through hardware that performs heavy packet filtering. The packet loss was causing a lot of confusion for both server and clients. The TLS version fallback that some browsers do upon an unsuccessful handshake made it all the more confusing, since these errors get logged as SSL errors in nginx logs.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,256492#msg-256492

From nginxuser at abv.bg Fri Feb 6 21:22:22 2015
From: nginxuser at abv.bg (peter petrov)
Date: Fri, 6 Feb 2015 23:22:22 +0200 (EET)
Subject: nginx does't serve compressed static .html files.
is it a bug?
In-Reply-To: <20150205211444.GB13461@daoine.org>
References: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> <20150204225320.GF3125@daoine.org> <1309789760.133155.1423168113267.JavaMail.apache@nm3.abv.bg>
Message-ID: <1894463301.236874.1423257742105.JavaMail.apache@nm2.abv.bg>

-------- Original message --------
From: Francis Daly francis at daoine.org
Subject: Re: nginx does't serve compressed static .html files. is it a bug?
To: nginx at nginx.org
Sent: 05.02.2015 23:14

On Thu, Feb 05, 2015 at 10:28:33PM +0200, peter petrov wrote:

Hi there,

> Francis Daly I did what you suggested but it doesn't work. Access.log says both times with --compressed or without it "200 162" for the nginx welcome screen. It is very weird.

It works for me:

# ls -l html/index.html*
-rw-r--r-- 1 root root 612 Jul 23 2013 html/index.html
-rw-r--r-- 1 root root 392 Feb 5 21:06 html/index.html.gz
# cat conf/nginx.conf
events {}
http {
    gzip_static on;
    server {
        listen 8080;
    }
}
# curl -i http://localhost:8080/
[http 200, I see the content]
# curl -i --compressed http://localhost:8080/
[http 200, I see the content]
# tail -n 2 logs/access.log
127.0.0.1 - - [05/Feb/2015:21:08:51 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.15.5 (i386-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5"
127.0.0.1 - - [05/Feb/2015:21:08:57 +0000] "GET / HTTP/1.1" 200 392 "-" "curl/7.15.5 (i386-redhat-linux-gnu) libcurl/7.15.5 OpenSSL/0.9.8b zlib/1.2.3 libidn/0.6.5"

Repeat the test with exactly this configuration to see if it fails for you. Or show exactly what you are doing so someone can see if it works for them.

Good luck with it,

Francis Daly francis at daoine.org

Hi, thank you for your efforts and patience. Things are getting better here and I have almost achieved your result. Nginx serves static files now, but it writes something above "Welcome to nginx!": "index.html 00006440000..." followed by some numbers, finishing with "Oustar rootroot".

1.
sudo tar czvf index.html.gz index.html
2. sudo curl -i --compressed http://localhost:8080/ or sudo curl -i http://localhost:8080/
They both produce the same result: index.html000064400000.....
3. sudo tail -n 2 logs/access.log shows the same result, "200 483".

480B was the best I could achieve using both tar and gzip -9. How did you manage to get 392B out of 612B?

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org Fri Feb 6 22:53:13 2015
From: francis at daoine.org (Francis Daly)
Date: Fri, 6 Feb 2015 22:53:13 +0000
Subject: nginx does't serve compressed static .html files. is it a bug?
In-Reply-To: <1894463301.236874.1423257742105.JavaMail.apache@nm2.abv.bg>
References: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> <20150204225320.GF3125@daoine.org> <1309789760.133155.1423168113267.JavaMail.apache@nm3.abv.bg> <20150205211444.GB13461@daoine.org> <1894463301.236874.1423257742105.JavaMail.apache@nm2.abv.bg>
Message-ID: <20150206225313.GC13461@daoine.org>

On Fri, Feb 06, 2015 at 11:22:22PM +0200, peter petrov wrote:

Hi there,

> # ls -l html/index.html*
> -rw-r--r-- 1 root root 612 Jul 23 2013 html/index.html
> -rw-r--r-- 1 root root 392 Feb 5 21:06 html/index.html.gz

> 1. sudo tar czvf index.html.gz index.html

Don't use tar. Just use gzip.

gzip index.html

or

gzip -c index.html > index.html.gz

f
-- Francis Daly francis at daoine.org

From luky-37 at hotmail.com Fri Feb 6 23:30:18 2015
From: luky-37 at hotmail.com (Lukas Tribus)
Date: Sat, 7 Feb 2015 00:30:18 +0100
Subject: Intermittent SSL Handshake Errors
In-Reply-To:
References: ,
Message-ID:

> We've been unable to reproduce it with any one browser or IP address. It
> really is very intermittent. Fortunately, I believe we've gotten to the
> bottom of this. It looks like our data center switched us over to anti-DDoS
> route.
This means all of our traffic has been passing through hardware that
> performs heavy packet filtering. The packet loss was causing a lot of
> confusion for both server and clients. The TLS version fallback that some
> browsers do upon an unsuccessful handshake made it all the more confusing,
> since these errors get logged as SSL errors in nginx logs.

So a MITM security device basically did a TLS downgrade attack here, which the new fallback extension successfully prevented. That's a good thing; it means it works.

From nginxuser at abv.bg Sat Feb 7 19:56:57 2015
From: nginxuser at abv.bg (peter petrov)
Date: Sat, 7 Feb 2015 21:56:57 +0200 (EET)
Subject: nginx does't serve compressed static .html files. is it a bug?
In-Reply-To: <20150206225313.GC13461@daoine.org>
References: <1582832677.54520.1423085764345.JavaMail.apache@nm3.abv.bg> <20150204225320.GF3125@daoine.org> <1309789760.133155.1423168113267.JavaMail.apache@nm3.abv.bg> <20150205211444.GB13461@daoine.org> <1894463301.236874.1423257742105.JavaMail.apache@nm2.abv.bg> <20150206225313.GC13461@daoine.org>
Message-ID: <169709280.252732.1423339017714.JavaMail.apache@nm3.abv.bg>

-------- Original message --------
From: Francis Daly francis at daoine.org
Subject: Re: nginx does't serve compressed static .html files. is it a bug?
To: nginx at nginx.org
Sent: 07.02.2015 00:53

On Fri, Feb 06, 2015 at 11:22:22PM +0200, peter petrov wrote:

Hi there,

> # ls -l html/index.html*
> -rw-r--r-- 1 root root 612 Jul 23 2013 html/index.html
> -rw-r--r-- 1 root root 392 Feb 5 21:06 html/index.html.gz

> 1. sudo tar czvf index.html.gz index.html

Don't use tar. Just use gzip.

gzip index.html

or

gzip -c index.html > index.html.gz

f
-- Francis Daly francis at daoine.org

Hi, everything is working perfectly now. You can't imagine how grateful I am.
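For anyone who hits the same symptom: the difference Francis points out can be checked directly. tar prepends its own 512-byte header block (file name, octal permissions, the "ustar" magic) to the data, which is exactly the "index.html0000644... ustar rootroot" text that appeared above the welcome page, while plain gzip produces only the compressed bytes of the file. A quick reproduction (file name and content are illustrative):

```shell
# Create a small test file (name and content are illustrative).
printf '<html>hello</html>\n' > index.html

# What was done originally: a gzip-compressed *tar archive*.
tar czf index.tar.gz index.html

# What gzip_static expects: the file itself, compressed with gzip.
gzip -c index.html > index.html.gz

# The decompressed tar stream starts with tar's 512-byte header block,
# which holds the file name, the octal mode, and the "ustar" magic:
gzip -dc index.tar.gz | head -c 512 | grep -ac ustar

# The plain .gz decompresses back to exactly the original bytes:
gzip -dc index.html.gz | cmp - index.html && echo identical
```

So a browser (or curl --compressed) decoding the tar-built ".gz" sees the tar header rendered before the page, which matches the stray text described above.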
_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Sat Feb 7 20:23:44 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Sat, 07 Feb 2015 15:23:44 -0500
Subject: nginx does't serve compressed static .html files. is it a bug?
In-Reply-To: <169709280.252732.1423339017714.JavaMail.apache@nm3.abv.bg>
References: <169709280.252732.1423339017714.JavaMail.apache@nm3.abv.bg>
Message-ID:

peter petrov Wrote:
-------------------------------------------------------
> Everything is working perfectly now. You can't imagine how grateful am

http://nginx.org/en/donation.html :-)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256453,256505#msg-256505

From reallfqq-nginx at yahoo.fr Sat Feb 7 23:39:54 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 8 Feb 2015 00:39:54 +0100
Subject: G-WAN assumptions: Part of truth/lies?
Message-ID:

Hello,

Documenting myself on proper benchmarking, I ran into the following page: http://gwan.com/en_apachebench_httperf.html

Their conclusion is that their product is the best of all. Well, 'of course' one might say... ;o)

What surprised me most is that they claim to use less resources AND perform better. That particularly strikes me because usually, to favor one side, you take blows on the other one.

To me, the problem of such tests is that they are a mix of realistic/unrealistic behaviors, the first being invoked to justify useful conclusions, the latter to make a specific environment so that features from the Web server (as opposed to other components of the infrastructure) are tested.

They are arrogant enough to claim theirs is bigger and paranoid enough to call almost every other benchmark biased or coming from haste/FUD campaigns. That is only OK if they are as pure as the driven snow...
I need expert eyes of yours to determine to what extent those claims are grounded.

Particular points:
- Is their nginx configuration suitable for valid benchmark results?
- Why is your wrk test tool built in such a way that it pre-establishes TCP connections?
- Why is nginx pre-allocating resources so its memory footprint is large when connections are pre-established? I thought nginx's event-based system was allocating resources on-the-fly, as G-WAN seems to be doing. (cf. 'The (long) story of Nginx's "wrk"' section)
- Why is wrk (in G-WAN's opinion) 'too slow under 10,000 simultaneous connections'? (cf. 'The (long) story of Nginx's "wrk"' section)

--- *B. R.*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From reallfqq-nginx at yahoo.fr Sat Feb 7 23:52:27 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 8 Feb 2015 00:52:27 +0100
Subject: G-WAN assumptions: Part of truth/lies?
In-Reply-To:
References:
Message-ID:

One forgotten specific point I also wanted a reply upon:
- Why is nginx' latency that high (200ms) while serving the built-in nop.gif content? (cf. 'The (long) story of Nginx's "wrk"' section)

--- *B. R.*

On Sun, Feb 8, 2015 at 12:39 AM, B.R. wrote:
> Hello,
>
> Documentating myself on proper benchmarking, I ran into the following page:
> http://gwan.com/en_apachebench_httperf.html
>
> Their conclusion is that their product is the best of all. Well, 'of
> course' one might say... ;o)
>
> What surprised me most that they claim to use less resources AND perform
> better. That particularly strikes me because usually ot favor one side, you
> take blows on the other one.
>
> To me, the problem of such tests is that they are a mix of
> realistic/unrealistic behaviors, the first being invoked to justify useful
> conclusions, the latter to make a specific environment so that features
> from the Web server (as opposed to other components of the infrastructure)
> are tested.
> They are arrogant enough to claim theirs is bigger and paranoid enough to
> call almost every other benchmark biased or coming from haste/FUD
> campaigns. That is only OK if they are as pure as the driven snow...
>
> I need expert eyes of yours to determine to which end those claims are
> grounded.
>
> Particular points:
> - Is their nginx configuration suitable for valid benchmark results?
> - Why is your wrk test tool built in such way in pre-establishes TCP?
> - Why is nginx pre-allocating resources so its memory footprint is large
> when connections are pre-established? I thought nginx event-based system
> was allocating resources on-the-fly, as G-WAN seems to be doing it. (cf.
> 'The (long) story of Nginx's "wrk"' section)
> - Why is wrk (in G-WAN's opinion) 'too slow under 10,000 simultaneous
> connections'? (cf. 'The (long) story of Nginx's "wrk"' section)
> ---
> *B. R.*

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Sun Feb 8 01:08:12 2015
From: nginx-forum at nginx.us (deltaxfx)
Date: Sat, 07 Feb 2015 20:08:12 -0500
Subject: Receiving 2 strict-transport-security headers with different times
Message-ID:

I have a domain setup with SSL and I am trying to get HSTS headers working. I have done this in NGINX before with no problem. On this new domain I can't seem to get HSTS working properly. Not sure what I am doing wrong.

I have the following in the server block for the SSL server:

add_header Strict-Transport-Security "max-age=31536000;";

When I run "curl -s -D- https://my.domain.net/ | grep Strict" I receive the following:

Strict-Transport-Security: max-age=0
Strict-Transport-Security: max-age=31536000;

From all the reading I've done trying to figure this out, my impression is that with the add_header in the server directive, that will override any previous declaration (there are none). Is that correct?
I grep'ed my entire /etc directory and there is only one instance of "max-age" and that is in my ssl server config, with one year (31536000 seconds). So nowhere on this system, which was just built, and only accessed by me, is there any reference to HSTS with max-age=0. There is only one config in sites-enabled, and that is for my.domain.net. There is a port 80 config with a return 301 statement to permanently redirect to the SSL server config.

My nginx version is 1.6.2, on Ubuntu 14.04 LTS.

I have been unable to find any help on the web for where the invalid (max-age=0) could be coming from. When testing on ssllabs they report the max-age=0 header. When running the curl statement above on my local network I see the above output.

I'm not sure where to go from here trying to figure this out. There is nothing in the NGINX error log; I wouldn't expect anything, as NGINX restarts with no issues.

Thanks for reading!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256508,256508#msg-256508

From dewanggaba at xtremenitro.org Sun Feb 8 04:06:20 2015
From: dewanggaba at xtremenitro.org (NitrouZ)
Date: Sun, 8 Feb 2015 11:06:20 +0700
Subject: Receiving 2 strict-transport-security headers with different times
In-Reply-To:
References:
Message-ID:

I've had the same experience with the Laravel framework.
They have another configuration to set header like that. What web apps framework do you use? On Sunday, February 8, 2015, deltaxfx wrote: > I have a domain setup with SSL and I am trying to get HSTS headers working. > I have done this in NGINX before with no problem. On this new domain I > can't > seem to get HSTS working properly. Not sure what I am doing wrong. > I have the following in the server block for the SSL server: > add_header Strict-Transport-Security "max-age=31536000;"; > > When I run "curl -s -D- https://my.domain.net/ | grep Strict" > I receive the following: > Strict-Transport-Security: max-age=0 > Strict-Transport-Security: max-age=31536000; > > From all the reading I've done trying to figure this out, my impression is > that with the add_header in the server directive, that will override any > previous declaration (there are none). Is that correct? > I grep'ed my entire /etc directory and there is only one instance of > "max-age" and that is in my ssl server config, with one year (31536000 > seconds). So no where on this system, which was just built, and only > accessed by me, is there any reference to HSTS with max-age=0. There is > only > one config in sites-enabled, and that is for my.domain.net. There is a > port > 80 config with a return 301 statement to permanently redirect to the SSL > server config. > > My nginx version is 1.6.2, on Ubuntu 14.04 LTS. > I have been unable to find any help on the web for where the invalid > (max-age=0) could be coming from. When testing on ssllabs they report the > max-age=0 header. When running the curl statement above on my local network > I show the above output. > > I'm not sure where to go from here trying to figure this out. There is > nothing in the NGINX error log, I wouldn't expect anything as NGINX > restarts > with no issues. > > Thanks for reading! 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256508,256508#msg-256508 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Sent from iDewangga Device -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Feb 8 04:32:13 2015 From: nginx-forum at nginx.us (deltaxfx) Date: Sat, 07 Feb 2015 23:32:13 -0500 Subject: Receiving 2 strict-transport-security headers with different times In-Reply-To: References: Message-ID: <65d8798aa5f6aa580e091d9cd396de81.NginxMailingListEnglish@forum.nginx.org> Very interesting. I am using ownCloud. I thought something like that may be the case and did a couple quick searches that didn't turn up anything, but I'll give it another look now. Thanks for the hint! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256508,256512#msg-256512 From nginx-forum at nginx.us Sun Feb 8 04:42:11 2015 From: nginx-forum at nginx.us (deltaxfx) Date: Sat, 07 Feb 2015 23:42:11 -0500 Subject: [Solved] Receiving 2 strict-transport-security headers with different times In-Reply-To: <65d8798aa5f6aa580e091d9cd396de81.NginxMailingListEnglish@forum.nginx.org> References: <65d8798aa5f6aa580e091d9cd396de81.NginxMailingListEnglish@forum.nginx.org> Message-ID: dewanggaba, your hint was correct. Even though I am using the NGINX config supplied by ownCloud, there was still a setting in the admin panel to force HTTPS, which also sends an HSTS header. But the kicker is, if force HTTPS (in PHP) is set to off (and just forced through the server config), ownCloud sends an HSTS header for max-age=0! This is ownCloud 7.0.4 (stable). 
Here is the relevant code in case it helps anyone who might be searching for the same thing in the future:

public static function checkSSL() {
    // redirect to https site if configured
    if (\OC::$server->getSystemConfig()->getValue('forcessl', false)) {
        // Default HSTS policy
        $header = 'Strict-Transport-Security: max-age=31536000';
        // If SSL for subdomains is enabled add "; includeSubDomains" to the header
        if (\OC::$server->getSystemConfig()->getValue('forceSSLforSubdomains', false)) {
            $header .= '; includeSubDomains';
        }
        header($header);
        ini_set('session.cookie_secure', 'on');
        if (OC_Request::serverProtocol() <> 'https' and !OC::$CLI) {
            $url = 'https://' . OC_Request::serverHost() . OC_Request::requestUri();
            header("Location: $url");
            exit();
        }
    } else {
        // Invalidate HSTS headers
        if (OC_Request::serverProtocol() === 'https') {
            header('Strict-Transport-Security: max-age=0');
        }
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256508,256513#msg-256513

From root at xtremenitro.org Sun Feb 8 04:43:47 2015
From: root at xtremenitro.org (NitrouZ)
Date: Sun, 8 Feb 2015 11:43:47 +0700
Subject: Receiving 2 strict-transport-security headers with different times
In-Reply-To:
References: <65d8798aa5f6aa580e091d9cd396de81.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hi, Glad to help. Cheers.

On Sunday, February 8, 2015, deltaxfx wrote:
> dewanggaba, your hint was correct. Even though I am using the NGINX config
> supplied by ownCloud, there was still a setting in the admin panel to force
> HTTPS, which also sends an HSTS header. But the kicker is, if force HTTPS
> (in PHP) is set to off (and just forced through the server config), ownCloud
> sends an HSTS header for max-age=0!
> This is ownCloud 7.0.4 (stable).
> Here is the relevant code in case it helps anyone who might be searching > for > the same thing in the future: > > > public static function checkSSL() { > // redirect to https site if configured > if (\OC::$server->getSystemConfig()->getValue('forcessl', > false)) { > // Default HSTS policy > $header = 'Strict-Transport-Security: > max-age=31536000'; > // If SSL for subdomains is enabled add "; > includeSubDomains" to the > header > > if(\OC::$server->getSystemConfig()->getValue('forceSSLforSubdomains', > false)) { > $header .= '; includeSubDomains'; > } > header($header); > ini_set('session.cookie_secure', 'on'); > if (OC_Request::serverProtocol() <> 'https' and > !OC::$CLI) { > $url = 'https://' . > OC_Request::serverHost() . > OC_Request::requestUri(); > header("Location: $url"); > exit(); > } > } else { > // Invalidate HSTS headers > if (OC_Request::serverProtocol() === 'https') { > header('Strict-Transport-Security: > max-age=0'); > } > } > } > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256508,256513#msg-256513 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Sent from iDewangga Device -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Feb 8 22:41:27 2015 From: nginx-forum at nginx.us (ChrisAha) Date: Sun, 08 Feb 2015 17:41:27 -0500 Subject: Limiting parallel requests by source IP address Message-ID: <3933129952778bc06ef69074e3f6a9c8.NginxMailingListEnglish@forum.nginx.org> I am using nginx as a reverse proxy to a Ruby on Rails application using the unicorn server on multiple load-balanced application servers. This configuration allows many HTTP requests to be serviced in parallel. I'll call the total number of parallel requests that can be serviced 'P', which is the same as the number of unicorn processes running on the application servers. 
I have many users accessing the nginx server and I want to ensure that no single user can consume too much (or all) of the resources. There are existing plugins for this type of thing: limit_conn and limit_req. The problem is that it looks like these plugins are based upon the request rate (i.e. requests per second). This is a less than ideal way to limit resources because the rate at which requests are made does not equate to the amount of load the user is putting on the system. For example, if the requests being made are simple (and quick to service) then it might be OK for a user to make 20 per second. However, if the requests are complex and take a longer time to service then we may not want a user to be able to make more than 1 of these expensive requests per second. So it is impossible to choose a rate that allows many quick requests, but few slow ones. Instead of limiting by rate, it would be better to limit the number of *parallel* requests a user can make. So if the total system can service P parallel requests we would limit any one user to say P/10 requests. So from the perspective of any one user our system appears to have 1/10th of the capacity that it really does. We don't need to limit the capacity to P/number_of_users because in practice most users are inactive at any point in time. We just need to ensure that no matter how many requests, fast or slow, that one user floods the system with, they can't consume all of the resources and so impact other users. Note that I don't want to return a 503 error message to a user who tries to make more than P/10 requests at once. I just want to queue the next request so that it will eventually execute, just more slowly. I can't find any existing plugin for Nginx that does this. Am I missing something? I am planning to write a plugin that will allow me to implement resource limits in this way. But I am curious if anyone can see a hole in this logic, or an alternative way to achieve the same thing. Thanks, Chris. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256517,256517#msg-256517

From steve at greengecko.co.nz Mon Feb 9 00:30:51 2015
From: steve at greengecko.co.nz (Steve Holdoway)
Date: Mon, 09 Feb 2015 13:30:51 +1300
Subject: G-WAN assumptions: Part of truth/lies?
In-Reply-To:
References:
Message-ID: <1423441851.14176.175.camel@steve-new>

And of course, you should ask what relevance delivering a single, 100-byte file a huge number of times actually has in real life, or where useful things - like a comparison of server-side languages behind various web servers - are tested.

And yes, any web server's latency should be approximately zero for testing delivery of a static file locally (also pointless).

Returning empty_gif, rather than nop.gif, is also exercising internal code rather than just delivering a file, so the processing is not comparable.

Saving open-file metadata may also help, but the test is so skewed and unrealistic it's not really worth the effort.

That's just my view as a SysAdmin.

On Sun, 2015-02-08 at 00:52 +0100, B.R. wrote:
> One forgotten specific point I also wanted a reply upon:
>
> - Why is nginx' latency that high (200ms) while serving the built-in
> nop.gif content? (cf. 'The (long) story of Nginx's "wrk"' section)
>
> ---
> B. R.
>
> On Sun, Feb 8, 2015 at 12:39 AM, B.R. wrote:
> Hello,
>
> Documentating myself on proper benchmarking, I ran into the
> following page: http://gwan.com/en_apachebench_httperf.html
>
> Their conclusion is that their product is the best of all.
> Well, 'of course' one might say... ;o)
>
> What surprised me most that they claim to use less resources
> AND perform better. That particularly strikes me because
> usually ot favor one side, you take blows on the other one.
>
> To me, the problem of such tests is that they are a mix of
> realistic/unrealistic behaviors, the first being invoked to
> justify useful conclusions, the latter to make a specific
> environment so that features from the Web server (as opposed
> to other components of the infrastructure) are tested.
>
> They are arrogant enough to claim theirs is bigger and
> paranoid enough to call almost every other benchmark biased or
> coming from haste/FUD campaigns. That is only OK if they are
> as pure as the driven snow...
>
> I need expert eyes of yours to determine to which end those
> claims are grounded.
>
> Particular points:
> - Is their nginx configuration suitable for valid benchmark
> results?
> - Why is your wrk test tool built in such way in
> pre-establishes TCP?
> - Why is nginx pre-allocating resources so its memory
> footprint is large when connections are pre-established? I
> thought nginx event-based system was allocating resources
> on-the-fly, as G-WAN seems to be doing it. (cf. 'The (long)
> story of Nginx's "wrk"' section)
> - Why is wrk (in G-WAN's opinion) 'too slow under 10,000
> simultaneous connections'? (cf. 'The (long) story of Nginx's
> "wrk"' section)
> ---
> B. R.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-- Steve Holdoway BSc(Hons) MIITP
http://www.greengecko.co.nz
Linkedin: http://www.linkedin.com/in/steveholdoway
Skype: sholdowa

From nginx-forum at nginx.us Mon Feb 9 06:13:16 2015
From: nginx-forum at nginx.us (pistacio)
Date: Mon, 09 Feb 2015 01:13:16 -0500
Subject: Disconnect clients at a given time
Message-ID: <259105b283a7d1f2b44e32bc70f28d30.NginxMailingListEnglish@forum.nginx.org>

Is it possible to disconnect a connection on purpose after x minutes?
Sort of "disconnect zoneX 5min" Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256520,256520#msg-256520 From reallfqq-nginx at yahoo.fr Mon Feb 9 19:41:49 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 9 Feb 2015 20:41:49 +0100 Subject: Limiting parallel requests by source IP address In-Reply-To: <3933129952778bc06ef69074e3f6a9c8.NginxMailingListEnglish@forum.nginx.org> References: <3933129952778bc06ef69074e3f6a9c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: First, you need to define 'user', which is not a trivial problem. Unless you use the commercial subscription, it is hard to tie a connection to a session. You can use components in front of nginx to identify them (usually with cookies). Thus 'user' in nginx FOSS usually means 'IP address'. Now, you have the limit_conn module, as you noticed, which allows you to limit (as its name suggests) connections, and not rate (which is the job of... the limit_req module). You need to set a key on the zone to the value described above in order to identify a 'user'. Then the limit_conn directive allows you to set the connections limit. To make the limit number vary, you will need to automatically (periodically) re-generate some included configuration file and reload the nginx configuration, as it is evaluated once before requests are processed. You might also use some lua scripting to change it on the fly, I suppose. However, you won't be able to do anything other than reject extra connections with a 503 (or another customisable HTTP code). The client will then need to try to connect again following a certain shared protocol after some conditions are met (time?). === Another idea (it looks dirty, but you will decide on that): If you know what time a specific request needs to complete, you can try to play with limit_req to limit connections, using the burst property of this directive to stack waiting (but not rejected) connections.
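A sketch of the two modules B.R. describes, using the per-IP definition of 'user'; the zone names, zone sizes, and limits here are hypothetical:

```nginx
http {
    # one shared-memory state slot per client IP
    limit_conn_zone $binary_remote_addr zone=perip_conn:10m;
    limit_req_zone  $binary_remote_addr zone=perip_req:10m rate=2r/m;

    server {
        location / {
            # cap simultaneous connections per IP
            limit_conn perip_conn 5;
            # cap request rate per IP, queueing up to 5 extra requests
            limit_req zone=perip_req burst=5;
        }
    }
}
```

Requests rejected by either directive get a 503 by default; the code can be customised with the limit_conn_status and limit_req_status directives.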
Say you want to limit connections to 5, but accept up to 10 of them, all that in 30s, you can use a limit of 5 associated with a rate of 2/m and a burst of 5. All that requires testing, provided I correctly understood your specifications. My 2 cents, --- *B. R.* On Sun, Feb 8, 2015 at 11:41 PM, ChrisAha wrote: > I am using nginx as a reverse proxy to a Ruby on Rails application using > the > unicorn server on multiple load-balanced application servers. This > configuration allows many HTTP requests to be serviced in parallel. I'll > call the total number of parallel requests that can be serviced 'P', which > is the same as the number of unicorn processes running on the application > servers. > > I have many users accessing the nginx server and I want to ensure that no > single user can consume too much (or all) of the resources. There are > existing plugins for this type of thing: limit_conn and limit_req. The > problem is that it looks like these plugins are based upon the request rate > (i.e. requests per second). This is a less than ideal way to limit > resources > because the rate at which requests are made does not equate to the amount > of > load the user is putting on the system. For example, if the requests being > made are simple (and quick to service) then it might be OK for a user to > make 20 per second. However, if the requests are complex and take a longer > time to service then we may not want a user to be able to make more than 1 > of these expensive requests per second. So it is impossible to choose a > rate > that allows many quick requests, but few slow ones. > > Instead of limiting by rate, it would be better to limit the number of > *parallel* requests a user can make. So if the total system can service P > parallel requests we would limit any one user to say P/10 requests. So from > the perspective of any one user our system appears to have 1/10th of the > capacity that it really does. 
We don't need to limit the capacity to > P/number_of_users because in practice most users are inactive at any point > in time. We just need to ensure that no matter how many requests, fast or > slow, that one user floods the system with, they can't consume all of the > resources and so impact other users. > > Note that I don't want to return a 503 error message to a user who tries to > make more than P/10 requests at once. I just want to queue the next request > so that it will eventually execute, just more slowly. > > I can't find any existing plugin for Nginx that does this. Am I missing > something? > > I am planning to write a plugin that will allow me to implement resource > limits in this way. But I am curious if anyone can see a hole in this > logic, > or an alternative way to achieve the same thing. > > Thanks, > > Chris. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256517,256517#msg-256517 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 10 03:23:20 2015 From: nginx-forum at nginx.us (ChrisAha) Date: Mon, 09 Feb 2015 22:23:20 -0500 Subject: Limiting parallel requests by source IP address In-Reply-To: References: Message-ID: <51f614cd049b34c8430462b873f60732.NginxMailingListEnglish@forum.nginx.org> Hi, For my purposes IP address is an acceptable definition of a user. In any case I would use the commercial subscription if it would help with this problem. Rate limiting doesn't help because I don't know ahead of time whether a user will make many fast requests, or fewer slow requests - and in fact a user might make a mix of fast and slow requests, so rate limiting wouldn't be effective at limiting the resources being used. Regards, Chris. B.R. 
Wrote: ------------------------------------------------------- > First, you need to define 'user', which is not a trivial problem. > Unless you use the commercial subscription, it is hard to tie a > connection > to a session. You can use components in fornt of nginx to identify > them > (usually with cookies). > Thus 'user' in nginx FOSS usually means 'IP address'. > > Now, you have the limit_conn > > module, as > you noticed, which allows you to limit (as its name suggests) > connections, > and not rate (which is the job of... the limit_req > > module). > You need to set a key on the zone to the previously told value in > order to > identify a 'user'. Then the limit_conn directive allows you to set the > connections limit. > > To make the limit number vary, you will need to automatically > (periodically) re-generate some included configuration file and reload > nginx configuration, as it is evaluated once before requests are > processed. > You might also use some lua scripting to change it on the fly, I > suppose. > > However, you won't be able to do anything else by rejecting extra > connections with a 503 (or another customisable HTTP code). The client > will > then need to try to connect again following a certain shared protocol > after > some conditions are met (time?). > > === > > Another idea (looks like dirty, but you will decide on that): > If you know what time a specific request needs to complete, you can > try to > play with limit_req to limit connections, using the burst property of > this > directive to stack waiting (but not rejected) connections. > Say you want to limit connections to 5, but accept up to 10 of them, > all > that in 30s, you can use a limit of 5 associated with a rate of 2/m > and a > burst of 5. > > All that requires testing, provided I correctly understood your > specifications. > My 2 cents, > --- > *B. 
R.* > > On Sun, Feb 8, 2015 at 11:41 PM, ChrisAha > wrote: > > > I am using nginx as a reverse proxy to a Ruby on Rails application > using > > the > > unicorn server on multiple load-balanced application servers. This > > configuration allows many HTTP requests to be serviced in parallel. > I'll > > call the total number of parallel requests that can be serviced 'P', > which > > is the same as the number of unicorn processes running on the > application > > servers. > > > > I have many users accessing the nginx server and I want to ensure > that no > > single user can consume too much (or all) of the resources. There > are > > existing plugins for this type of thing: limit_conn and limit_req. > The > > problem is that it looks like these plugins are based upon the > request rate > > (i.e. requests per second). This is a less than ideal way to limit > > resources > > because the rate at which requests are made does not equate to the > amount > > of > > load the user is putting on the system. For example, if the requests > being > > made are simple (and quick to service) then it might be OK for a > user to > > make 20 per second. However, if the requests are complex and take a > longer > > time to service then we may not want a user to be able to make more > than 1 > > of these expensive requests per second. So it is impossible to > choose a > > rate > > that allows many quick requests, but few slow ones. > > > > Instead of limiting by rate, it would be better to limit the number > of > > *parallel* requests a user can make. So if the total system can > service P > > parallel requests we would limit any one user to say P/10 requests. > So from > > the perspective of any one user our system appears to have 1/10th of > the > > capacity that it really does. We don't need to limit the capacity to > > P/number_of_users because in practice most users are inactive at any > point > > in time. 
We just need to ensure that no matter how many requests, > fast or > > slow, that one user floods the system with, they can't consume all > of the > > resources and so impact other users. > > > > Note that I don't want to return a 503 error message to a user who > tries to > > make more than P/10 requests at once. I just want to queue the next > request > > so that it will eventually execute, just more slowly. > > > > I can't find any existing plugin for Nginx that does this. Am I > missing > > something? > > > > I am planning to write a plugin that will allow me to implement > resource > > limits in this way. But I am curious if anyone can see a hole in > this > > logic, > > or an alternative way to achieve the same thing. > > > > Thanks, > > > > Chris. > > > > Posted at Nginx Forum: > > http://forum.nginx.org/read.php?2,256517,256517#msg-256517 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256517,256529#msg-256529 From nginx-forum at nginx.us Tue Feb 10 07:35:03 2015 From: nginx-forum at nginx.us (shmulik) Date: Tue, 10 Feb 2015 02:35:03 -0500 Subject: catching 408 response from upstream server Message-ID: Hi, I'm using nginx as a reverse proxy, using the proxy module. Whenever I get a response code >= 400 from the upstream server, I'm redirecting the client to a different url. This is the configuration I use (simplified): location ~ "^/proxy/host_(.*)/fallback_(.*)" { proxy_pass http://$1; proxy_intercept_errors on; error_page 404 408 500 http://$2; } However, I've noticed that when the upstream server responds with a 408 response code, nginx does not send a redirect. Instead it terminates the connection to the client. Is this the intended behavior?
Is there any way around it? I'd like to be able to intercept 408 responses as well and redirect them. Any alternative suggestions to implement it are very welcome. Thanks in advance, Shmulikb Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256530,256530#msg-256530 From sb at nginx.com Tue Feb 10 08:21:13 2015 From: sb at nginx.com (Sergey Budnevitch) Date: Tue, 10 Feb 2015 11:21:13 +0300 Subject: Benchmarking with wrk In-Reply-To: References: Message-ID: <4088D83F-8027-4115-971C-45EC178EE655@nginx.com> > On 08 Feb 2015, at 04:51, B.R. wrote: > > Hello, > > I am starting to play with some benchmarking tools. > Following Konstantin's advice (video from the last user conference ;o) ), I am avoiding ab. > > After a few runs, I notice I get some 'Socket errors' of different types: connect and timeout. > How do I get details about them? Nothing seems to pop up in system logs and there seem to be no log file for wrk. Timeouts are not errors in wrk; a timeout only means that the response wasn't received within a predefined time. By default it is 2 seconds; you may increase it with the --timeout option. As for connect errors, check listen queue length and overflows in netstat -s: netstat -s | grep -i listen From reallfqq-nginx at yahoo.fr Tue Feb 10 08:57:28 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 10 Feb 2015 09:57:28 +0100 Subject: catching 408 response from upstream server In-Reply-To: References: Message-ID: I do not see anything in the error_page documentation stating it would not send anything for 408. I suggest a problem on your side. Do you have HTTP request/response traces to show? Any sign of trouble (configuration not applied, wrong location, empty arguments, etc.)? --- *B. R.* On Tue, Feb 10, 2015 at 8:35 AM, shmulik wrote: > Hi, > I'm using nginx as reversed proxy, using the proxy module. > Whenever i get a response code >= 400 from the upstream server, i'm > redirecting the client to a different url.
> > This is the configuration i use (simplified): > > location ~ "^/proxy/host_(.*)/fallback_(.*)" { > > proxy_pass http://$1; > > proxy_intercept_errors on; > error_page 404 408 500 http://$2; > } > > However, i've noticed that when the upstream server responds with 408 > response code, nginx does not send a redirect. > Instead it terminates the connection to the client. > > Is this the intended behavior? > Is there any way around it? > I'd like to be able to intercept 408 responses as well and redirect them. > Any alternative suggestions to implement it are very welcome. > > Thanks in advance, > Shmulikb > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256530,256530#msg-256530 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From igor at sysoev.ru Tue Feb 10 09:28:06 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Tue, 10 Feb 2015 12:28:06 +0300 Subject: G-WAN assumptions: Part of truth/lies? In-Reply-To: References: Message-ID: <8D514F6F-8ABE-4005-8EB5-D267CADC9273@sysoev.ru> On 08 Feb 2015, at 02:39, B.R. wrote: > - Is their nginx configuration suitable for valid benchmark results? Probably you should disable the accept mutex ("accept_mutex off"): http://nginx.org/en/docs/ngx_core_module.html#accept_mutex Because the mutex in combination with "multi_accept on": http://nginx.org/en/docs/ngx_core_module.html#multi_accept routes almost all requests just to one worker in this 100-byte micro-benchmark. -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed...
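For reference, both directives Igor points at live in the events block; a sketch of settings that would spread the micro-benchmark's requests across workers (the worker_connections value is illustrative):

```nginx
events {
    worker_connections 1024;
    # let each worker accept() on its own instead of serializing
    # accepts through the mutex
    accept_mutex off;
    # accept one connection per event notification rather than
    # draining the whole backlog in a single worker
    multi_accept off;
}
```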
URL: From frederik.nosi at postecom.it Tue Feb 10 12:19:13 2015 From: frederik.nosi at postecom.it (Frederik Nosi) Date: Tue, 10 Feb 2015 13:19:13 +0100 Subject: catching 408 response from upstream server In-Reply-To: References: Message-ID: <54D9F741.3060107@postecom.it> Hi, On 02/10/2015 08:35 AM, shmulik wrote: > Hi, > I'm using nginx as reversed proxy, using the proxy module. > Whenever i get a response code >= 400 from the upstream server, i'm > redirecting the client to a different url. > > This is the configuration i use (simplified): > > location ~ "^/proxy/host_(.*)/fallback_(.*)" { > > proxy_pass http://$1; > > proxy_intercept_errors on; > error_page 404 408 500 http://$2; > } > > However, i've noticed that when the upstream server responds with 408 > response code, nginx does not send a redirect. > Instead it terminates the connection to the client. I think this is because of the nature of the 408 error. It means the client is slow (or a slowloris attack). Probably the only sane thing to do in that case is to drop with an error. Though, looking at the documentation it seems you can configure when a request is considered slow with these directives: client_body_timeout client_header_timeout Check here for more details: client_header_timeout > Is this the intended behavior? > Is there any way around it? See above > I'd like to be able to intercept 408 responses as well and redirect them.
> > Thanks in advance, > Shmulikb > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256530,256530#msg-256530 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Regards, Frederik From mdounin at mdounin.ru Tue Feb 10 15:02:42 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 Feb 2015 18:02:42 +0300 Subject: nginx-1.7.10 Message-ID: <20150210150242.GE19012@mdounin.ru> Changes with nginx 1.7.10 10 Feb 2015 *) Feature: the "use_temp_path" parameter of the "proxy_cache_path", "fastcgi_cache_path", "scgi_cache_path", and "uwsgi_cache_path" directives. *) Feature: the $upstream_header_time variable. *) Workaround: now on disk overflow nginx tries to write error logs once a second only. *) Bugfix: the "try_files" directive did not ignore normal files while testing directories. Thanks to Damien Tournoud. *) Bugfix: alerts "sendfile() failed" if the "sendfile" directive was used on OS X; the bug had appeared in 1.7.8. *) Bugfix: alerts "sem_post() failed" might appear in logs. *) Bugfix: nginx could not be built with musl libc. Thanks to James Taylor. *) Bugfix: nginx could not be built on Tru64 UNIX. Thanks to Goetz T. Fischer. -- Maxim Dounin http://nginx.org/en/donation.html From kworthington at gmail.com Tue Feb 10 18:30:21 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 10 Feb 2015 13:30:21 -0500 Subject: [nginx-announce] nginx-1.7.10 In-Reply-To: <20150210150250.GF19012@mdounin.ru> References: <20150210150250.GF19012@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.7.10 for Windows http://goo.gl/tzDzKY (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. 
Announcements are also available via: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Feb 10, 2015 at 10:02 AM, Maxim Dounin wrote: > Changes with nginx 1.7.10 10 Feb > 2015 > > *) Feature: the "use_temp_path" parameter of the "proxy_cache_path", > "fastcgi_cache_path", "scgi_cache_path", and "uwsgi_cache_path" > directives. > > *) Feature: the $upstream_header_time variable. > > *) Workaround: now on disk overflow nginx tries to write error logs > once > a second only. > > *) Bugfix: the "try_files" directive did not ignore normal files while > testing directories. > Thanks to Damien Tournoud. > > *) Bugfix: alerts "sendfile() failed" if the "sendfile" directive was > used on OS X; the bug had appeared in 1.7.8. > > *) Bugfix: alerts "sem_post() failed" might appear in logs. > > *) Bugfix: nginx could not be built with musl libc. > Thanks to James Taylor. > > *) Bugfix: nginx could not be built on Tru64 UNIX. > Thanks to Goetz T. Fischer. > > > -- > Maxim Dounin > http://nginx.org/en/donation.html > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 10 23:25:09 2015 From: nginx-forum at nginx.us (mex) Date: Tue, 10 Feb 2015 18:25:09 -0500 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? Message-ID: Google dumps SPDY in favour of HTTP/2, any plans or roadmap for HTTP/2 in nginx? see https://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html "HTTP is the fundamental networking protocol that powers the web.
The majority of sites use version 1.1 of HTTP, which was defined in 1999 with RFC2616. A lot has changed on the web since then, and a new version of the protocol named HTTP/2 is well on the road to standardization. We plan to gradually roll out support for HTTP/2 in Chrome 40 in the upcoming weeks. HTTP/2's primary changes from HTTP/1.1 focus on improved performance. Some key features such as multiplexing, header compression, prioritization and protocol negotiation evolved from work done in an earlier open, but non-standard protocol named SPDY. Chrome has supported SPDY since Chrome 6, but since most of the benefits are present in HTTP/2, it's time to say goodbye. We plan to remove support for SPDY in early 2016, and to also remove support for the TLS extension named NPN in favor of ALPN in Chrome at the same time. Server developers are strongly encouraged to move to HTTP/2 and ALPN." cheers, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256561,256561#msg-256561 From igrigorik at gmail.com Wed Feb 11 00:38:29 2015 From: igrigorik at gmail.com (Ilya Grigorik) Date: Tue, 10 Feb 2015 16:38:29 -0800 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: Pedantic, but I object to the wording in the title :) ... SPDY was/is an experimental branch of HTTP/2, and now that HTTP/2 is in the final stages of becoming a standard, there is no longer the need for SPDY and hence the announcement of a deprecation timeline -- it's not and never was SPDY vs. HTTP/2. That aside... From what I understand (at least from a few conversations at nginx.conf), there are already some existing efforts around enabling http/2 support? I'd love to see some official product plans and/or timelines as well. ig On Tue, Feb 10, 2015 at 3:25 PM, mex wrote: > Google dumps SPDY in favour of HTTP/2, any plans or roadmap for HTTP/2 in > nginx?
> > > see > https://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html > > "HTTP is the fundamental networking protocol that powers the web. The > majority of sites use version 1.1 of HTTP, which was defined in 1999 with > RFC2616. A lot has changed on the web since then, and a new version of the > protocol named HTTP/2 is well on the road to standardization. We plan to > gradually roll out support for HTTP/2 in Chrome 40 in the upcoming weeks. > > HTTP/2's primary changes from HTTP/1.1 focus on improved performance. Some > key features such as multiplexing, header compression, prioritization and > protocol negotiation evolved from work done in an earlier open, but > non-standard protocol named SPDY. Chrome has supported SPDY since Chrome 6, > but since most of the benefits are present in HTTP/2, it's time to say > goodbye. We plan to remove support for SPDY in early 2016, and to also > remove support for the TLS extension named NPN in favor of ALPN in Chrome > at > the same time. Server developers are strongly encouraged to move to HTTP/2 > and ALPN." > > > > cheers, > > mex > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256561,256561#msg-256561 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed Feb 11 04:13:49 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Tue, 10 Feb 2015 23:13:49 -0500 Subject: buffering / uploading large files In-Reply-To: References: Message-ID: <050ff1f5fb8133235a9a0bb8f5176c7a.NginxMailingListEnglish@forum.nginx.org> Hi Kurt, where can I get a patch for nginx version 1.6.2 (the 'official' stable version as of today)? Thank you! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256563#msg-256563 From vbart at nginx.com Wed Feb 11 06:10:37 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Wed, 11 Feb 2015 09:10:37 +0300 Subject: buffering / uploading large files In-Reply-To: <050ff1f5fb8133235a9a0bb8f5176c7a.NginxMailingListEnglish@forum.nginx.org> References: <050ff1f5fb8133235a9a0bb8f5176c7a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5075700.2N4nLyx4AH@vbart-laptop> On Tuesday 10 February 2015 23:13:49 nginxuser100 wrote: > Hi Kurt, where can I get a patch for nginx version 1.6.2 (the 'official' > stable version as of today)? Thank you! > Why not use the latest mainline version? See the difference: http://nginx.com/blog/nginx-1-6-1-7-released/ wbr, Valentin V. Bartenev From nginx-forum at nginx.us Wed Feb 11 14:22:23 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Wed, 11 Feb 2015 09:22:23 -0500 Subject: Need to remove folder name from URL Message-ID: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> Hi We're using proxy_pass to silently forward a website to a different URL. Here is the code: location ^~ / { proxy_pass http://www.example.org/; } So let's say the site is example1.org. Going to www.example1.com//history correctly shows the page that would be visible at www.example.org//history but I need to remove the folder name, making it www.example1.org/history. How can I do this?
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256577,256577#msg-256577 From nginx-forum at nginx.us Wed Feb 11 14:37:15 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 11 Feb 2015 09:37:15 -0500 Subject: Need to remove folder name from URL In-Reply-To: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> References: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Maybe something like: rewrite //([^/]+) /$1 break; Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256577,256578#msg-256578 From nginx-forum at nginx.us Wed Feb 11 15:03:41 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Wed, 11 Feb 2015 10:03:41 -0500 Subject: Need to remove folder name from URL In-Reply-To: References: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <12bda6a703c7e8a3a00271676098f797.NginxMailingListEnglish@forum.nginx.org> Thanks, trying that gives me www.example1.com//history but the page doesn't display... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256577,256579#msg-256579 From nginx-forum at nginx.us Wed Feb 11 15:06:31 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 11 Feb 2015 10:06:31 -0500 Subject: Need to remove folder name from URL In-Reply-To: <12bda6a703c7e8a3a00271676098f797.NginxMailingListEnglish@forum.nginx.org> References: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> <12bda6a703c7e8a3a00271676098f797.NginxMailingListEnglish@forum.nginx.org> Message-ID: Enable debug logging and see in the logs what's actually being rewritten.
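One way to follow itpp2012's suggestion and watch rewrites in the logs; the log path is illustrative, the debug level needs a binary built with --with-debug, while rewrite_log works in any build:

```nginx
# debug-level error log; requires nginx compiled with --with-debug
error_log /var/log/nginx/error.log debug;

http {
    server {
        # log ngx_http_rewrite_module processing results at the "notice" level
        rewrite_log on;
    }
}
```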
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256577,256580#msg-256580 From nginx-forum at nginx.us Wed Feb 11 16:35:03 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Wed, 11 Feb 2015 11:35:03 -0500 Subject: Need to remove folder name from URL In-Reply-To: References: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> <12bda6a703c7e8a3a00271676098f797.NginxMailingListEnglish@forum.nginx.org> Message-ID: Will do...in case this helps...here's my config server { listen 80; client_max_body_size 4G; server_name example1.com; passenger_enabled on; root /home//current/public; rails_env production; keepalive_timeout 5; location = / { proxy_pass http://example1.com/; # This is to silently redirect the root of http://example1.com to http://example1.com/ } location ~ ^/assets/ { root /home//current/public; expires max; add_header Cache-Control public; add_header ETag ""; break; } location /system { proxy_pass http://www.example.com; # This is to silently redirect http://example1.com/system to example.com/system, but to still show example1.com/system in the address bar } location ^~ /wcmc { proxy_pass http://example.com/; # This silently redirects example1.com/ to example.com/ } } This all works in that going to example1.com gives me the page at example.com/ Clicking the links on the page takes me to the correct page on example.com, but shows example1.com in the address bar. eg example1.com//history I need to hide the part of it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256577,256584#msg-256584 From nginx-forum at nginx.us Wed Feb 11 16:45:46 2015 From: nginx-forum at nginx.us (lmm5247) Date: Wed, 11 Feb 2015 11:45:46 -0500 Subject: Protect /analytics on Nginx with basic authentication, but allow access to .php and .js files?? Message-ID: Hey folks, Nginx noob here.
I also posted here with no luck yet: http://forum.piwik.org/read.php?2,123492 I have Piwik setup and running on a Nginx webserver that I protected with HTTP basic authentication, as seen below. location /analytics { alias /var/www/piwik/; auth_basic "Restricted"; auth_basic_user_file /etc/nginx/pass; try_files $uri $uri/ /index.php; } location ~ ^/analytics(.+\.php)$ { alias /var/www/piwik$1; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } I have it protected, but it's prompting to login on every page, due to the piwik.php and piwik.js files (necessary for analytics) being in my protected directory. This is described on Piwik's website, below. "If you use HTTP Authentication (Basic or Digest) on your Piwik files, you should exclude piwik.php and piwik.js from this authentication, or visitors on your website would be prompted with the authentication popup." My question is: what kind of Nginx rule can I use to protect all files in that directory, besides those two? Is it possible to do a negative regex match on a location block? Any help would be appreciated! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256585,256585#msg-256585 From nginx-forum at nginx.us Wed Feb 11 20:08:04 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Wed, 11 Feb 2015 15:08:04 -0500 Subject: buffering / uploading large files In-Reply-To: <5075700.2N4nLyx4AH@vbart-laptop> References: <5075700.2N4nLyx4AH@vbart-laptop> Message-ID: <13d64a726a456a7367a7f1aa6ce11763.NginxMailingListEnglish@forum.nginx.org> I was using 1.7.9 and it was crashing so I now go by the stable version 1.6.2 per http://nginx.org/en/download.html. Whichever version I use, I will need the fastcgi_request_buffering directive patch. Thanks. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256378,256591#msg-256591 From francis at daoine.org Wed Feb 11 20:09:17 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Feb 2015 20:09:17 +0000 Subject: Need to remove folder name from URL In-Reply-To: References: <41dcfb734955ac7d2b0add808e315c9b.NginxMailingListEnglish@forum.nginx.org> <12bda6a703c7e8a3a00271676098f797.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150211200917.GF13461@daoine.org> On Wed, Feb 11, 2015 at 11:35:03AM -0500, strtwtsn wrote: Hi there, > Clicking the links on the page take me to the correct page on example.com, > but show example1.com in the addressbar. > > eg > > example1.com//history > > I need to hide the part of it. It sounds like you want a request for /history/one to be proxy_pass:ed to http://example.com//history/one, no? "location ^~ /history/", proxy_pass (http://nginx.org/r/proxy_pass), and then you are responsible for "fixing" any links on the page if they refer to "" at all. Which part doesn't work? f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Feb 11 20:21:30 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 11 Feb 2015 20:21:30 +0000 Subject: Protect /analytics on Nginx with basic authentication, but allow access to .php and .js files?? In-Reply-To: References: Message-ID: <20150211202130.GG13461@daoine.org> On Wed, Feb 11, 2015 at 11:45:46AM -0500, lmm5247 wrote: Hi there, > I have Piwik setup and running on a Nginx webserver that I protected with > HTTP basic authentication, as seen below. > > location /analytics { > alias /var/www/piwik/; > auth_basic "Restricted"; > auth_basic_user_file /etc/nginx/pass; > try_files $uri $uri/ /index.php; > } > I have it protected, but it's prompting to login on every page, due to the > piwik.php and piwik.js files (necessary for analytics) being in my protected > directory. This is described on Piwik's website, below. 
What actual requests are made that are challenged for authentication? Check your access_log for http 401. At a guess, it is just /analytics/piwik.js that you care about here. So: add location = /analytics/piwik.js {auth_basic off;} inside your "location /analytics {}" block. (This will try to serve the file "/var/www/piwik//piwik.js", given the above configuration.) > My question is: what kind of Nginx rule can I use to protect all files in > that directory, besides those two? Is it possible to do a negative regex > match on a location block? It is usually simpler to use positive matching. The nginx "location" rules usually make this possible. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed Feb 11 20:33:44 2015 From: nginx-forum at nginx.us (shmulik) Date: Wed, 11 Feb 2015 15:33:44 -0500 Subject: catching 408 response from upstream server In-Reply-To: <54D9F741.3060107@postecom.it> References: <54D9F741.3060107@postecom.it> Message-ID: <957963b98177d438f6da1e8c48899a93.NginxMailingListEnglish@forum.nginx.org> Thank you. As I wrote in reply to B. R., I'm trying to catch a 408 response from an upstream server and turn it into a 302 redirect. Since the upstream is the one answering with 408, I cannot control its logic by tweaking Nginx configuration. In my scenario, the client is "fast enough", as Nginx accepts its request and sends a request to the upstream. So theoretically Nginx should consider the client "fast enough" to answer it. So I still believe it should be valid to intercept this 408 response from the upstream and turn it into a different response toward the client (302 in my case). Still, I accept the fact that my logic might be wrong, and I'm open to hearing alternative reasoning. 
Thanks, Shmulik Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256530,256598#msg-256598 From nginx-forum at nginx.us Wed Feb 11 20:34:45 2015 From: nginx-forum at nginx.us (shmulik) Date: Wed, 11 Feb 2015 15:34:45 -0500 Subject: catching 408 response from upstream server In-Reply-To: References: Message-ID: <9493f2482f266fb935f9cdf1242cb12d.NginxMailingListEnglish@forum.nginx.org> Hi, Below is the debug_http log showing the same location handling upstream response of 409, and next of 408. You can see that the 408 response is handled differently: // ----------------409 -------------- // 2015/02/11 22:07:03 [debug] 20611#0: *3 http upstream process header 2015/02/11 22:07:03 [debug] 20611#0: *3 http proxy header done 2015/02/11 22:07:03 [debug] 20611#0: *3 finalize http upstream request: 409 2015/02/11 22:07:03 [debug] 20611#0: *3 finalize http proxy request 2015/02/11 22:07:03 [debug] 20611#0: *3 free rr peer 1 0 2015/02/11 22:07:03 [debug] 20611#0: *3 close http upstream connection: 12 2015/02/11 22:07:03 [debug] 20611#0: *3 http finalize request: 409, "/proxy/host_www.a.com/fallback_www.b.com/" a:1, c:1 2015/02/11 22:07:03 [debug] 20611#0: *3 http special response: 409, "/proxy/host_www.a.com/fallback_www.b.com/" // <-- handled as special response 2015/02/11 22:07:03 [debug] 20611#0: *3 http script var: "http://www.b.com/" 2015/02/11 22:07:03 [debug] 20611#0: *3 http log handler 2015/02/11 22:07:03 [debug] 20611#0: *3 HTTP/1.1 302 Moved Temporarily Server: nginx Date: Wed, 11 Feb 2015 20:07:03 GMT Content-Type: text/html Content-Length: 149 Connection: keep-alive Location: http://www.b.com/ // ------------- 408 -------------- // 2015/02/11 22:05:03 [debug] 20611#0: *1 http upstream process header 2015/02/11 22:05:03 [debug] 20611#0: *1 http proxy header done 2015/02/11 22:05:03 [debug] 20611#0: *1 finalize http upstream request: 408 2015/02/11 22:05:03 [debug] 20611#0: *1 finalize http proxy request 2015/02/11 22:05:03 [debug] 20611#0: *1 free rr peer 1 
0 2015/02/11 22:05:03 [debug] 20611#0: *1 close http upstream connection: 12 2015/02/11 22:05:03 [debug] 20611#0: *1 http finalize request: 408, "/proxy/host_www.a.com/fallback_www.b.com/" a:1, c:1 2015/02/11 22:05:03 [debug] 20611#0: *1 http terminate request count:1 // <-- handled by terminating the connection 2015/02/11 22:05:03 [debug] 20611#0: *1 http terminate cleanup count:1 blk:0 2015/02/11 22:05:03 [debug] 20611#0: *1 http posted request: "/proxy/host_www.a.com/fallback_www.b.com/" 2015/02/11 22:05:03 [debug] 20611#0: *1 http terminate handler count:1 2015/02/11 22:05:03 [debug] 20611#0: *1 http request count:1 blk:0 2015/02/11 22:05:03 [debug] 20611#0: *1 http close request I've tried to follow this in the code (by the way, I'm using version 1.6.2, sorry I didn't mention it earlier). It seems that response code 408 is indeed handled differently. In the function "ngx_http_finalize_request" I can see that code 408 is handled by calling "ngx_http_terminate_request", while other codes (like 409) are handled a few lines later with "ngx_http_special_response_handler", which also handles the error_page directive. I've read in a different post in this forum that Nginx, by design, terminates the connection when it's configured to respond with a 408 response code - for example when using "return 408" in a location. The post can be found here: http://forum.nginx.org/read.php?2,173195,173235#msg-173235 However, in my case I'm not generating a 408 response in Nginx. I'm trying to catch a 408 response from the upstream server and turn it into a redirect. As you said, the documentation doesn't say anything about error_page handling response code 408 differently, so I can't figure out whether this behavior is intentional or not. If intentional - I'd really like to understand the logic behind it; perhaps I'm looking at it all wrong. If it's not intentional - I'd really like to find a way around this, to return a redirect to the client. 
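[Editorial note: the configuration being described could be reconstructed roughly as below. This is a hypothetical sketch: the fallback host is taken from the log excerpt above, while the upstream name and location layout are assumptions.]

```nginx
location /proxy/ {
    proxy_pass http://backend;      # assumed upstream definition
    proxy_intercept_errors on;      # let error_page act on upstream errors

    # Works as expected for 409: nginx's special-response handling
    # turns it into the 302 below.  For an upstream 408 on 1.6.x,
    # nginx instead terminates the request - the behaviour under
    # discussion in this thread.
    error_page 408 409 = @fallback;
}

location @fallback {
    return 302 http://www.b.com/;
}
```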
Thank you for your patience and help, ShmulikB Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256530,256599#msg-256599 From nginx-forum at nginx.us Wed Feb 11 21:04:49 2015 From: nginx-forum at nginx.us (TaiSHi) Date: Wed, 11 Feb 2015 16:04:49 -0500 Subject: Typo in proxy_cache_key documentation Message-ID: <089b7375d04149ec848781c602c0d778.NginxMailingListEnglish@forum.nginx.org> Default isn't $scheme$proxy_host$request_uri, but rather $scheme://$proxy_host$request_uri Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256600,256600#msg-256600 From nginx-forum at nginx.us Wed Feb 11 22:53:38 2015 From: nginx-forum at nginx.us (sdenthumdas) Date: Wed, 11 Feb 2015 17:53:38 -0500 Subject: GET requests returning 404 Message-ID: <6329593fa9d48fa4c54ffbf8446942ad.NginxMailingListEnglish@forum.nginx.org> Hi, We are using Play for the backend and Nginx to serve requests for static files locally. Below is the config that we are using upstream backend { server x.x.x.x:9000; } server { listen 0.0.0.0:8082; server_name localhost; location /client { root /static; expires off; sendfile off; } location / { proxy_set_header Access-Control-Allow-Origin *; proxy_set_header 'Access-Control-Allow-Credentials' 'true'; proxy_set_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin'; proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, HEAD'; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://backend; proxy_redirect off; } } Some requests like "GET /api/alerts" return 404. Now here is the weird part. 
When I update the configuration like below, basically copying the same config for this specific uri path, I get 200 OK back upstream backend { server x.x.x.x:9000; } server { listen 0.0.0.0:8082; server_name localhost; location /client { root /static; expires off; sendfile off; } location /api { proxy_set_header Access-Control-Allow-Origin *; proxy_set_header 'Access-Control-Allow-Credentials' 'true'; proxy_set_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin'; proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, HEAD'; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://backend; proxy_redirect off; } location / { proxy_set_header Access-Control-Allow-Origin *; proxy_set_header 'Access-Control-Allow-Credentials' 'true'; proxy_set_header 'Access-Control-Allow-Headers' 'X-Requested-With,Accept,Content-Type, Origin'; proxy_set_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS, PUT, DELETE, HEAD'; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-NginX-Proxy true; proxy_pass http://backend; proxy_redirect off; } } what could be the issue? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256601,256601#msg-256601 From neubyr at gmail.com Thu Feb 12 00:07:05 2015 From: neubyr at gmail.com (neubyr) Date: Wed, 11 Feb 2015 16:07:05 -0800 Subject: Intermittent 500 errors with Nginx reverse proxy for Apache PHP FPM Message-ID: I have Nginx as reverse proxy in front of Apache. Nginx is handling most of the redirects and static content. Apache is handling PHP requests using fastcgi php-fpm. This setup is giving intermittent 500 errors, however, there are no errors when Apache is serving traffic directly. 
Below are some key parameters: * Apache is using the worker MPM. ServerLimit 2048 ThreadLimit 100 StartServers 10 MinSpareThreads 30 MaxSpareThreads 100 ThreadsPerChild 64 MaxClients 2048 MaxRequestsPerChild 5000 Nginx is using 2 worker processes and worker_connections is 1024. Proxy connections are configured using proxy_pass and not upstream. Keep-alive is disabled on both Apache and Nginx. Apache access logs show a 500 return code, but the Apache error logs don't contain any information. php-fpm logs are empty as well. Nginx debug logs indicate: 2015/02/10 21:21:39 [debug] 10657#0: connect to 127.0.0.1:8080, fd:60 #50 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream connect: -2 2015/02/10 21:21:39 [debug] 10657#0: *49 posix_memalign: 000000000129B290:128 @16 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer add: 60: 60000:1423632159180 2015/02/10 21:21:39 [debug] 10657#0: *49 http finalize request: -4, "/reviews/truelist?" a:1, c:2 2015/02/10 21:21:39 [debug] 10657#0: *49 http request count:2 blk:0 2015/02/10 21:21:39 [debug] 10657#0: *49 post event 00000000012516A8 2015/02/10 21:21:39 [debug] 10657#0: *49 post event 0000000001251710 2015/02/10 21:21:39 [debug] 10657#0: *49 delete posted event 0000000001251710 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream request: "/reviews/truelist?" 
2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream send request handler 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream send request 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer buf fl:1 s:369 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer in: 000000000129C1D8 2015/02/10 21:21:39 [debug] 10657#0: *49 writev: 369 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer out: 0000000000000000 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer del: 60: 1423632159180 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer add: 60: 60000:1423632159181 2015/02/10 21:21:39 [debug] 10657#0: *49 delete posted event 00000000012516A8 2015/02/10 21:21:39 [debug] 10657#0: *49 http run request: "/reviews/truelist?" 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream check client, write event:1, "/reviews/truelist" 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream recv(): -1 (11: Resource temporarily unavailable) 2015/02/10 21:21:39 [debug] 10657#0: post event 00000000012362B0 Another thing I noticed is that http.workers have dropped from 30-40 workers to 2 workers with nginx. When Apache is serving traffic without Nginx, there are no 500 errors, but it has more workers active. Any pointers on debugging 500 errors will be really helpful. -- Thanks, N -------------- next part -------------- An HTML attachment was scrubbed... URL: From steve at greengecko.co.nz Thu Feb 12 02:10:00 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Thu, 12 Feb 2015 15:10:00 +1300 Subject: Intermittent 500 errors with Nginx reverse proxy for Apache PHP FPM In-Reply-To: References: Message-ID: <1423707000.14176.271.camel@steve-new> If the apache access logs are showing a 500, then it's pointing to a PHP problem isn't it??? What's in the php-fpm logs? BTW if you're running php as fpm, why are you using apache at all? I certainly don't. Cheers, Steve On Wed, 2015-02-11 at 16:07 -0800, neubyr wrote: > > I have Nginx as reverse proxy in front of Apache. 
Nginx is handling > most of the redirects and static content. Apache is handling PHP > requests using fastcgi php-fpm. This setup is giving intermittent 500 > errors, however, there are no errors when Apache is serving traffic > directly. > > > Below are some key parameters: > * Apache is using worker MPMs. > ServerLimit 2048 > ThreadLimit 100 > StartServers 10 > MinSpareThreads 30 > MaxSpareThreads 100 > ThreadsPerChild 64 > MaxClients 2048 > MaxRequestsPerChild 5000 > > > Nginx is using 2 worker processes and worker_connections is 1024. > Proxy connections are configured using proxy_pass and not upstream. > > > > > Keep-alive is disabled on both Apache and Nginx. > > > Apache access logs show 500 return code, but apache error logs don't > contain any information. php-fpm logs are empty as well. > > > > > > Nginx debug logs indicates: > > > 2015/02/10 21:21:39 [debug] 10657#0: connect to 127.0.0.1:8080, fd:60 > #50 > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream connect: -2 > 2015/02/10 21:21:39 [debug] 10657#0: *49 posix_memalign: > 000000000129B290:128 @16 > 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer add: 60: > 60000:1423632159180 > 2015/02/10 21:21:39 [debug] 10657#0: *49 http finalize request: -4, > ?/reviews/truelist?? a:1, c:2 > 2015/02/10 21:21:39 [debug] 10657#0: *49 http request count:2 blk:0 > 2015/02/10 21:21:39 [debug] 10657#0: *49 post event 00000000012516A8 > 2015/02/10 21:21:39 [debug] 10657#0: *49 post event 0000000001251710 > 2015/02/10 21:21:39 [debug] 10657#0: *49 delete posted event > 0000000001251710 > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream request: > ?/reviews/truelist?? 
> 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream send request > handler > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream send request > 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer buf fl:1 s:369 > 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer in: > 000000000129C1D8 > 2015/02/10 21:21:39 [debug] 10657#0: *49 writev: 369 > 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer out: > 0000000000000000 > 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer del: 60: > 1423632159180 > 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer add: 60: > 60000:1423632159181 > 2015/02/10 21:21:39 [debug] 10657#0: *49 delete posted event > 00000000012516A8 > 2015/02/10 21:21:39 [debug] 10657#0: *49 http run request: > ?/reviews/truelist?? > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream check client, > write event:1, ?/reviews/truelist? > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream recv(): -1 (11: > Resource temporarily unavailable) > 2015/02/10 21:21:39 [debug] 10657#0: post event 00000000012362B0 > > > > > > > Another thing I noticed is that http.workers have dropped from 30-40 > workers to 2 workers with nginx. When Apache is serving traffic > without Nginx, there are no 500 errors, but it has more workers > active. > > > > > > > Any pointers on debugging 500 errors will be really helpful. 
> > > > > -- > Thanks, > N > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From neubyr at gmail.com Thu Feb 12 07:47:49 2015 From: neubyr at gmail.com (neubyr) Date: Wed, 11 Feb 2015 23:47:49 -0800 Subject: Intermittent 500 errors with Nginx reverse proxy for Apache PHP FPM In-Reply-To: <1423707000.14176.271.camel@steve-new> References: <1423707000.14176.271.camel@steve-new> Message-ID: As errors are seen only when Nginx is in front of Apache, I thought some configuration parameter tuning is needed when Nginx is in the picture. My plan is to migrate to Nginx completely, but I can't do it right now - N On Wed, Feb 11, 2015 at 6:10 PM, Steve Holdoway wrote: > If the apache access logs are showing a 500, then it's pointing to a PHP > problem isn't it??? What's in the php-fpm logs? > > BTW if you're running php as fpm, why are you using apache at all? I > certainly don't. > > Cheers, > > > Steve > > On Wed, 2015-02-11 at 16:07 -0800, neubyr wrote: > > > > I have Nginx as reverse proxy in front of Apache. Nginx is handling > > most of the redirects and static content. Apache is handling PHP > > requests using fastcgi php-fpm. This setup is giving intermittent 500 > > errors, however, there are no errors when Apache is serving traffic > > directly. > > > > > > Below are some key parameters: > > * Apache is using worker MPMs. > > ServerLimit 2048 > > ThreadLimit 100 > > StartServers 10 > > MinSpareThreads 30 > > MaxSpareThreads 100 > > ThreadsPerChild 64 > > MaxClients 2048 > > MaxRequestsPerChild 5000 > > > > > > Nginx is using 2 worker processes and worker_connections is 1024. > > Proxy connections are configured using proxy_pass and not upstream. > > > > > > > > > > Keep-alive is disabled on both Apache and Nginx. 
> > > > > > Apache access logs show 500 return code, but apache error logs don't > > contain any information. php-fpm logs are empty as well. > > > > > > > > > > > > Nginx debug logs indicates: > > > > > > 2015/02/10 21:21:39 [debug] 10657#0: connect to 127.0.0.1:8080, fd:60 > > #50 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream connect: -2 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 posix_memalign: > > 000000000129B290:128 @16 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer add: 60: > > 60000:1423632159180 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http finalize request: -4, > > ?/reviews/truelist?? a:1, c:2 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http request count:2 blk:0 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 post event 00000000012516A8 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 post event 0000000001251710 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 delete posted event > > 0000000001251710 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream request: > > ?/reviews/truelist?? > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream send request > > handler > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream send request > > 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer buf fl:1 s:369 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer in: > > 000000000129C1D8 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 writev: 369 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 chain writer out: > > 0000000000000000 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer del: 60: > > 1423632159180 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 event timer add: 60: > > 60000:1423632159181 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 delete posted event > > 00000000012516A8 > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http run request: > > ?/reviews/truelist?? > > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream check client, > > write event:1, ?/reviews/truelist? 
> > 2015/02/10 21:21:39 [debug] 10657#0: *49 http upstream recv(): -1 (11: > > Resource temporarily unavailable) > > 2015/02/10 21:21:39 [debug] 10657#0: post event 00000000012362B0 > > > > > > > > > > > > Another thing I noticed is that http.workers have dropped from 30-40 > > workers to 2 workers with nginx. When Apache is serving traffic > > without Nginx, there are no 500 errors, but it has more workers > > active. > > > > > > > > > > > > > > Any pointers on debugging 500 errors will be really helpful. > > > > > > > > > > -- > > Thanks, > > N > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- > Steve Holdoway BSc(Hons) MIITP > http://www.greengecko.co.nz > Linkedin: http://www.linkedin.com/in/steveholdoway > Skype: sholdowa > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 12 08:06:26 2015 From: nginx-forum at nginx.us (strtwtsn) Date: Thu, 12 Feb 2015 03:06:26 -0500 Subject: Need to remove folder name from URL In-Reply-To: <20150211200917.GF13461@daoine.org> References: <20150211200917.GF13461@daoine.org> Message-ID: <6f1481503536e53d52d3022af318e27f.NginxMailingListEnglish@forum.nginx.org> We've got a website that is available at example.com. A subsection of the page is available at example.com/folder_name/page1 example.com/folder_name/page2 etc We need the pages below folder_name to be accessible at example1.com/folder_name/page1 example1.com/folder_name/page2 etc This is working, apart from the fact that I've been asked to make the pages below folder_name accessible without the folder_name. So it would be example1.com/page1 example1.com/page2. They will still be below the folder_name, but the folder name must not show in the address bar. 
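[Editorial note: one way to sketch this requirement in nginx, letting the URI part of proxy_pass do the prefix translation. The hostnames and folder_name come from the thread; everything else is an assumption, and links inside the returned pages that point at /folder_name/... would still need fixing separately.]

```nginx
server {
    server_name example1.com;

    location / {
        # The URI part of proxy_pass replaces the matched "/" prefix,
        # so a request for /page1 is proxied to /folder_name/page1
        # while the address bar keeps showing example1.com/page1.
        proxy_pass http://example.com/folder_name/;
        proxy_set_header Host example.com;
    }
}
```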
Hope this helps. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256577,256612#msg-256612 From julian at simioni.org Thu Feb 12 10:02:34 2015 From: julian at simioni.org (Julian Simioni) Date: Thu, 12 Feb 2015 11:02:34 +0100 Subject: Does ssl_trusted_certificate actually send certs to client? Message-ID: <20150212095752.GA4279@laguna.lan> Hi all, I have an Nginx 1.7.6 server serving HTTPS content, and I've been tweaking the configuration lately to ensure it is secure and performant[1]. One component of this is ensuring that the intermediate certificate from my CA is sent along to any clients connecting to my server, to ensure they don't have to fetch it from somewhere else and risk at best a longer connection time, and at worst some sort of (unlikely) tampering. The traditional way to do this, as far as I'm aware, is to concatenate any intermediate certs, as well as the actual certificate for your domain, into one file, and then tell Nginx about it using the ssl_client_certificate directive. This works great, but I wanted to see if there was a way to keep the different certificates in different files, just for clarity and ease of maintenance. I put the intermediate cert in another file and told Nginx about it with the ssl_trusted_certificate directive, and everything worked great! However, the docs[2] for ssl_trusted_certificate specifically state the following: In contrast to the certificate set by ssl_client_certificate, the list of these certificates will not be sent to clients. This seems to be at odds with what I'm experiencing. At first I thought it was possible that the certificate was sent because I had ssl_stapling set to on, to ensure OCSP responses are also included, but turning that option off still sends the intermediate cert when new connections are being initialized. Only removing the ssl_trusted_certificate line from my config causes the SSL Test to show that not all intermediate certs are sent. 
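[Editorial note: for comparison, the two configurations being contrasted in this message look roughly like the sketch below; the file paths are hypothetical.]

```nginx
# Variant A: the conventional approach - the leaf certificate plus
# intermediates concatenated into a single file loaded via ssl_certificate.
ssl_certificate     /etc/nginx/ssl/example.com.fullchain.pem;
ssl_certificate_key /etc/nginx/ssl/example.com.key;

# Variant B: the setup described above - the intermediate kept in a
# separate file, relying on ssl_trusted_certificate to supply it.
# ssl_certificate         /etc/nginx/ssl/example.com.pem;
# ssl_certificate_key     /etc/nginx/ssl/example.com.key;
# ssl_trusted_certificate /etc/nginx/ssl/intermediate.pem;
```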
A nearly un-modified copy of my configs can be found on Github[3], and I would very much like to know if my configuration is working because I am misunderstanding something (by far the most likely), because the docs are wrong, because there is a bug in Nginx, or something else. Thanks, Julian [1] Mostly by following the SSL Labs Server Test https://www.ssllabs.com/ssltest/index.html [2] http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_trusted_certificate [3] https://github.com/orangejulius/https-on-nginx/blob/master/ssl.conf and https://github.com/orangejulius/https-on-nginx/blob/master/example-site.conf -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 819 bytes Desc: Digital signature URL: From hobson42 at gmail.com Thu Feb 12 12:06:52 2015 From: hobson42 at gmail.com (Ian) Date: Thu, 12 Feb 2015 12:06:52 +0000 Subject: keep-alive message flood. Message-ID: <54DC975C.4090207@gmail.com> Hi all, I am running an application that uses the nginx_http_push_module-0.712 push module. 
The relevant set up is server { listen 443 default ssl; ## SSL Certs ssl on; ssl_certificate /ssl_keys/coachmaster.co.uk.crt; ssl_certificate_key /ssl_keys/coachmaster.co.uk.key; ssl_ciphers HIGH:!ADH:!MD5; ssl_prefer_server_ciphers on; ssl_protocols TLSv1; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; # server_name example.co.uk www.example.co.uk; root /var/www/example.co.uk/htsecure; access_log /var/www/example.co.uk/access.log; index index.php; # # serve php via fastcgi if it exists location ~ \.php$ { # try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param CENTRAL_ROOT $document_root; fastcgi_param RESELLER_ROOT $document_root; fastcgi_param ENVIRONMENT production; fastcgi_param HTTPS ON; } # serve static files try_files $uri $uri/ /index.php ; expires 30m; # set up publish/subscribe push_store_messages on; location /publish { push_publisher; set $push_channel_id $arg_id; push_message_timeout 10s; push_max_message_buffer_length 30; } location /activity { push_subscriber; push_subscriber_concurrency broadcast; set $push_channel_id $arg_id; default_type text/plain; } } The trouble is that this has been working fine for about a year, but a new user is reporting problems. Everyone else continues to have no problems. He is reporting long delays in both IE11 and Chrome. He is in Germany behind a corporate proxy/firewall. When he tries it from home, all works OK, so the proxy/firewall is prime suspect. For political reasons (quoting security - ha!) the proxy cannot be altered. Looking at the server logs from when he was having problems, he is sending a lot of AJAX requests to the /activity URL, and receiving what appear to be empty replies (with reply code 200 OK). These rattle through at top speed. 
After anything from 10 to 200 such exchanges, he gets a larger message (also with 200 OK) that is so delayed it crashes the application with a "Missing messages" report. The jQuery call his code is making already has nocache set, so the URL contains an "&_=" parameter to break any caching. Oddly, when I added a similar tweak to break the cache, I managed to trigger the same behaviour. There was no proxy in my setup, and I have disabled browser caching. My round trip was 6ms, so I did not see long enough delays to crash things. However, I could not find out what was causing the problem, so I took out my tweak. It only went in because I suspected the nocache was being ignored. Ideas anyone? I'm stumped. Anyone understand how a proxy could mess things up for Nginx? How can I prove it's the proxy? Crucially, how can I compensate? Thanks Ian -- Ian Hobson Mid Auchentiber, Auchentiber, Kilwinning, North Ayrshire KA13 7RR Tel: 0203 287 1392 Preparing eBooks for Kindle and ePub formats to give the best reader experience. From mdounin at mdounin.ru Thu Feb 12 13:11:48 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 Feb 2015 16:11:48 +0300 Subject: Does ssl_trusted_certificate actually send certs to client? In-Reply-To: <20150212095752.GA4279@laguna.lan> References: <20150212095752.GA4279@laguna.lan> Message-ID: <20150212131148.GS19012@mdounin.ru> Hello! On Thu, Feb 12, 2015 at 11:02:34AM +0100, Julian Simioni wrote: > Hi all, > I have an Nginx 1.7.6 server serving HTTPS content, and I've been > tweaking the configuration lately to ensure it is secure and > performant[1]. > > One component of this is ensuring that the intermediate certificate from > my CA is sent along to any clients connecting to my server, to ensure > they don't have to fetch it from somewhere else and risk at best a > longer connection time, and at worst some sort of (unlikely) tampering. 
>
> The traditional way to do this, as far as I'm aware, is to concatenate
> any intermediate certs, as well as the actual certificate for your
> domain, into one file, and then tell Nginx about it using the
> ssl_client_certificate directive. This works great, but I wanted to see
> if there was a way to keep the different certificates in different
> files, just for clarity and ease of maintenance. I put the intermediate
> cert in another file and told Nginx about it with the
> ssl_trusted_certificate directive, and everything worked great!

Both ssl_client_certificate and ssl_trusted_certificate will load certificates into the trusted store, and OpenSSL will use these certs to build the certificate chain at runtime if one wasn't explicitly provided. That is, it's a [mis]feature of the OpenSSL library which leads to such behaviour. While one can use this to construct certificate chains for now, it's not a recommended approach because:

- it consumes more CPU power, as the chain will be constructed at runtime;
- it is not something we (at least I) consider to be a good feature, and if/when it becomes possible to stop OpenSSL from doing this, we'll do so.

> However, the docs[2] for ssl_trusted_certificate specifically state the
> following:
>
> In contrast to the certificate set by ssl_client_certificate, the list
> of these certificates will not be sent to clients.

This note is not about the certificate chain sent to the client, but about the _list_ of certificates sent to clients when requesting client certificates. See RFC5246, 7.4.4. Certificate Request, https://tools.ietf.org/html/rfc5246#section-7.4.4 - the list is sent in the certificate_authorities field of the Certificate Request message to let clients know which authorities are accepted by the server.

-- Maxim Dounin http://nginx.org/

From eliezer at ngtech.co.il Thu Feb 12 15:13:16 2015 From: eliezer at ngtech.co.il (Eliezer Croitoru) Date: Thu, 12 Feb 2015 17:13:16 +0200 Subject: keep-alive message flood.
In-Reply-To: <54DC975C.4090207@gmail.com> References: <54DC975C.4090207@gmail.com> Message-ID: <54DCC30C.8040508@ngtech.co.il>

Hey Ian,

I am not an nginx expert, but I would start by mimicking a similar setup with a proxy to understand how it runs. If you can get tcpdump captures from both the client and server sides, you might be able to see both sides of the issue. If your nginx server is doing its job, you will probably see that in the server-side dumps.

In any case, once you have a tcpdump capture you may even discover that the proxy is doing something to the request, which will lead to understanding what the issue actually is. In general, a proxy can mess things up in more ways than you can anticipate. If you have both the client-side and server-side request/response dumps, you will have the bigger picture in hand.

If you need the exact tcpdump commands, I will be happy to help you with them.

Eliezer

On 12/02/2015 14:06, Ian wrote:
> Ideas anyone? I'm stumped.
>
> Anyone understand how a proxy could mess things up for Nginx? How
> can I prove its the proxy? Crucially how can I compensate?
>
> Thanks
> Ian

From nginx-forum at nginx.us Thu Feb 12 21:11:21 2015 From: nginx-forum at nginx.us (lmm5247) Date: Thu, 12 Feb 2015 16:11:21 -0500 Subject: Protect /analytics on Nginx with basic authentication, but allow access to .php and .js files?? In-Reply-To: <20150211202130.GG13461@daoine.org> References: <20150211202130.GG13461@daoine.org> Message-ID:

> What actual requests are made that are challenged for
> authentication? Check your access_log for http 401.
>
> At a guess, it is just /analytics/piwik.js that you care about here.
>
> So: add
>
> location = /analytics/piwik.js {auth_basic off;}
>
> inside your "location /analytics {}" block.
>
> (This will try to serve the file "/var/www/piwik//piwik.js", given the
> above configuration.)

Wow. I feel so dumb. That worked perfectly!
Below is the config I'm using to turn off authentication for piwik.js as well as .php files.

location /analytics {
    alias /var/www/piwik/;
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/pass;
    try_files $uri $uri/ /index.php;

    location = /analytics/piwik.js {
        auth_basic off;
    }

    location ~* ^/analytics(.+\.php)$ {
        auth_basic off;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

Thank you!!!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256585,256630#msg-256630

From francis at daoine.org Thu Feb 12 22:01:52 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 12 Feb 2015 22:01:52 +0000 Subject: Need to remove folder name from URL In-Reply-To: <6f1481503536e53d52d3022af318e27f.NginxMailingListEnglish@forum.nginx.org> References: <20150211200917.GF13461@daoine.org> <6f1481503536e53d52d3022af318e27f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150212220152.GH13461@daoine.org>

On Thu, Feb 12, 2015 at 03:06:26AM -0500, strtwtsn wrote:

Hi there,

> We need the pages below folder_name to be accessible at
> example1.com/folder_name/page1 example1.com/folder_name/page2 etc

location /folder_name/ {
    proxy_pass http://example.com;
}

So when the user makes a request of nginx for /folder_name/page1, nginx does proxy_pass to example.com/folder_name/page1 and returns the content, yes?

> This is working apart from i've been asked to make the pages below
> folder_name accessible without the folder_name
>
> So it would be example1.com/page1 example1.com/page2. They will still be
> below the folder_name, but the folder name must not show in the address
> bar.

So when the user makes a request of nginx for /page1, nginx should do proxy_pass to example.com/folder_name/page1 and return the content, yes?

If "no", what should nginx do when the user requests /page1?

> Hope this helps.

I think I still understand the same thing.
And I do not understand where the problem is.

location / {
    proxy_pass http://example.com/folder_name/;
}

and then presumably extra locations for the requests that should not be handled by this proxy_pass mechanism.

What happened when you tried it?

f -- Francis Daly francis at daoine.org

From nginx-forum at nginx.us Thu Feb 12 22:11:13 2015 From: nginx-forum at nginx.us (noahlh) Date: Thu, 12 Feb 2015 17:11:13 -0500 Subject: Request for thoughts / feedback: Guide on Nginx Monitoring Message-ID:

Hi everyone - first off, many thanks for the wealth of knowledge on the forum / mailing list. I've been learning the nitty gritty of Nginx over the past few months and this has been a hugely valuable resource.

I've put together a guide on monitoring production Nginx systems (along with some background information on metrics and system variables that goes a bit deeper than the official docs.) I'd love to get some feedback on things I've gotten wrong / right / sorta right. Any feedback at all is appreciated.

Also - hope this info is helpful to someone in some context either now or in the future.
https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide
https://www.scalyr.com/community/guides/an-in-depth-guide-to-nginx-metrics

--Noah

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256634,256634#msg-256634

From nginx-forum at nginx.us Fri Feb 13 06:07:15 2015 From: nginx-forum at nginx.us (malintha) Date: Fri, 13 Feb 2015 01:07:15 -0500 Subject: Nginx add location details to URL when we stop decoding URL Message-ID: <475983802d17c1639c85b9877d86b288.NginxMailingListEnglish@forum.nginx.org>

I am accessing a URL which has encoded characters:

http:....../malintha/tel%3A%2B6281808147137

location /gateway/ {
    proxy_set_header X-Forwarded-Host $host;
    proxy_set_header X-Forwarded-Server $host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $http_host;
    proxy_read_timeout 5m;
    proxy_send_timeout 5m;
    proxy_pass http://10.1.1.1:9443$request_uri/;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
}

I added $request_uri at the end of the proxy_pass URL as I have to stop decoding by nginx. When I configure it like this, nginx resolves the request to (decoding stopped, but an incorrect URL - /gateway/ is added):

/gateway/malintha/tel%3A%2B6281808147137

but when I remove $request_uri it resolves to the correct URL (but with decoding):

malintha/tel:+6281808147137

How can I resolve this?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256639,256639#msg-256639

From kevin_wangxk at sina.cn Fri Feb 13 06:55:38 2015 From: kevin_wangxk at sina.cn (kevin_wangxk at sina.cn) Date: Fri, 13 Feb 2015 14:55:38 +0800 Subject: keep-alive confused Message-ID: <20150213065538.E7A7A5B012B@webmail.sinamail.sina.com.cn>

Hello,

I use nginx as a reverse proxy, and I have found a question that confuses me.

In the upstream block I use the keepalive directive to keep a connection pool. As everyone knows, that can reduce TCP handshake cost.
When I send a request with a "Connection: keep-alive" header using the HTTP/1.0 protocol to nginx, I find nginx transmits the request to the backend with a "Connection: close" header. As you know, a "Connection: close" header will lead the backend to close the connection, so the keepalive directive cannot work.

For example:

////////////////////////////////////////////////////////////////////////////////////////////////////////////////
GET /2/proxyauth/auth.json?source=84883&&access_token=2.00JGc2QCCPl5PDe1e566dDfAD&object_id=1022:100101B2094650D064A0F5469D&uid=1042:2673106753&auth_type=option HTTP/1.0
Connection: keep-alive
Client-Uri: /2/proxy/darwin/movie/mblog_list.json?source=848343583&&access_token=2.00JGc2QCCPl5PDe1e566d3b9NzDfAD&object_id=1022:100101B2094650D064A0F5469D&uid=1042:2673106753
Client-Host: localhost
Host: oauthweibo
User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
Accept: */*

HTTP/1.1 200 OK
Server: nginx/1.6.2
Date: Fri, 13 Feb 2015 06:47:39 GMT
Content-Type: application/json;charset=UTF-8
Connection: close
Api-Server-IP: 10.75.25.127
Vary: Accept-Encoding
/////////////////////////////////////////////////////////////////////////////////////

Has anybody met this problem? Ideas anyone? Thanks very much.

Also, when nginx transmits requests to the backend, it uses the HTTP/1.0 protocol. Why is that? Thanks.

------------------ Regards, xiaokai

From nginx-forum at nginx.us Fri Feb 13 07:34:38 2015 From: nginx-forum at nginx.us (mex) Date: Fri, 13 Feb 2015 02:34:38 -0500 Subject: Request for thoughts / feedback: Guide on Nginx Monitoring In-Reply-To: References: Message-ID: <45aaf3d07fe780a1222edb5eb529c81f.NginxMailingListEnglish@forum.nginx.org>

Hi Noah,

thanx for your guides; interesting read.
For everyone else: there is a nagios-plugin to monitor the stub_status output:

https://bitbucket.org/maresystem/dogtown-nagios-plugins/overview

Besides monitoring, it also extracts all data from the status page and returns it as performance data for graphing and as sources for warning/critical notifications.

Performance data:

NginxStatus.Check OK | ac=1;acc=64; han=64; req=64; err=0; rpc=1; rps=0; cps=0; dreq=1; dcon=1; read=0; writ=1; wait=0; ct=6ms;

ac -> active connections
acc -> totally accepted connections
han -> totally handled connections
req -> total requests
err -> diff between acc - han, thus errors
rpc -> requests per connection (req/han)
rps -> requests per second (calculated from last checkrun vs actual values)
cps -> connections per second (calculated from last checkrun vs actual values)
dreq -> request delta from last checkrun vs actual values
dcon -> accepted-connection delta from last checkrun vs actual values
read -> reading requests from clients
writ -> reading request body, processing request, or writing response to a client
wait -> keep-alive connections, actually ac - (read + writ)
ct -> checktime (connection time) for this check

cheers, mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256634,256643#msg-256643

From nginx-forum at nginx.us Fri Feb 13 07:45:35 2015 From: nginx-forum at nginx.us (nginxuser100) Date: Fri, 13 Feb 2015 02:45:35 -0500 Subject: fastcgi_request_buffering directive for 1.6.2? Message-ID: <2d7d69ef26c171cad1a1d33993dd54ff.NginxMailingListEnglish@forum.nginx.org>

Hi, how do I get a patch for the fastcgi_request_buffering directive support for nginx version 1.6.2 or any other version going forward? Thank you.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256644,256644#msg-256644

From mdounin at mdounin.ru Fri Feb 13 12:59:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Feb 2015 15:59:39 +0300 Subject: Nginx add location details to URL when we stop decoding URL In-Reply-To: <475983802d17c1639c85b9877d86b288.NginxMailingListEnglish@forum.nginx.org> References: <475983802d17c1639c85b9877d86b288.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150213125939.GA19012@mdounin.ru>

Hello!

On Fri, Feb 13, 2015 at 01:07:15AM -0500, malintha wrote:

> I am accessing a URL which has encode characters
>
> http:....../malintha/tel%3A%2B6281808147137
>
> location /gateway/ {
> proxy_set_header X-Forwarded-Host $host;
> proxy_set_header X-Forwarded-Server $host;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header Host $http_host;
> proxy_read_timeout 5m;
> proxy_send_timeout 5m;
> proxy_pass http://10.1.1.1:9443$request_uri/;
> proxy_set_header Upgrade $http_upgrade;
> proxy_set_header Connection "upgrade";
> proxy_http_version 1.1;
> }
>
> I added $request_uri at then end of the proxy_pass URL as I have to stop
> decoding by nginx.
>
> When I configure like this nginx resolve it to (stop decoding but incorrect
> URL - adding /gateway/)
>
> /gateway/malintha/tel%3A%2B6281808147137

What makes you think that nginx is _adding_ "/gateway/"? As per the location specification, the $request_uri is expected to contain "/gateway/" in it.

> but When I remove $request_uri it resolve to correct URL (but with
> decoding)

With $request_uri in proxy_pass nginx will assume you've specified the full URI yourself and it shouldn't be changed. When you remove $request_uri nginx will follow its normal logic to replace the matching part of $uri with the URI part specified in the proxy_pass directive, and will encode the result as appropriate. See http://nginx.org/r/proxy_pass for details.
If you want nginx to preserve the encoding as it was in the original client request, and want to strip the "/gateway/" part from the URI at the same time, you may do so by manually removing the "/gateway/" from the $request_uri variable, like this (untested):

set $modified_uri $request_uri;
if ($modified_uri ~ "^/gateway(/.*)") {
    set $modified_uri $1;
}
proxy_pass http://upstream$modified_uri;

Note, though, that this will not work if some characters in the "/gateway/" part are escaped by the client.

-- Maxim Dounin http://nginx.org/

From mdounin at mdounin.ru Fri Feb 13 13:01:39 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Feb 2015 16:01:39 +0300 Subject: keep-alive confused In-Reply-To: <20150213065538.E7A7A5B012B@webmail.sinamail.sina.com.cn> References: <20150213065538.E7A7A5B012B@webmail.sinamail.sina.com.cn> Message-ID: <20150213130139.GB19012@mdounin.ru>

Hello!

On Fri, Feb 13, 2015 at 02:55:38PM +0800, kevin_wangxk at sina.cn wrote:

> hello,
>
> I use nginx as a reverse proxy, and I find a question that confused me.
>
> In upstream loction I use keepalive direction to keep a connection pool. As everyone knows,
> that can reduse tcp handshake cost. When I send request with "Connection: keep-alive" header
> using http /1.0 protocal to nginx, I find nginx transmits the request to backend with "Connection: close"
> header. You know, "Connection: close" header will lead backend close the connection, and keepalive
> direction cannt work.
>
> for example below:
> ////////////////////////////////////////////////////////////////////////////////////////////////////////////////
> GET /2/proxyauth/auth.json?source=84883&&access_token=2.00JGc2QCCPl5PDe1e566dDfAD&object_id=1022:100101B2094650D064A0F5469D&uid=1042:2673106753&auth_type=option HTTP/1.0
> Connection: keep-alive
> Client-Uri: /2/proxy/darwin/movie/mblog_list.json?source=848343583&&access_token=2.00JGc2QCCPl5PDe1e566d3b9NzDfAD&object_id=1022:100101B2094650D064A0F5469D&uid=1042:2673106753
> Client-Host: localhost
> Host: oauthweibo
> User-Agent: curl/7.22.0 (x86_64-pc-linux-gnu) libcurl/7.22.0 OpenSSL/1.0.1 zlib/1.2.3.4 libidn/1.23 librtmp/2.3
> Accept: */*
>
> HTTP/1.1 200 OK
> Server: nginx/1.6.2
> Date: Fri, 13 Feb 2015 06:47:39 GMT
> Content-Type: application/json;charset=UTF-8
> Connection: close
> Api-Server-IP: 10.75.25.127
> Vary: Accept-Encoding
> /////////////////////////////////////////////////////////////////////////////////////
>
> Anybody meet that problem? Ideas anyone? thanks very much.
>
> When nginx transmits requests to backend, it uses http /1.0 protocal. Why is that?

http://nginx.org/r/keepalive

-- Maxim Dounin http://nginx.org/

From black.fledermaus at arcor.de Fri Feb 13 13:20:29 2015 From: black.fledermaus at arcor.de (basti) Date: Fri, 13 Feb 2015 14:20:29 +0100 Subject: Fwd: Deny access to subfolder/files In-Reply-To: <54DDF92D.90706@unix-solution.de> References: <54DDF92D.90706@unix-solution.de> Message-ID: <54DDFA1D.9030509@arcor.de>

Hello,

I have a URL like

https://example.com/foo/doc/bar/filename.txt

I want to deny access to all files and folders in /doc/... and tried

location ~ ^/foo/(doc|etc|lib|log|sbin|var/cache|var/lib|var/log)/ {
    deny all;
}

It does not work; I can still download the file above. Can anyone help, please? Thanks!
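For reference, a minimal standalone sketch (not from the thread; the paths simply follow the example URL above) of how such a regex location interacts with the rest of a server block:

```nginx
server {
    listen 80;
    server_name example.com;
    root /var/www;

    # nginx first finds the longest matching prefix location, then checks
    # regex locations in order of appearance; the first matching regex
    # handles the request. So /foo/doc/bar/filename.txt is denied here ...
    location ~ ^/foo/(doc|etc|lib|log|sbin|var/cache|var/lib|var/log)/ {
        deny all;
    }

    # ... unless an earlier regex location, or a prefix location marked
    # with ^~, captures the request before this regex is ever consulted.
    location / {
        try_files $uri =404;
    }
}
```

If a request for /foo/doc/bar/filename.txt still succeeds, some other location in the real configuration is most likely matching it first.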
From francis at daoine.org Fri Feb 13 17:33:59 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 13 Feb 2015 17:33:59 +0000 Subject: Fwd: Deny access to subfolder/files In-Reply-To: <54DDFA1D.9030509@arcor.de> References: <54DDF92D.90706@unix-solution.de> <54DDFA1D.9030509@arcor.de> Message-ID: <20150213173359.GI13461@daoine.org>

On Fri, Feb 13, 2015 at 02:20:29PM +0100, basti wrote:

> https://example.com/foo/doc/bar/filename.txt
> location ~ ^/foo/(doc|etc|lib|log|sbin|var/cache|var/lib|var/log)/ {
> deny all;
> }

http://nginx.org/r/location

One request is handled in one location.

Which one location in your config will handle the request /foo/doc/bar/filename.txt ?

If it is the one you show, you will get http 403.

f -- Francis Daly francis at daoine.org

From nginx-forum at nginx.us Fri Feb 13 19:10:10 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 13 Feb 2015 14:10:10 -0500 Subject: Fwd: Deny access to subfolder/files In-Reply-To: <54DDFA1D.9030509@arcor.de> References: <54DDFA1D.9030509@arcor.de> Message-ID:

basti Wrote:
-------------------------------------------------------
> Hello,
>
> i have a URL like
>
> https://example.com/foo/doc/bar/filename.txt
>
> I want to deny access to all files and folders in /doc/...
> and try
> location ~ ^/foo/(doc|etc|lib|log|sbin|var/cache|var/lib|var/log)/ {
> deny all;
> }

Maybe something like:

map $uri $locn {
    default 0;
    ~*/foo/ 1;
    ~*/doc/ 1;
}

location / {
    if ($locn) {
        return 404;
    }
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256652,256663#msg-256663

From matthew.phelan at gawker.com Fri Feb 13 19:34:29 2015 From: matthew.phelan at gawker.com (Matthew Phelan) Date: Fri, 13 Feb 2015 14:34:29 -0500 Subject: Reporter looking to talk about the Silk Road Case / Special Agent Chris Tarbell Message-ID:

Hey all, esteemed members of this Nginx mailing list.
I'm a freelance reporter (former Onion headline writer and former chemical engineer) trying to gather some kind of technical consensus on a part of the Silk Road pretrial that seems to have become mired in needless ambiguity. Specifically, the prosecution's explanation for how they were able to locate the Silk Road's Icelandic server IP address.

You may have seen Australian hacker Nik Cubrilovic's long piece on how it, at least, appears that the government has submitted a deeply implausible scenario for how they came to locate the Silk Road server. Or Bruce Schneier's comments. Or someone else's. (The court records are hyperlinked in the article, but they can be found here and here, if you'd rather peruse them without Nik's logic prejudicing your own opinion. In addition, here's the opinion of defendant Ross Ulbricht's lawyer Josh Horowitz, himself a technical expert in this field, wherein he echoes Nik Cubrilovic's critical interpretation of the state's discovery disclosures.)

I'm hoping that your collective area of expertise in Nginx might allow some of you to comment on this portion of the case, ideally on the record, for an article I'm working on.

My goal is to amass many expert opinions on this. It seems like a very open and shut case that beat reporters covering it last October gave a little too much "He said. She said."-style false equivalency.

I know this is a cold call. PLEASED TO MEET YOU!

*Here, below, is the main question, I believe:*

This portion of the defense's expert criticism of the prosecution's testimony from former SA Chris Tarbell (at least) appears the most clear cut and definitive:

¶ 7. Without identification by the Government, it was impossible to pinpoint the 19 lines in the access logs showing the date and time of law enforcement access to the .49 server.

23. The "live-ssl" configuration controls access to the market data contained on the .49 server.
This is evident from the configuration line:

root /var/www/market/public

which tells the Nginx web server that the folder "public" contains the website content to load when visitors access the site.

24. The critical configuration lines from the live-ssl file are:

allow 127.0.0.1;
allow 62.75.246.20;
deny all;

These lines tell the web server to allow access from IP addresses 127.0.0.1 and 65.75.246.20, and to deny all other IP addresses from connecting to the web server. IP address 127.0.0.1 is commonly referred to in computer networking as "localhost", i.e., the machine itself, which would allow the server to connect to itself. 65.75.246.20, as discussed ante, is the IP address for the front-end server, which must be permitted to access the back-end server. The "deny all" line tells the web server to deny connections from any IP address for which there is no specific exception provided.

25. Based on this configuration, it would have been impossible for Special Agent Tarbell to access the portion of the .49 server containing the Silk Road market data, including a portion of the login page, simply by entering the IP address of the server in his browser.

Does it seem like the defense is making a reasonably sound argument here? Are there any glaring holes in their reasoning to you? Etc.? (I would gladly rather have an answer to this that is filled with qualifiers and hedges than no answer at all, and as such, hereby promise that I will felicitously include all those qualifiers and hedges when quoting.)

Any other observations on this pre-trial debate would also be welcome.

Thanks for your time, very, very, sincerely.

Best Regards,
Matthew

-- *Matthew D. Phelan* "editorial contractor" *Black Bag - Gawker* @CBMDP // twitter 917.859.1266 // cellular telephone matthew.phelan at gawker.com // PGP Public Key // email

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From reallfqq-nginx at yahoo.fr Fri Feb 13 21:05:25 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 13 Feb 2015 22:05:25 +0100 Subject: Reporter looking to talk about the Silk Road Case / Special Agent Chris Tarbell In-Reply-To: References: Message-ID:

Partial information = partial answer.

I do not know the case, so maybe the questions I will ask have obvious answers.

The only way to understand how the backend server behaves is to see its whole configuration, namely 'Exhibit 6', which I cannot seem to find. Do you have a direct link to it?

It would also be interesting to know where the agent attempted to connect *from*. If he already had access to the front-end server through compromising it, he could then initiate connections from there successfully. Is it said he managed to connect to that backend directly from outside the infrastructure? That looks improbable to me, since I consider that people behind such activities hiding on the Tor network know what they are doing and are most probably paranoid.

---
*B. R.*

On Fri, Feb 13, 2015 at 8:34 PM, Matthew Phelan wrote:
> Hey all, esteemed members of this Nginx mailing list.
>
>
> I'm a freelance reporter (former Onion headline writer and former chemical
> engineer) trying to gather some kind of technical consensus on a part of
> the Silk Road pretrial that seems to have become mired in needless
> ambiguity. Specifically, the prosecution's explanation for how they were
> able to locate the Silk Road's Icelandic server IP address.
>
> You may have seen Australian hacker Nik Cubrilovic's long piece
> on
> how it, at least, appears that the government has submitted a deeply
> implausible scenario for how they came to locate the Silk Road server. Or Bruce
> Scheiener's comments
> . Or
> someone else's. (The court records are hyperlinked in the article, but they
> can be found here
>
> and here
> ,
> if you'd rather peruse them without Nik's logic prejudicing your own
> opinion.
In addition, here > 's > the opinion of defendant Ross Ulbricht's lawyer Josh Horowitz, himself a > technical expert in this field, wherein he echoes Nik Cubrilovic's critical > interpretation of the state's discovery disclosures.) > > I'm hoping that your collective area of expertise in Nginx might allow > some of you to comment on this portion of the case, ideally on the record, > for an article I'm working on. > > My goal is to amass many expert opinions on this. It seems like a very > open and shut case that beat reporters covering it last October gave a > little too much "He said. She said."-style false equivalency. > > I know this is a cold call. PLEASED TO MEET YOU! > > *Here, below, is the main question, I believe:* > > This portion of the defense's expert criticism > > of the prosecution's testimony from former SA Chris Tarbell > > (at least) appears the most clear cut and definitive: > > ? 7. Without identification by the Government, it was impossible to > pinpoint the 19 lines in the access logs showing the date and time of law > enforcement access to the .49 server. > > 23. The ?live-ssl? configuration controls access to the market data > contained on the .49 server. This is evident from the configuration line: > root /var/www/market/public > which tells the Nginx web server that the folder ?public? contains the > website content to load when visitors access the site. > > 24. The critical configuration lines from the live-ssl file are: > allow 127.0.0.1; > allow 62.75.246.20; > deny all; > These lines tell the web server to allow access from IP addresses > 127.0.0.1 and 65.75.246.20, and to deny all other IP addresses from > connecting to the web server. IP address 127.0.0.1 is commonly referred to > in computer networking as ?localhost? i.e., the machine itself, which would > allow the server to connect to itself. 65.75.246.20, as discussed ante, is > the IP address for the front-end server, which must be permitted to access > the back-end server. 
The ?deny all? line tells the web server to deny > connections from any IP address for which there is no specific exception > provided. > > 25. Based on this configuration, it would have been impossible for Special > Agent Tarbell to access the portion of the .49 server containing the Silk > Road market data, including a portion of the login page, simply by entering > the IP address of the server in his browser. > > Does it seem like the defense is making a reasonably sound argument here? > Are there any glaring holes in their reasoning to you? Etc.? (I would > gladly rather have an answer to this that is filled with qualifiers and > hedges than no answer at all, and as such, hereby promise that I will > felicitously include all those qualifiers and hedges when quoting.) > > Any other observations on this pre-trail debate would also be welcome. > > Thanks for your time, very, very, sincerely. > > Best Regards, > Matthew > -- > > *Matthew D. Phelan* > "editorial contractor" > > *Black Bag ? Gawker * > @CBMDP // twitter > 917.859.1266 // cellular telephone > matthew.phelan at gawker.com // PGP Public Key > // email > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marius.kjeldahl at gmail.com Fri Feb 13 21:11:36 2015 From: marius.kjeldahl at gmail.com (Marius Kjeldahl) Date: Fri, 13 Feb 2015 22:11:36 +0100 Subject: Proxying websockets, connections sometimes being dropped Message-ID: I've been investigating a fairly infrequent issue related to websockets being dropped and attempted downgraded to long-polling when talking to a cometd server (a proprietary java server with jetty embedded). To simulate the observed behaviour from our browser clients, I've written my own test script which simply does a connect/subscribe/disconnect in sequence, one at a time. 
When run through nginx, after a short while (seconds), the event stream stops, the client tries a reconnect using long-poll which also fails, before a successful reconnect is completed using websockets and the responses are fast for a few seconds again. When I run the same client without proxying through nginx, connecting straight to the server, I have no such issues. It never drops any connections and responses are always fast. Considering I'm only doing one request at a time I believe it really shouldn't be necessary to tweak the default nginx settings, but I have done it anyway. Increasing various connection and timeout related limits, both inside nginx and the linux os, seems to make the stoppages less frequent, but they still happen fairly frequently (at least compared to outside nginx where it never happens). Another weird observation is that if I start another client on the same machine shortly after the original client stops, the new client still runs fast for a few seconds before encountering similar issues. If nginx was out of some resources I would expect the second client to more or less immediately stop, but it does not seem to happen. Despite this, I do believe I am hitting issues related to nginx, possibly running out of resources because nginx behaves differently than the server it proxies for with regards to not freeing sockets or similar. I've also looked at the number of listening sockets and sockets being kept around after close etc (the normal "server tuning stuff"), but I'm not finding any significant numbers. I am looking for advice and/or pointers on how to avoid this issue (again, keeping in mind this is testing ONE client doing sequential requests). Any help would be appreciated. Thanks, Marius K. 
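For what it's worth, a minimal sketch of a WebSocket-capable proxy block, assuming a hypothetical backend on 127.0.0.1:8080 (the upstream name, path and timeout values are illustrative, not taken from Marius's setup). One thing worth checking in cases like this is proxy_read_timeout: nginx closes a proxied connection that has seen no traffic for that long (60 seconds by default), which can look exactly like infrequent, random WebSocket drops:

```nginx
upstream cometd_backend {
    server 127.0.0.1:8080;
}

server {
    listen 80;

    location /cometd/ {
        proxy_pass http://cometd_backend;

        # WebSocket upgrades require HTTP/1.1 plus explicit forwarding of
        # the hop-by-hop Upgrade/Connection headers.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";

        # An established WebSocket that is idle longer than these timeouts
        # is closed by nginx; raise them (or have the application send
        # periodic pings) if connections drop under low traffic.
        proxy_read_timeout 300s;
        proxy_send_timeout 300s;
    }
}
```

A direct client-to-server connection has no such idle timeout, which would be consistent with the problem appearing only when the proxy is in the path.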
From matthew.phelan at gawker.com Fri Feb 13 21:21:09 2015 From: matthew.phelan at gawker.com (Matthew Phelan) Date: Fri, 13 Feb 2015 16:21:09 -0500 Subject: Reporter looking to talk about the Silk Road Case / Special Agent Chris Tarbell In-Reply-To: References: Message-ID: Thanks for the interest, B.R. *---The only way to understand how the backend server behaves is to see its whole configuration, namely 'Exhibit 6' which I cannot seem to find. */* Do you have a direct link to it?* Sadly, no. Here , you will find a torrent to "all the evidentiary exhibits" introduced during the trial of Ross Ulbricht . Exhibit 6 should be in that torrent somewhere. *It would also be interesting to know where the agent attempted to connect from. If he already had access to the front-end server through comprimission, he could then initiate connections from there successfully.* *Is it said he managed to connect to that backend directly from outside the infrastructure?* I may be wrong, but my recollection is that "Yes" it has been said that Tarbell managed to connect from outside the infrastructure. This is perhaps why certain commentators have found the Tarbell declaration implausible. --- Best, Matthew On Fri, Feb 13, 2015 at 4:05 PM, B.R. wrote: > Partial information = partial answer. > > I do not know the case so maybe questions I will ask have obvious answers. > > The only way to understand how the backend server behaves is to see its > whole configuration, namely 'Exhibit 6' which I cannot seem to find. > Do you have a direct link to it? > > It would also be interesting to know where the agent attempted to connect > *from*. If he already had access to the front-end server through > comprimission, he could then initiate connections from there successfully. > Is it said he managed to connect to that backend directly from outside the > infrastructure? 
That looks improbable to me since I consider that people behind > such activities, hiding on the Tor network, know what they are doing and are most > probably paranoid. > --- > *B. R.* > > On Fri, Feb 13, 2015 at 8:34 PM, Matthew Phelan > wrote: > >> Hey all, esteemed members of this Nginx mailing list. >> >> >> I'm a freelance reporter (former Onion headline writer and former >> chemical engineer) trying to gather some kind of technical consensus on a >> part of the Silk Road pretrial that seems to have become mired in needless >> ambiguity. Specifically, the prosecution's explanation for how they were >> able to locate the Silk Road's Icelandic server IP address. >> >> You may have seen Australian hacker Nik Cubrilovic's long piece on >> how it, at least, appears that the government has submitted a deeply >> implausible scenario for how they came to locate the Silk Road server. Or Bruce >> Schneier's comments. >> Or someone else's. (The court records are hyperlinked in the article, but >> they can be found here and here, >> if you'd rather peruse them without Nik's logic prejudicing your own >> opinion. In addition, here's >> the opinion of defendant Ross Ulbricht's lawyer Josh Horowitz, himself a >> technical expert in this field, wherein he echoes Nik Cubrilovic's critical >> interpretation of the state's discovery disclosures.) >> >> I'm hoping that your collective expertise in Nginx might allow >> some of you to comment on this portion of the case, ideally on the record, >> for an article I'm working on. >> >> My goal is to amass many expert opinions on this. It seems like a very >> open-and-shut case that beat reporters covering it last October gave a >> little too much "He said. She said."-style false equivalency. >> >> I know this is a cold call. PLEASED TO MEET YOU! 
>> >> *Here, below, is the main question, I believe:* >> >> This portion of the defense's expert criticism >> of the prosecution's testimony from former SA Chris Tarbell >> (at least) appears the most clear cut and definitive: >> >> ¶ 7. Without identification by the Government, it was impossible to >> pinpoint the 19 lines in the access logs showing the date and time of law >> enforcement access to the .49 server. >> >> 23. The 'live-ssl' configuration controls access to the market data >> contained on the .49 server. This is evident from the configuration line: >> root /var/www/market/public >> which tells the Nginx web server that the folder 'public' contains the >> website content to load when visitors access the site. >> >> 24. The critical configuration lines from the live-ssl file are: >> allow 127.0.0.1; >> allow 62.75.246.20; >> deny all; >> These lines tell the web server to allow access from IP addresses >> 127.0.0.1 and 65.75.246.20, and to deny all other IP addresses from >> connecting to the web server. IP address 127.0.0.1 is commonly referred to >> in computer networking as 'localhost' i.e., the machine itself, which would >> allow the server to connect to itself. 65.75.246.20, as discussed ante, is >> the IP address for the front-end server, which must be permitted to access >> the back-end server. The 'deny all' line tells the web server to deny >> connections from any IP address for which there is no specific exception >> provided. >> >> 25. Based on this configuration, it would have been impossible for >> Special Agent Tarbell to access the portion of the .49 server containing >> the Silk Road market data, including a portion of the login page, simply by >> entering the IP address of the server in his browser. >> >> Does it seem like the defense is making a reasonably sound argument here? >> Are there any glaring holes in their reasoning to you? Etc.? 
(I would >> gladly rather have an answer to this that is filled with qualifiers and >> hedges than no answer at all, and as such, hereby promise that I will >> felicitously include all those qualifiers and hedges when quoting.) >> >> Any other observations on this pre-trial debate would also be welcome. >> >> Thanks for your time, very, very, sincerely. >> >> Best Regards, >> Matthew >> -- >> >> *Matthew D. Phelan* >> "editorial contractor" >> >> *Black Bag - Gawker * >> @CBMDP // twitter >> 917.859.1266 // cellular telephone >> matthew.phelan at gawker.com // PGP Public Key >> // email >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sat Feb 14 06:22:40 2015 From: nginx-forum at nginx.us (malintha) Date: Sat, 14 Feb 2015 01:22:40 -0500 Subject: Nginx add location details to URL when we stop decoding URL In-Reply-To: <20150213125939.GA19012@mdounin.ru> References: <20150213125939.GA19012@mdounin.ru> Message-ID: Hi Maxim, Your answer is right and thank you for it. I applied it with a small correction. set $modified_uri $request_uri; if ($modified_uri ~ "^/gateway(/.*)") { set $modified_uri $1; } proxy_pass http://upstream$modified_uri; Thank you, Malintha Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256639,256669#msg-256669 From mat999 at gmail.com Sat Feb 14 14:35:42 2015 From: mat999 at gmail.com (SplitIce) Date: Sun, 15 Feb 2015 01:35:42 +1100 Subject: Google dumps SPDY in favour of HTTP/2, any plans for nginx? In-Reply-To: References: Message-ID: Indeed. 
The Wikipedia page covers it quite well FYI - http://en.wikipedia.org/wiki/HTTP/2 So what is really being asked is for a roadmap for the implementation of the non-draft differences (i.e. HTTP/2.0 allows for non-TLS communication, and multiplexes differently). I am sure nginx will once again be at the forefront of technology and implement it when possible. :) On Wed, Feb 11, 2015 at 11:38 AM, Ilya Grigorik wrote: > Pedantic, but I object to the wording in the title :) ... SPDY was/is an > experimental branch of HTTP/2, and now that HTTP/2 is in the final stages > of becoming a standard, there is no longer the need for SPDY and hence the > announcement of a deprecation timeline -- it's not and never was SPDY vs. > HTTP/2. That aside... > > From what I understand (at least from a few conversations at nginx.conf), > there are already some existing efforts around enabling http/2 support? I'd > love to see some official product plans and/or timelines as well. > > ig > > On Tue, Feb 10, 2015 at 3:25 PM, mex wrote: > >> Google dumps SPDY in favour of HTTP/2, any plans or roadmap for HTTP/2 in >> nginx? >> >> >> see >> https://blog.chromium.org/2015/02/hello-http2-goodbye-spdy-http-is_9.html >> >> "HTTP is the fundamental networking protocol that powers the web. The >> majority of sites use version 1.1 of HTTP, which was defined in 1999 with >> RFC2616. A lot has changed on the web since then, and a new version of the >> protocol named HTTP/2 is well on the road to standardization. We plan to >> gradually roll out support for HTTP/2 in Chrome 40 in the upcoming weeks. >> >> HTTP/2's primary changes from HTTP/1.1 focus on improved performance. Some >> key features such as multiplexing, header compression, prioritization and >> protocol negotiation evolved from work done in an earlier open, but >> non-standard protocol named SPDY. Chrome has supported SPDY since Chrome >> 6, >> but since most of the benefits are present in HTTP/2, it's time to say >> goodbye. 
We plan to remove support for SPDY in early 2016, and to also >> remove support for the TLS extension named NPN in favor of ALPN in Chrome >> at >> the same time. Server developers are strongly encouraged to move to HTTP/2 >> and ALPN." >> >> >> >> cheers, >> >> mex >> >> Posted at Nginx Forum: >> http://forum.nginx.org/read.php?2,256561,256561#msg-256561 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Sat Feb 14 20:23:25 2015 From: hobson42 at gmail.com (Ian) Date: Sat, 14 Feb 2015 20:23:25 +0000 Subject: Converting POST into GET Message-ID: <54DFAEBD.4090101@gmail.com> Hi, I need (as part of a cache busting exercise) to convert a POST into a GET with the same URI parameters. (The URL is currently a GET, but a proxy I have to subvert is mangling things :( There is no true POST data.) Is there an Nginx configuration that would do this? I tried setting $request_method but that caused a configuration error. Thanks, Ian -- Ian Hobson Mid Auchentiber, Auchentiber, Kilwinning, North Ayrshire KA13 7RR Tel: 0203 287 1392 Preparing eBooks for Kindle and ePub formats to give the best reader experience. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacklinkers at gmail.com Sat Feb 14 21:57:26 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Sat, 14 Feb 2015 22:57:26 +0100 Subject: 301 redirect Message-ID: Hello, I need to redirect some URLs after redesigning my website. I use a 301 redirect for the HTTP to HTTPS protocol: if ($scheme = "http") { return 301 https://$server_name$request_uri; } But how do I redirect URLs that have been changed? i.e. 
https://mywebsite.com/oldname.html to https://mywebsite.com/newname.html I did try if ( $request_filename ~ oldname.html/ ) { rewrite ^ https://mywebsite.com/newname.html/? permanent; } But this doesn't work. My website is hosted on a server with Direct Admin running on CentOS with Nginx as the webserver. Don't know if it helps. Thanks in advance for help -------------- next part -------------- An HTML attachment was scrubbed... URL: From hobson42 at gmail.com Sat Feb 14 22:18:20 2015 From: hobson42 at gmail.com (Ian) Date: Sat, 14 Feb 2015 22:18:20 +0000 Subject: 301 redirect In-Reply-To: References: Message-ID: <54DFC9AC.10603@gmail.com> On 14/02/2015 21:57, JACK LINKERS wrote: > Hello, > > I need to redirect some URLs after redesigning my website. > I use a 301 redirect for the HTTP to HTTPS protocol: > > if ($scheme = "http") { > return 301 https://$server_name$request_uri; > } > > But how do I redirect URLs that have been changed? > i.e. https://mywebsite.com/oldname.html to > https://mywebsite.com/newname.html > > I did try > > if ( $request_filename ~ oldname.html/ ) { > rewrite ^ https://mywebsite.com/newname.html/? permanent; > } > > But this doesn't work. > > My website is hosted on a server with Direct Admin running on CentOS > with Nginx as the webserver. Don't know if it helps. > > Thanks in advance for help > Hi Jack, I'm no nginx expert, but I think you should use location instead of if; it's faster and less prone to create other errors. I think you can also use 301 redirects. I would proceed to set up lots of location clauses thus: location = /oldurl { return 301 https://$server_name/newurl; } Hope this works for you Ian -- Ian Hobson Mid Auchentiber, Auchentiber, Kilwinning, North Ayrshire KA13 7RR Tel: 0203 287 1392 Preparing eBooks for Kindle and ePub formats to give the best reader experience. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Sat Feb 14 22:22:20 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 14 Feb 2015 22:22:20 +0000 Subject: 301 redirect In-Reply-To: References: Message-ID: <20150214222220.GJ13461@daoine.org> On Sat, Feb 14, 2015 at 10:57:26PM +0100, JACK LINKERS wrote: > But how do I redirect URLs that have been changed? > i.e. https://mywebsite.com/oldname.html to https://mywebsite.com/newname.html location = /oldname.html { return 301 /newname.html; } > I did try > > if ( $request_filename ~ oldname.html/ ) { > rewrite ^ https://mywebsite.com/newname.html/? permanent; > } > > But this doesn't work. Yes, it does. If your incoming request matches the string "oldname.html/". It just isn't a very good way of implementing it. f -- Francis Daly francis at daoine.org From jacklinkers at gmail.com Sat Feb 14 22:30:19 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Sat, 14 Feb 2015 23:30:19 +0100 Subject: 301 redirect In-Reply-To: <20150214222220.GJ13461@daoine.org> References: <20150214222220.GJ13461@daoine.org> Message-ID: Hi Francis, Thanks for your input. What would be the best way of doing it then? (I forgot to mention there is a large number of URLs: +/- 20) Is this a good way?: map $old $new { oldlink.html newlink.com oldink2.html newlink2.html } location $old { return 301 $scheme://$host$new; } If not, could you show me an example? Thanks in advance 2015-02-14 23:22 GMT+01:00 Francis Daly : > On Sat, Feb 14, 2015 at 10:57:26PM +0100, JACK LINKERS wrote: > > > But how do I redirect URLs that have been changed? > > i.e. https://mywebsite.com/oldname.html to > https://mywebsite.com/newname.html > > location = /oldname.html { return 301 /newname.html; } > > > I did try > > > > if ( $request_filename ~ oldname.html/ ) { > > rewrite ^ https://mywebsite.com/newname.html/? permanent; > > } > > > > But this doesn't work. > > Yes, it does. If your incoming request matches the string "oldname.html/". 
> > It just isn't a very good way of implementing it. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat Feb 14 22:37:53 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 14 Feb 2015 22:37:53 +0000 Subject: 301 redirect In-Reply-To: References: <20150214222220.GJ13461@daoine.org> Message-ID: <20150214223753.GK13461@daoine.org> On Sat, Feb 14, 2015 at 11:30:19PM +0100, JACK LINKERS wrote: Hi there, > Thanks for your input. What would be the best way of doing it then? > (I forgot to mention there is a large number of URLs: +/- 20) A bunch of lines like location = /oldname.html { return 301 /newname.html; } (It's the "if" that isn't the good way of implementing it.) > Is this a good way?: No. I'd say just use "location =". Good luck with it, f -- Francis Daly francis at daoine.org From jacklinkers at gmail.com Sat Feb 14 22:40:42 2015 From: jacklinkers at gmail.com (JACK LINKERS) Date: Sat, 14 Feb 2015 23:40:42 +0100 Subject: 301 redirect In-Reply-To: <20150214223753.GK13461@daoine.org> References: <20150214222220.GJ13461@daoine.org> <20150214223753.GK13461@daoine.org> Message-ID: Ok, thanks! 2015-02-14 23:37 GMT+01:00 Francis Daly : > On Sat, Feb 14, 2015 at 11:30:19PM +0100, JACK LINKERS wrote: > > Hi there, > > > Thanks for your input. What would be the best way of doing it then? > > (I forgot to mention there is a large number of URLs: +/- 20) > > A bunch of lines like > > location = /oldname.html { return 301 /newname.html; } > > (It's the "if" that isn't the good way of implementing it.) > > > Is this a good way?: > > No. > > I'd say just use "location =". 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Feb 15 01:48:27 2015 From: nginx-forum at nginx.us (apexlir) Date: Sat, 14 Feb 2015 20:48:27 -0500 Subject: BoringSSL build issue Message-ID: <45d1b1f34352d6094f506b9e89a0c5f5.NginxMailingListEnglish@forum.nginx.org> Hello, I get the following error when I try to build nginx 1.7.10 against the latest boringssl revision: cc -c -pipe -O -W -Wall -Wpointer-arith -Wno-unused-parameter -Werror -g -I /usr/local/boringssl/include -I/usr/include -I src/core -I src/event -I src/event/modules -I src/os/unix -I objs \ -o objs/src/event/ngx_event_openssl.o \ src/event/ngx_event_openssl.c src/event/ngx_event_openssl.c: In function 'ngx_ssl_connection_error': src/event/ngx_event_openssl.c:1896:21: error: 'SSL_R_BLOCK_CIPHER_PAD_IS_WRONG' undeclared (first use in this function) src/event/ngx_event_openssl.c:1896:21: note: each undeclared identifier is reported only once for each function it appears in I've fixed the build by removing line 1896 from ngx_event_openssl.c but I don't know if it's recommended ... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256690,256690#msg-256690 From luky-37 at hotmail.com Sun Feb 15 11:25:05 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 15 Feb 2015 12:25:05 +0100 Subject: BoringSSL build issue In-Reply-To: <45d1b1f34352d6094f506b9e89a0c5f5.NginxMailingListEnglish@forum.nginx.org> References: <45d1b1f34352d6094f506b9e89a0c5f5.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Hello, > > I get the following error when I try to build nginx 1.7.10 against the latest boringssl > revision: What do you mean by latest revision? Latest 2.1.3, or the current git tree on github, or cloned from CVS? 
I don't really see how this could happen, libressl didn't remove this definition. Lukas From dhbehkadeh at gmail.com Sun Feb 15 11:36:08 2015 From: dhbehkadeh at gmail.com (=?UTF-8?B?2K/Yp9mI2K8g2YXYtNmH2K/bjA==?=) Date: Sun, 15 Feb 2015 15:06:08 +0330 Subject: No subject Message-ID: -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Sun Feb 15 14:02:05 2015 From: nginx-forum at nginx.us (dansch8888) Date: Sun, 15 Feb 2015 09:02:05 -0500 Subject: rewrite rules cms phpwcms not working Message-ID: <0bcb435c8eb98912a2d073c38c0d31e9.NginxMailingListEnglish@forum.nginx.org> I'm trying to convert the apache rewrite rules for the CMS phpwcms (www.phpwcms.de), but with the online converter tools and some adjustments recommended at the nginx wiki I still struggle with that. That's my configuration **Server** Debian Wheezy nginx v 1.6.2 php5-fpm 5.5.20-1~dotdeb.1 (fpm-fcgi) **Rewrite Rule Apache** RewriteEngine on RewriteBase / RewriteRule ^(track|include|img|template|picture|filearchive|content|robots\.txt|favicon\.ico)($|/) - [L] RewriteRule ^index\.php$ - [L] RewriteCond %{REQUEST_FILENAME} !-f RewriteCond %{REQUEST_FILENAME} !-d RewriteRule ^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)\.html$ /index.php?id=$1,$2,$3,$4,$5,$6&%{QUERY_STRING} RewriteRule ^(.+)\.html$ /index.php?$1&%{QUERY_STRING} Rewrite should do for example this: http://hometest.home.local/home_de.html -> http://hometest.home.local/index.php?home_de CMS without rewrite works fine. Tests with the converted rewrite rules were not working and more confusing than rewriting with the try_files option. So I switched to test with try_files. Examples for wordpress or drupal were not working either. Now I'm at this stage. I believe the main problem is the "QUERY_STRING", but I have no idea how that gets rendered. 
**Nginx Site Config /etc/nginx/sites-available/default :** server { listen 80 default_server; server_name hometest.home.local; root /data/www/vhosts/default/public_html; location / { try_files $uri /index.php; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } rewrite_log on; access_log /data/www/vhosts/default/logs/access.log; error_log /data/www/vhosts/default/logs/error.log debug; } That's what the logfile shows. **Nginx Error Log:** [debug] 6102#0: *1 http run request: "/index.php?" [debug] 6102#0: *1 http upstream check client, write event:1, "/index.php" [debug] 6102#0: *1 http upstream recv(): -1 (11: Resource temporarily unavailable) ... [error] 6102#0: *1 FastCGI sent in stderr: "PHP message: PHP Notice: Use of undefined constant Y - assumed 'Y' in /data/www/vhosts/default/public_html/include/inc_front/front.func.inc.php(2287) : eval()'d code on line 1 ** front.func.inc.php(2287)** function include_int_phpcode($string) { // return the PHP code $s = html_despecialchars($string[1]); $s = str_replace('
<br>', "\n", $s); $s = str_replace('<br />
', "\n", $s); ob_start(); eval($s.";"); return ob_get_clean(); } I hope someone can help me with this. The phpwcms forum couldn't give me an answer so far. Thank you Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256693,256693#msg-256693 From r at roze.lv Sun Feb 15 15:14:43 2015 From: r at roze.lv (Reinis Rozitis) Date: Sun, 15 Feb 2015 17:14:43 +0200 Subject: Converting POST into GET In-Reply-To: <54DFAEBD.4090101@gmail.com> References: <54DFAEBD.4090101@gmail.com> Message-ID: <326119A0D1194ACBA69A78796C50F481@MezhRoze> > I need (as part of a cache busting exercise) to convert a POST into a GET > with the same URI parameters. ( The URL is currently a GET, but a proxy I > have to subvert is mangling things :( There is no true POST data). > Is there an Nginx configuration that would do this? I tried setting > $request_method but that caused a configuration error. It's a bit unclear what you mean by "I have to subvert is mangling things" but you can change the method for proxied requests with proxy_method ( http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_method ) rr From nginx-forum at nginx.us Sun Feb 15 16:19:31 2015 From: nginx-forum at nginx.us (apexlir) Date: Sun, 15 Feb 2015 11:19:31 -0500 Subject: BoringSSL build issue In-Reply-To: References: Message-ID: https://boringssl.googlesource.com/boringssl They haven't released yet, so I just cloned the repo! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256690,256695#msg-256695 From luky-37 at hotmail.com Sun Feb 15 17:51:47 2015 From: luky-37 at hotmail.com (Lukas Tribus) Date: Sun, 15 Feb 2015 18:51:47 +0100 Subject: BoringSSL build issue In-Reply-To: References: , Message-ID: > https://boringssl.googlesource.com/boringssl > > They haven't released yet, so I just cloned the repo! Sorry, I was thinking about libressl instead. 
BoringSSL removed SSL_R_BLOCK_CIPHER_PAD_IS_WRONG return errors in commits 1e52ecac4d and 29b186736c, and the definition was finally removed in commit 689be0f4b7, which is what breaks the build here. You can work around this by #ifdef'ing the line in ngx_event_openssl.c out; nothing bad will happen because of this. Not sure if it still makes sense to work around every single build breakage caused by upstream boringssl changes in nginx, seems like a never-ending cat-and-mouse game. Lukas From vozlt at vozlt.com Mon Feb 16 09:20:20 2015 From: vozlt at vozlt.com (=?UTF-8?B?6rmA7JiB7KO8?=) Date: Mon, 16 Feb 2015 18:20:20 +0900 (KST) Subject: Nginx virtual host traffic status module Message-ID: <312d0dc98f43345c2a571ed7218d5d2@cvweb04.wmail.nhnsystem.com> Hello, I've created an open source module similar to the live traffic monitoring of nginx plus. I've made it as a C module, and it supports virtual hosts. If anyone tries it out, feedback is welcome. Note: I haven't tested it extensively in a production environment yet. Project name: Nginx virtual host traffic status module Project link: https://github.com/vozlt/nginx-module-vts Thanks. -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon Feb 16 18:57:23 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 16 Feb 2015 19:57:23 +0100 Subject: Nginx virtual host traffic status module In-Reply-To: <312d0dc98f43345c2a571ed7218d5d2@cvweb04.wmail.nhnsystem.com> References: <312d0dc98f43345c2a571ed7218d5d2@cvweb04.wmail.nhnsystem.com> Message-ID: Looks cool! It will need deeper review, of course, but if the behavior matches the display, that is promising. Thanks! --- *B. R.* On Mon, Feb 16, 2015 at 10:20 AM, 김영주 wrote: > Hello, > > > > I've created an open source module similar to the live traffic monitoring > of nginx plus. > > I've made it as a C module, and it supports virtual hosts. > > If anyone tries it out, feedback is welcome. 
> Note: I haven't tested it extensively in a production environment yet. > > > > Project name: Nginx virtual host traffic status module > > Project link: https://github.com/vozlt/nginx-module-vts > > > > Thanks. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ben at indietorrent.org Mon Feb 16 19:55:51 2015 From: ben at indietorrent.org (Ben Johnson) Date: Mon, 16 Feb 2015 14:55:51 -0500 Subject: Nginx Upload Progress Module stops functioning once behind reverse-proxy Message-ID: <54E24B47.1080004@indietorrent.org> Hello, I've recently compiled nginx-1.7.10 with a third-party upload-progress tracking module, which is described at http://wiki.nginx.org/HttpUploadProgressModule. This module works perfectly well, until I attempt to put the entire setup behind a reverse-proxy. Below is a simplified configuration that demonstrates the issue I'm having. With this configuration, the upload progress reporting functions correctly *only* if I access the reverse-proxy port directly, at https://example.com:1337. As soon as I attempt to access the upstream host at https://example.com (on port 443, instead of the proxy port, 1337), the upload progress module always reports "starting" as its state (which, according to the documentation, implies "the upload request hasn't been registered yet or is unknown"). Once I have this working with the reverse-proxy in place, I want to change the line "listen *:1337 ssl;" to "listen 127.0.0.1:1337 ssl;" so that user-agents cannot access the proxy directly. Might anyone know why this module ceases to work correctly once behind a reverse-proxy, as the below config demonstrates? Is there some proxy-specific configuration directive that I've overlooked? Thanks for any help! 
-Ben ################################################################### http { server { listen *:443 ssl; location ^~ / { proxy_pass https://127.0.0.1:1337; } } upload_progress uploads 5m; server { listen *:1337 ssl; location @virtual { include fastcgi_params; fastcgi_pass unix:/var/lib/php5-fpm/web1.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_intercept_errors on; } location = /progress/ { report_uploads uploads; upload_progress_json_output; } location /file-upload/progress/ { proxy_pass https://127.0.0.1:1337; track_uploads uploads 5s; } } } ################################################################### From francis at daoine.org Mon Feb 16 20:37:26 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 16 Feb 2015 20:37:26 +0000 Subject: rewrite rules cms phpwcms not working In-Reply-To: <0bcb435c8eb98912a2d073c38c0d31e9.NginxMailingListEnglish@forum.nginx.org> References: <0bcb435c8eb98912a2d073c38c0d31e9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150216203726.GL13461@daoine.org> On Sun, Feb 15, 2015 at 09:02:05AM -0500, dansch8888 wrote: Hi there, I've not tried to use phpwcms. But from the apache config and your description, I think that the following provides some of the information to the php interpreter that it wants. > (track|include|img|template|picture|filearchive|content|robots\.txt|favicon\.ico)($|/) > - [L] I think that says "anything that matches that pattern should be served as a plain file, and not given to the php interpreter". But there are also .htaccess files in some of those directories. So, after you are happy that the thing you want to be working is working on your test system, add entries like location ^~ /track/ {} location ^~ /filearchive/ { deny all; } to your server{} block, according to what you want, before you make it public. 
> RewriteRule ^index\.php$ - [L] > RewriteCond %{REQUEST_FILENAME} !-f > RewriteCond %{REQUEST_FILENAME} !-d > RewriteRule > ^([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)\.([0-9]+)\.html$ > /index.php?id=$1,$2,$3,$4,$5,$6&%{QUERY_STRING} > RewriteRule ^(.+)\.html$ /index.php?$1&%{QUERY_STRING} > > Rewrite should do for example this: > http://hometest.home.local/home_de.html -> > http://hometest.home.local/index.php?home_de I think that the above rewrites to http://hometest.home.local/index.php?home_de& If the difference (the extra &) does not matter, then all is good. At http level, add map $request_uri $bit_of_qs { default ""; ~/(?P<name>.*)\.html $name; } Then at server level include server { if ($request_uri ~ /(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.html) { set $bit_of_qs "id=$1,$2,$3,$4,$5,$6"; } location / { try_files $uri $uri/ @index.php; } location ~ \.php$ { fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } location @index.php { fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/index.php; fastcgi_param QUERY_STRING $bit_of_qs&$query_string; } } Depending on your fastcgi server, you may want the "include" line after the "fastcgi_param" lines. > Now I'm at this stage. I believe the main problem is the "QUERY_STRING", but > I have no idea how that gets rendered. The above sets QUERY_STRING to what I think is the same thing that your apache config sets it to. > I hope someone can help me with this. The phpwcms forum couldn't give me an > answer so far. I don't know if the above will let everything work -- I'd expect that it would not -- but it should push you in the right direction. (For example: it is not immediately clear to me what should happen to a request for /a/1.2.3.4.5.6.html -- probably the above configuration does not do what is wanted there.) 
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Mon Feb 16 20:52:08 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 16 Feb 2015 20:52:08 +0000 Subject: Nginx Upload Progress Module stops functioning once behind reverse-proxy In-Reply-To: <54E24B47.1080004@indietorrent.org> References: <54E24B47.1080004@indietorrent.org> Message-ID: <20150216205208.GM13461@daoine.org> On Mon, Feb 16, 2015 at 02:55:51PM -0500, Ben Johnson wrote: Hi there, > I've recently compiled nginx-1.7.10 with a third-party upload-progress > tracking module, which is described at > http://wiki.nginx.org/HttpUploadProgressModule . Read the first two paragraphs on that page... > This module works perfectly well, until I attempt to put the entire > setup behind a reverse-proxy. ...and draw a picture of what happens when a client sends a 1MB upload to nginx, and nginx sends it via proxy_pass to an upstream. That should show you why it fails. > With this configuration, the upload progress reporting functions > correctly *only* if I access the reverse-proxy port directly, at > https://example.com:1337. Yes. That's what nginx does. Why does there need to be an upload_progress module in the first place, instead of letting the upstream server take care of it? In this case, you have put your upload_progress module on your upstream server. Not going to work. At least until you can use an "unbuffered upload". Which is not in current nginx. (But is in other things.) f -- Francis Daly francis at daoine.org From neubyr at gmail.com Tue Feb 17 20:34:45 2015 From: neubyr at gmail.com (neubyr) Date: Tue, 17 Feb 2015 12:34:45 -0800 Subject: ifdefine alternative Message-ID: I have an existing apache configuration which updates its allow-deny rules based on whether it's a development or a production system. For example, apache gets started with a '-D ${HOST_ENVIRONMENT}' parameter which is then used in an IfDefine block. 
I was wondering if Nginx has any similar options. I thought about using perl modules to set variables based on an environment variable, but I was wondering if there are any other options. Any ideas on alternatives to -D/IfDefine will be helpful. ~ thanks, N -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Feb 17 21:45:16 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 17 Feb 2015 16:45:16 -0500 Subject: ifdefine alternative In-Reply-To: References: Message-ID: <3987b52059b0eac20fcc34dc2b2b148f.NginxMailingListEnglish@forum.nginx.org> A much easier option is to use labels and perl to swap comment items around in the nginx.conf file. #PLABEL #development line....; production line....; if readln eq '^#PLABEL' then { swap 2 lines comment char; } else { write unchangedline; } Or something in that order. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256733,256734#msg-256734 From nginx-forum at nginx.us Tue Feb 17 23:26:59 2015 From: nginx-forum at nginx.us (dansch8888) Date: Tue, 17 Feb 2015 18:26:59 -0500 Subject: rewrite rules cms phpwcms not working In-Reply-To: <0bcb435c8eb98912a2d073c38c0d31e9.NginxMailingListEnglish@forum.nginx.org> References: <0bcb435c8eb98912a2d073c38c0d31e9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <15a0d03630b2d61c14715c3ae6559507.NginxMailingListEnglish@forum.nginx.org> Hi Francis, thank you for this really good explanation. I tried your examples and they work very well. In the next days I will test it more extensively. > > (track|include|img|template|picture|filearchive|content|robots\.txt|favicon\.ico)($|/) > > - [L] > I think that says "anything that matches that pattern should be served > as a plain file, and not given to the php interpreter". But there are > also .htaccess files in some of those directories. Yes, that's true, but isn't it also the case that on the apache server these folders and files are just excluded from rewriting? 
In my config I set folders with .htaccess only at the moment. location ^~ /config/ { deny all; } location ^~ /filearchive/ { deny all; } location ^~ /track/ { deny all; } location ^~ /upload/ { deny all; } > if ($request_uri ~ /(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.html) { > set $bit_of_qs "id=$1,$2,$3,$4,$5,$6"; > } This is coming from some old versions of the CMS, I believe, where aliases were not always set up. At the moment, when you create an article, the CMS sets a dummy alias by default. In the "old" times, there were URLs like index.php?id=0.0.0.0.0.0; now there are normally only URLs like index.php?dummyalias. > location / { > try_files $uri $uri/ @index.php; > } Is there a need for the $uri/? I had read this is for subfolders only. This CMS needs the rewrite just for the root folder and index.php. Thank you so far for this quick help. Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256693,256740#msg-256740 From rikske at deds.nl Tue Feb 17 23:41:34 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Wed, 18 Feb 2015 00:41:34 +0100 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: References: Message-ID: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> Dear, While searching for Nginx optimization patches, https://josephscott.org/archives/2014/12/nginx-1-7-8-fixes-200ms-delay-with-spdy/ came to my attention. Can anyone confirm if the '200ms Delay With SPDY' problem also applies to Nginx 1.6.x?
Thanks, Regards, Rik Ske From ben at indietorrent.org Wed Feb 18 02:30:46 2015 From: ben at indietorrent.org (Ben Johnson) Date: Tue, 17 Feb 2015 21:30:46 -0500 Subject: Nginx Upload Progress Module stops functioning once behind reverse-proxy In-Reply-To: <20150216205208.GM13461@daoine.org> References: <54E24B47.1080004@indietorrent.org> <20150216205208.GM13461@daoine.org> Message-ID: <54E3F956.5090207@indietorrent.org> On 2/16/2015 3:52 PM, Francis Daly wrote: > On Mon, Feb 16, 2015 at 02:55:51PM -0500, Ben Johnson wrote: > > Hi there, > Hi, Francis, and thank you for taking the time to review my question and respond. I value and appreciate your time. >> I've recently compiled nginx-1.7.10 with a third-party upload-progress >> tracking module, which is described at >> http://wiki.nginx.org/HttpUploadProgressModule . > > Read the first two paragraphs on that page... > >> This module works perfectly well, until I attempt to put the entire >> setup behind a reverse-proxy. > > ...and draw a picture of what happens when a client sends a 1MB upload > to nginx, and nginx sends it via proxy_pass to an upstream. That should > show you why it fails. > Abashedly, as yet, I lack the familiarity with nginx (and reverse-proxies, in general) to grasp the problem on a fundamental level. I've tried to visualize the conundrum and I can see two potential problems: - The fact that this is really a three-tiered server setup: 1) the nginx server running on port 443; 2) the nginx server running on port 1337; 3) the PHP server running on the socket. - The fact that in the config snippet I included, the server listening on port 1337 is "feeding the file to itself". Intuitively (and incorrectly, of course), it feels as though changing that second proxy_pass to proxy_pass https://127.0.0.1; would solve the problem. But, as you suggested, it does not and cannot. Any further explanation that you're willing to offer as to the fundamental problem will not fall on deaf ears. 
>> With this configuration, the upload progress reporting functions >> correctly *only* if I access the reverse-proxy port directly, at >> https://example.com:1337. > > Yes. > > That's what nginx does. > > Why does there need to be an upload_progress module in the first place, > instead of letting the upstream server take care of it? > Because the upstream server (PHP) performs miserably by comparison when handling massive file uploads (often in excess of 1GB), and doesn't support the most important (to me) features of this nginx module, coupled with the nginx Upload Module ( http://www.grid.net.ru/nginx/upload.en.html ). In particular, I need the ability to track upload progress, server-side, *and* support user-agents resuming uploads that fail for any reason. > In this case, you have put your upload_progress module on your upstream > server. > That's fine. I was hoping to avoid it, but I have control over the entire environment, so it's not a show-stopper. I was hoping to implement the reverse-proxy configuration so that the file-hungry virtual-host would have minimal potential to affect the handful of other virtual-hosts running under the same nginx instance. The idea was to "isolate" the virtual-host that employs these modules so that it could be restarted, be replaced with a more recent version of nginx, etc., with no real downtime risk to the upstream nginx instance. Imagine a hosting environment in which the "top-most" (furthest upstream) virtual-hosts are critical and should never drop offline for any period of time, but wherein it's no big deal if the "child" nginx instances go down for whatever reason. This was the topology I was hoping to implement. > Not going to work. At least until you can use an "unbuffered > upload". Which is not in current nginx. (But is in other things.) 
> > f > I would rather push this functionality upstream and consolidate all virtual-hosts, incurring whatever risk of downtime is introduced as a result, than replace nginx with another piece of software. Thanks again for assuring me that this simply won't work as-is and that the best and only course of action is to move the virtual-host in question upstream. Eager to hear any additional thoughts or critiques you may have re: the justification that I outlined. Maybe I'm "doing it wrong". Respectfully, -Ben From vbart at nginx.com Wed Feb 18 08:26:39 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Feb 2015 11:26:39 +0300 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> Message-ID: <9084820.MalaW2N874@vbart-laptop> On Wednesday 18 February 2015 00:41:34 rikske at deds.nl wrote: > Dear, > > While searching for Nginx optimize patches: > > https://josephscott.org/archives/2014/12/nginx-1-7-8-fixes-200ms-delay-with-spdy/ > > Came to my attention. > > Can anyone confirm if the '200ms Delay With SPDY' problem also applies to > Nginx 1.6.x? > [..] It applies to all versions up to 1.7.8. Also note that it's always a good idea (and strongly recommended) to stick to the latest version, especially if you're using new and/or experimental modules like SPDY. The "stable" branch receives only critical bugfixes, and misses many others. See also: http://nginx.com/blog/nginx-1-6-1-7-released/ wbr, Valentin V. Bartenev From rikske at deds.nl Wed Feb 18 14:42:12 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Wed, 18 Feb 2015 15:42:12 +0100 Subject: 200ms Delay With SPDY - Nginx 1.6.x ?
In-Reply-To: <9084820.MalaW2N874@vbart-laptop> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> Message-ID: <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> > On Wednesday 18 February 2015 00:41:34 rikske at deds.nl wrote: >> Dear, >> >> While searching for Nginx optimize patches: >> >> https://josephscott.org/archives/2014/12/nginx-1-7-8-fixes-200ms-delay-with-spdy/ >> >> Came to my attention. >> >> Can anyone confirm if the '200ms Delay With SPDY' problem also applies >> to >> Nginx 1.6.x? >> > [..] > > It applies to all versions up to 1.7.8. > > Also note, that it's always a good idea (and strongly recommended) to > stick > to the latest version especially if you're using new or/and experimental > modules like SPDY. > > The "stable" branch receives only critical bugfixes, and misses many > other. > > See also: http://nginx.com/blog/nginx-1-6-1-7-released/ > > wbr, Valentin V. Bartenev > Hi Valentin, Thank you very much for the comment. I appreciate it a lot. From a server-production perspective, the 'mainline' branch is not really useful. Not only has the SPDY code changed; everything has changed or can change, and this means a lot of extra time to test everything through and through.
So it is strongly recommended to use stable code as far as possible in any production environment. And about experimental modules like SPDY: yes, you are right, it's experimental. But for web servers you have no choice nowadays. You can't optimize a site to the max without SPDY. Coming back to the 200ms: 200ms of extra, unnecessary delay is deadly for a web server. Especially in a world dominated by internet search engines, where every ms counts. It's like a heart attack. I do not understand why this problem is not classified as a critical bug. It should even be a blocker. Your changelog says: 'now the "tcp_nodelay" directive works with SPDY connections'. This is a major loss of function because the existing 1.6 tcp_nodelay won't work as it should. Can you re-evaluate a patch for 1.6? Thanks, Greetings, Rik Ske From maxim at nginx.com Wed Feb 18 15:43:27 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 18 Feb 2015 18:43:27 +0300 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> Message-ID: <54E4B31F.1020405@nginx.com> On 2/18/15 5:42 PM, rikske at deds.nl wrote: > You can't optimize a site to the max without SPDY. That doesn't match our experience. Also, enabling SPDY doesn't make your site faster automagically. It is not a silver bullet. -- Maxim Konovalov http://nginx.com From nginx-forum at nginx.us Wed Feb 18 15:56:43 2015 From: nginx-forum at nginx.us (ragavd) Date: Wed, 18 Feb 2015 10:56:43 -0500 Subject: Expected Server configuration for 100 users Message-ID: Hi, We are configuring NGINX as a reverse proxy. We are expecting some 100 concurrent users or connections/sessions to be active at any given moment of time. Right now the server is acting as a reverse proxy for only one application.
These concurrent users will connect predominantly between 6:00 AM and 7:00 PM. Based on this, what should be the RAM and CPU configuration for the NGINX server? Also, is there a guideline or a blog entry which we can use to approximate the system requirements of NGINX servers based on the concurrent user load? And what would be the preferred OS for the NGINX server? _______________________ Have a nice day Ragavie D Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256761,256761#msg-256761 From rikske at deds.nl Wed Feb 18 15:57:24 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Wed, 18 Feb 2015 16:57:24 +0100 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: <54E4B31F.1020405@nginx.com> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> <54E4B31F.1020405@nginx.com> Message-ID: <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> > On 2/18/15 5:42 PM, rikske at deds.nl wrote: >> You can't optimize a site to the max without SPDY. > > That's doesn't match our experience. > > Also, enabling SPDY doesn't make your site faster automagically. It > is not a silver bullet. > > -- > Maxim Konovalov > http://nginx.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Hi Maxim, That is like saying the website doesn't work after just installing Nginx. Every situation requires configuration. Greetings, Rik Ske From maxim at nginx.com Wed Feb 18 16:05:34 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 18 Feb 2015 19:05:34 +0300 Subject: 200ms Delay With SPDY - Nginx 1.6.x ?
In-Reply-To: <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> <54E4B31F.1020405@nginx.com> <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> Message-ID: <54E4B84E.90608@nginx.com> On 2/18/15 6:57 PM, rikske at deds.nl wrote: >> On 2/18/15 5:42 PM, rikske at deds.nl wrote: >>> You can't optimize a site to the max without SPDY. >> >> That's doesn't match our experience. >> >> Also, enabling SPDY doesn't make your site faster automagically. It >> is not a silver bullet. >> >> -- >> Maxim Konovalov >> http://nginx.com >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > Hi Maxim, > > That is the same as the website is not working after installing Nginx. > Every situation requires configuration. > I've just tried to point out that assertion "you can't optimize a site to the max without SPDY" is not always true. Btw, one of the case I am talking about is the site in the alexa top100. -- Maxim Konovalov http://nginx.com From rikske at deds.nl Wed Feb 18 16:16:11 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Wed, 18 Feb 2015 17:16:11 +0100 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: <54E4B84E.90608@nginx.com> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> <54E4B31F.1020405@nginx.com> <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> <54E4B84E.90608@nginx.com> Message-ID: > On 2/18/15 6:57 PM, rikske at deds.nl wrote: >>> On 2/18/15 5:42 PM, rikske at deds.nl wrote: >>>> You can't optimize a site to the max without SPDY. >>> >>> That's doesn't match our experience. >>> >>> Also, enabling SPDY doesn't make your site faster automagically. It >>> is not a silver bullet. 
>>> >>> -- >>> Maxim Konovalov >>> http://nginx.com >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> Hi Maxim, >> >> That is the same as the website is not working after installing Nginx. >> Every situation requires configuration. >> > I've just tried to point out that assertion "you can't optimize a > site to the max without SPDY" is not always true. > > Btw, one of the case I am talking about is the site in the alexa top100. > > -- > Maxim Konovalov > http://nginx.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Hi Maxim, Understood. Apart from what SPDY can mean for someone specific: there is a flaw which prevents "tcp_nodelay" in Nginx 1.6 from functioning correctly. How can that be fixed? Gr Rik Ske From maxim at nginx.com Wed Feb 18 16:24:08 2015 From: maxim at nginx.com (Maxim Konovalov) Date: Wed, 18 Feb 2015 19:24:08 +0300 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> <54E4B31F.1020405@nginx.com> <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> <54E4B84E.90608@nginx.com> Message-ID: <54E4BCA8.7040901@nginx.com> [...] > Hi Maxim, > > Understood. Apart from the fact what SPDY can mean for someone specific. > There is a flaw in and which prevents "tcp_nodelay" in Nginx 1.6. to > function correctly. > > How to fix that. > There are several options:

- you can backport the 1.7 diff to 1.6;
- you can use 1.7 in production (this is what we actually recommend to do; e.g. nginx-plus is based on the 1.7 branch currently).

We will discuss merging this code to 1.6 for the next release, but we don't have a schedule for 1.6.3 yet.
-- Maxim Konovalov http://nginx.com From rikske at deds.nl Wed Feb 18 16:35:17 2015 From: rikske at deds.nl (rikske at deds.nl) Date: Wed, 18 Feb 2015 17:35:17 +0100 Subject: 200ms Delay With SPDY - Nginx 1.6.x ? In-Reply-To: <54E4BCA8.7040901@nginx.com> References: <973d1b5e791d7b765230ff51094998df.squirrel@deds.nl> <9084820.MalaW2N874@vbart-laptop> <1939b93a165cad3e75abe5a91a604ace.squirrel@deds.nl> <54E4B31F.1020405@nginx.com> <193911151aafee54da2387b2106ddeb5.squirrel@deds.nl> <54E4B84E.90608@nginx.com> <54E4BCA8.7040901@nginx.com> Message-ID: > [...] >> Hi Maxim, >> >> Understood. Apart from the fact what SPDY can mean for someone specific. >> There is a flaw in and which prevents "tcp_nodelay" in Nginx 1.6. to >> function correctly. >> >> How to fix that. >> > There are several options: > > - you can backport 1.7 diff to 1.6; > > - you can use 1.7 in production (this is what we actually recommend > to do; e.g. nginx-plus based on 1.7 branch currently). > > We will discuss merging this code to 1.6 for the next release but we > don't have a schedule for 1.6.3 yet. > > -- > Maxim Konovalov > http://nginx.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Hi Maxim, Thanks, your help is much appreciated. Regards, Rik Ske From daniel at mostertman.org Wed Feb 18 16:57:13 2015 From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=) Date: Wed, 18 Feb 2015 17:57:13 +0100 Subject: 'add_header' no longer allowed inside 'if', even though document says it is? Message-ID: <54E4C469.1050703@mostertman.org> Hi! I'm currently running 1.7.10 mainline straight from the nginx.org repository. We are hosting an application that needs to be accessible to Internet Explorer users, in addition to all other *normal* browsers. tl;dr: I want to have an add_header inside an if {}. nginx 1.7.10 won't let me.
I'm trying to add the following header, which WORKS JUST FINE in all other browsers but IE: server { ... add_header Content-Security-Policy " default-src 'self' https://*.example.nl https://*.example.net; connect-src 'self' https://*.example.nl https://*.example.net; font-src 'self' data: https://*.example.nl https://*.example.net; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.example.nl https://*.example.net; style-src 'self' 'unsafe-inline'; img-src 'self' data: https://*.example.nl https://*.example.net; frame-src 'self'; object-src 'self' 'unsafe-inline'; "; } In Chrome and Firefox, this works like a charm. But Internet Explorer goes absolutely haywire on it. According to http://content-security-policy.com/, Internet Explorer 10 has limited support for X-Content-Security-Policy, and no IE has support for Content-Security-Policy. In reality, that's not really true. I found that accessing the site with IE11 results in a badly rendered page that could be classified as "not working". Remove the header, and everything works absolutely fine in IE11. If I load the page in IE11 and hit F12, then change it to IE10 compatibility, it throws a *DNS* error. Yes, I kid you not, DNS. Remove the header, and everything works absolutely fine in IE10 compatibility mode. In an attempt to keep the header for all other browsers but MSIE, I wanted to do the following instead: server { ...
if ($http_user_agent ~ MSIE ) { add_header Content-Security-Policy " default-src 'self' https://*.example.nl https://*.example.net; connect-src 'self' https://*.example.nl https://*.example.net; font-src 'self' data: https://*.example.nl https://*.example.net; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.example.nl https://*.example.net; style-src 'self' 'unsafe-inline'; img-src 'self' data: https://*.example.nl https://*.example.net; frame-src 'self'; object-src 'self' 'unsafe-inline'; "; } } According to both http://wiki.nginx.org/IfIsEvil and http://nginx.org/en/docs/http/ngx_http_headers_module.html (see Context of add_header), it should be allowed inside an if. But yet: root:~# nginx -t nginx: [emerg] "add_header" directive is not allowed here in /etc/nginx/sites-enabled/webtv-test:37 nginx: configuration file /etc/nginx/nginx.conf test failed root:~# What am I doing wrong, if anything? And if I can avoid using "if" like that, I'd obviously prefer that. Kind regards, Daniël From daniel at mostertman.org Wed Feb 18 17:02:24 2015 From: daniel at mostertman.org (=?windows-1252?Q?Dani=EBl_Mostertman?=) Date: Wed, 18 Feb 2015 18:02:24 +0100 Subject: 'add_header' no longer allowed inside 'if', even though document says it is? In-Reply-To: <54E4C469.1050703@mostertman.org> References: <54E4C469.1050703@mostertman.org> Message-ID: <54E4C5A0.5060208@mostertman.org> Hi again, And ugh, yet again realising what I did wrong right after hitting "send": Daniël Mostertman schreef op 18-2-2015 om 17:57: > tl;dr: I want to have an add_header inside an if {}. nginx 1.7.10 > won't let me. Because I didn't put the if inside a location. > According to both http://wiki.nginx.org/IfIsEvil and > http://nginx.org/en/docs/http/ngx_http_headers_module.html (see > Context of add_header), it should be allowed inside an if.
> But yet: > > root:~# nginx -t > nginx: [emerg] "add_header" directive is not allowed here in > /etc/nginx/sites-enabled/webtv-test:37 > nginx: configuration file /etc/nginx/nginx.conf test failed > root:~# > > What am I doing wrong, if anything? And if I can avoid using "if" like > that, I'd obviously prefer that. Answering my own question: "add_header" can only be inside an "if" that's inside a "location". I have no "location" for the root, so I figured it could be inside "server". Daniël From vbart at nginx.com Wed Feb 18 17:12:25 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 Feb 2015 20:12:25 +0300 Subject: 'add_header' no longer allowed inside 'if', even though document says it is? In-Reply-To: <54E4C469.1050703@mostertman.org> References: <54E4C469.1050703@mostertman.org> Message-ID: <1576597.3kiH5GG5qY@vbart-workstation> On Wednesday 18 February 2015 17:57:13 Daniël Mostertman wrote: [..] > What am I doing wrong, if anything? And if I can avoid using "if" like > that, I'd obviously prefer that. > You can avoid it by using the map directive and a variable as the add_header value. Like this: map $http_user_agent $csp { ~MSIE ""; default " default-src 'self' https://*.example.nl https://*.example.net; connect-src 'self' https://*.example.nl https://*.example.net; font-src 'self' data: https://*.example.nl https://*.example.net; script-src 'self' 'unsafe-inline' 'unsafe-eval' https://*.example.nl https://*.example.net; style-src 'self' 'unsafe-inline'; img-src 'self' data: https://*.example.nl https://*.example.net; frame-src 'self'; object-src 'self' 'unsafe-inline'; "; } add_header Content-Security-Policy $csp; -- wbr, Valentin V. Bartenev
However, occasional updates are made to the site, exclusively via a POST request. For my simple purposes it would be sufficient to expire the entire cache on any POST request (a few hundred pages cached). How might I trigger this to happen without modifying the backend CGI? I.e. is it possible to make an nginx location which can trigger fastcgi_cache_purge on any POST request? Thanks for any thoughts Ed W From rainer at ultra-secure.de Wed Feb 18 19:01:34 2015 From: rainer at ultra-secure.de (Rainer Duffner) Date: Wed, 18 Feb 2015 20:01:34 +0100 Subject: Expected Server configuration for 100 users In-Reply-To: References: Message-ID: > Am 18.02.2015 um 16:56 schrieb ragavd : > > Hi, > We are configuring the NGINX as a reverse proxy. We are expecting some 100 > concurrent users or connections/sessions to be active at any given moment of > time. Right now the server is acting as a reverse proxy for only one > application. These concurrent users will connect predominantly between 6:00 > AM to 7:00 PM. Based on this what should be the RAM and CPU configuration > for the NGINX server? > What's your hardware? It all depends on a couple of configuration-parameters. http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers If I'm correct, the default would be 8*4k=32k per connection, which would result in a memory usage of 3.2MB with 100 concurrent connections. Of course, nginx itself would also need some memory. But nginx is quite thrifty IME. > Also is there a guideline or a blog entry which we can use to approximate > the system requirements of NGINX servers based on the concurrent user load? I use the above formula as a guideline. But I haven't really had a situation where I was ever coming close to hitting a limit on the hardware or our (very modest) worker_connections default. If we get DDoSed, it's usually so much crap-traffic that we have to route the affected network through a mitigation-service. > And also what will be preferred OS for NGINX server?
I think the Tier 1 platforms are:

- FreeBSD 9+10 AMD64
- CentOS 6+7 AMD64
- Ubuntu 12+14 AMD64

NGINX Plus supports a couple of additional platforms, but I would assume the majority of the installation-base is on any of these three (and then some Debian installs). http://nginx.com/products/technical-specs/ For your use-case, it doesn't really matter what OS you run, as long as it's one of the above (or you know it really well). I think FreeBSD 9+10 have the lowest hardware requirements, even without special tuning. From stl at wiredrive.com Wed Feb 18 19:32:08 2015 From: stl at wiredrive.com (Scott Larson) Date: Wed, 18 Feb 2015 11:32:08 -0800 Subject: Expected Server configuration for 100 users In-Reply-To: References: Message-ID: I can second the fact that FreeBSD + nginx is a rocking combo. We've been running that for years under ever-increasing traffic and it only requires a few basic adjustments to the OS, even fewer in 10 since a lot of system defaults were cranked up for modern times. Our current hardware handling nginx-related duties sports Intel L5520 procs and has been in service since 2009, only now in the process of being replaced due to aging. If systems had a guaranteed infinite life these boxes would still be good for quite some time. If you go the Linux path it's probably a similar situation; nginx is likely not going to be the part of your stack which cracks first.

Scott Larson
Systems Administrator
Wiredrive/LA
310 823 8238 ext. 1106
310 943 2078 fax
www.wiredrive.com
www.twitter.com/wiredrive
www.facebook.com/wiredrive

On Wed, Feb 18, 2015 at 11:01 AM, Rainer Duffner wrote: > > > Am 18.02.2015 um 16:56 schrieb ragavd : > > > > Hi, > > We are configuring the NGINX as a reverse proxy. We are expecting some > 100 > > concurrent users or connections/sessions to be active at any given > moment of > > time. Right now the server is acting as a reverse proxy for only one > > application.
These concurrent users will connect predominantly between > 6:00 > > AM to 7:00 PM. Based on this what should be the RAM and CPU configuration > > for the NGINX server? > > > What's your hardware? > > It all depends on a couple of configuration-parameters. > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_buffers > > If I'm correct, the default would be > 8*4k=32k per connection, which would result in a memory usage of 3.2MB > with 100 concurrent connections. > Of course, nginx itself would also need some memory. > But nginx is quite thrifty IME. > > > Also is there a guideline or a blog entry which we can use to approximate > > the system requirements of NGINX servers based on the concurrent user > load? > > > I use the above formula as a guideline. > > But I haven't really had a situation where I was ever coming close to > hitting a limit on the hardware or our (very modest) worker_connections > default. > If we get DDoSed, it's usually so much crap-traffic that we have to route > the affected network through a mitigation-service. > > > > And also what will be preferred OS for NGINX server? > > > I think the Tier 1 platforms are: > - FreeBSD 9+10 AMD64 > - Cent OS 6+7 AMD64 > - Ubuntu 12+14 AMD64 > > NGINX Plus supports a couple of additional platforms, but I would assume > the majority of the installation-base is on any of these three (and then > some Debian installs). > > http://nginx.com/products/technical-specs/ > > For your use-case, it doesn't really matter what OS you run, as long as > it's one of the above (or you know it really well). > > I think FreeBSD9+10 have the lowest hardware requirements, even without > special tuning. > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Wed Feb 18 21:13:12 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Feb 2015 21:13:12 +0000 Subject: Nginx Upload Progress Module stops functioning once behind reverse-proxy In-Reply-To: <54E3F956.5090207@indietorrent.org> References: <54E24B47.1080004@indietorrent.org> <20150216205208.GM13461@daoine.org> <54E3F956.5090207@indietorrent.org> Message-ID: <20150218211312.GN13461@daoine.org> On Tue, Feb 17, 2015 at 09:30:46PM -0500, Ben Johnson wrote: > On 2/16/2015 3:52 PM, Francis Daly wrote: Hi there, > > ...and draw a picture of what happens when a client sends a 1MB upload > > to nginx, and nginx sends it via proxy_pass to an upstream. That should > > show you why it fails. > Any further explanation that you're willing to offer as to the > fundamental problem will not fall on deaf ears. nginx buffers the entire request before sending anything to upstream. So upstream sees nothing while the client is sending, and then sees everything, fed by the (presumably) fast link between it and nginx. So the only place a progress meter can usefully be is on the initial nginx, which is talking directly with the client. Anything upstream of that will show 0% until it shows 100%, which is not really a "progress" meter. > > Why does there need to be an upload_progress module in the first place, > > instead of letting the upstream server take care of it? > > Because the upstream server (PHP) performs miserably by comparison when > handling massive file uploads (often in excess of 1GB), Nope. At least, that's not the main reason. The upstream server does not *see* the progress, and so cannot report on it. > > Not going to work. At least until you can use an "unbuffered > > upload". Which is not in current nginx. (But is in other things.) > I would rather push this functionality upstream and consolidate all > virtual-hosts, incurring whatever risk of downtime is introduced as a > result, than replace nginx with another piece of software. 
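In config terms, the buffering behaviour described above means the progress-tracking directives of the third-party upload-progress module have to sit on the client-facing nginx, not on the upstream one. A minimal sketch (the zone name, port, and paths are illustrative):

```nginx
http {
    # Shared memory zone where the module keeps per-upload state.
    upload_progress uploads 1m;

    server {
        listen 443 ssl;

        location /upload {
            # Only this tier sees the client's bytes arrive one by one;
            # the upstream receives the request only after nginx has
            # buffered it completely, so it cannot report progress.
            proxy_pass http://127.0.0.1:1337;
            track_uploads uploads 30s;   # per the module docs, the last directive in the location
        }

        location ^~ /progress {
            # Polled by the browser, using the X-Progress-ID it sent
            # along with the upload.
            report_uploads uploads;
        }
    }
}
```

Any nginx further upstream of this tier would show 0% until the buffered request arrives, and then 100%, which is the behaviour described in the thread.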
Search for "nginx unbuffered upload". You can possibly try patching yourself, or wait until stock nginx supports it, or offer suitable encouragement to get stock nginx to support it on your preferred timetable. But until it exists, only the thing talking to the client can offer any kind of progress meter. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed Feb 18 21:21:00 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 18 Feb 2015 21:21:00 +0000 Subject: rewrite rules cms phpwcms not working In-Reply-To: <15a0d03630b2d61c14715c3ae6559507.NginxMailingListEnglish@forum.nginx.org> References: <0bcb435c8eb98912a2d073c38c0d31e9.NginxMailingListEnglish@forum.nginx.org> <15a0d03630b2d61c14715c3ae6559507.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150218212100.GO13461@daoine.org> On Tue, Feb 17, 2015 at 06:26:59PM -0500, dansch8888 wrote: Hi there, > I tried your examples and they work very well. > In the next days I will test it more extensive. Good to hear, thanks. > > > > (track|include|img|template|picture|filearchive|content|robots\.txt|favicon\.ico)($|/) > > > - [L] > > I think that say "anything that matches that pattern should be served > > as a plain file, and not given to the php interpreter". But there are > > also .htaccess files in some of those directories. > > Yes, that's true, but isn't it also that at the apache server this folders > and files are just excluded for rewrite? That sounds correct. I stated it badly. "Do not rewrite, and therefore do not fall through to /index.php, for these requests." They are handled by whatever else the apache config says. > > if ($request_uri ~ /(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.(\d+)\.html) { > > set $bit_of_qs "id=$1,$2,$3,$4,$5,$6"; > > } > > This is comming from some old versions of the CMS I believe, were not always > aliases were set up. > At the moment, when you create a article, the CMS sets an dummy alias by > default. 
> In the "old" times, there were URLs like index.php?id=0.0.0.0.0.0
> Now there are normally only URLs like index.php?dummyalias.

If that part isn't needed, then so much the better. All nginx cares about is: for incoming requests, how should they be handled?

> > location / {
> >     try_files $uri $uri/ @index.php;
> > }

> Is there a need for the $uri/?

The apache config seemed not to do the rewrite (= /index.php fallback) if the request was for something that was recognised as an existing directory. This is the same. I do not know if it is needed in your implementation.

> I had read this is for subfolders only. This CMS needs the rewrite just for
> the root folder and index.php.

It was not clear to me how a request for /dir/file.html should be handled, depending on the existence of "dir" and "file.html". If that kind of request matters in the cms, you should probably test that it does what you want.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From k at datadoghq.com  Thu Feb 19 02:25:01 2015
From: k at datadoghq.com (K Young)
Date: Wed, 18 Feb 2015 21:25:01 -0500
Subject: Connection statuses
Message-ID:

Hello, I've been reading through nginx source to understand the metrics that are reported. I've got an idea of what is happening, but since the flow is asynchronous, I'm not 100% confident, and would love your feedback.

Is the following correct?

- 'accepts' is incremented when a worker gets a new request for a connection from the OS
- 'handled' is incremented when a worker gets a new connection for a request

And then once a connection is opened:

- 'active' is incremented (actually, it is incremented right before 'handled', but will be decremented if the worker doesn't handle the connection request).
- then the connection briefly goes into a waiting state while the request is waiting to be processed
- then the connection goes into a short reading state while request headers are read.
Simultaneously, 'request' is incremented every time a new request header begins to be read. - then the connection goes into a writing state while all work is done by nginx and by upstream components - then if the connection will be kept alive, it goes back into waiting state, which is synonymous with 'idle' - finally when the connection is closed, active is decremented The things I'm least certain of about the 'waiting' state: - Does active always sum up to waiting+reading+writing? - Does each new connection enter a waiting state just before it goes into the reading state? - While waiting for upstream responses, is the connection in writing state or waiting state? - While waiting for new client requests on an open connection, is the connection in a waiting state? Is the attached sketch of the above connection states correct? (underneath "READ" it says "request++" to indicate that this is where the request counter gets incremented). [image: Inline image 1] Thanks very much in advance for any help you can provide, ~K -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_20150218_175749.jpg Type: image/jpeg Size: 121102 bytes Desc: not available URL: From kpariani at zimbra.com Thu Feb 19 03:03:47 2015 From: kpariani at zimbra.com (Kunal Pariani) Date: Wed, 18 Feb 2015 21:03:47 -0600 (CST) Subject: $upstream_addr containing the same ip twice Message-ID: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> Hello, I have just 1 backend server being reverse-proxied through nginx. The access log lists this one request for which the $upstream_addr has the same ip:port twice. Is this a bug ? 
::ffff:10.255.255.248:51947 - - [18/Feb/2015:19:52:43 -0600] "GET / HTTP/1.1" 302 454 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:35.0) Gecko/20100101 Firefox/35.0" "a.b.c.d:e, a.b.c.d:e "

This is how the log_format is defined:

log_format upstream '$remote_addr:$remote_port - $remote_user [$time_local] '
                    '"$request" $status $bytes_sent '
                    '"$http_referer" "$http_user_agent" "$upstream_addr"';

Thanks
-Kunal
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us  Thu Feb 19 06:59:03 2015
From: nginx-forum at nginx.us (macmichael01)
Date: Thu, 19 Feb 2015 01:59:03 -0500
Subject: Second domain config does not redirect www traffic
Message-ID: <7c8be32b98f25cc3f79e86c68ec38041.NginxMailingListEnglish@forum.nginx.org>

Hello, I have nginx installed to serve up two websites. The first one is behind SSL and should redirect everything to https://EXAMPLE1.com. This works perfectly. As for the second website, it's only hosted over http. All www traffic should redirect to non-www, but it fails. Here is my nginx config: http://pastie.org/private/ziromp4jqkbxehk5xjiha

Does conf loading order matter? These are the files as they appear in the nginx conf directory:
sites-enabled/EXAMPLE1.conf
sites-enabled/EXAMPLE2.conf

Could this be a DNS issue? www.EXAMPLE1.com, www.example2.com, example1.com, example2.com all have A records defined. Should I use a CNAME record for the www.* names?

Here is what works and what fails:
WORKS - http://www.EXAMPLE1.com redirects to https://EXAMPLE1.com
WORKS - http://EXAMPLE1.com redirects to https://EXAMPLE1.com
WORKS - https://EXAMPLE1.com
WORKS - http://EXAMPLE2.com
BROKE - http://www.EXAMPLE2.com fails to redirect to http://EXAMPLE2.com. I get a "Welcome to nginx" screen.

Any ideas? Thanks so much for any help that can be provided!
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256780,256780#msg-256780 From francis at daoine.org Thu Feb 19 08:05:23 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 19 Feb 2015 08:05:23 +0000 Subject: Second domain config does not redirect www traffic In-Reply-To: <7c8be32b98f25cc3f79e86c68ec38041.NginxMailingListEnglish@forum.nginx.org> References: <7c8be32b98f25cc3f79e86c68ec38041.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150219080523.GP13461@daoine.org> On Thu, Feb 19, 2015 at 01:59:03AM -0500, macmichael01 wrote: > Here is my nginx config: http://pastie.org/private/ziromp4jqkbxehk5xjiha It is usually better to include the config in the mail; that url is unlikely to be accessible wherever the mail archives are read. But I looked at it anyway. Your config is syntactically correct, but semantically incorrect. You're missing a semicolon on line 31. server { listen 80; server_name www.EXAMPLE2.com return 301 http://EXAMPLE2.com$request_uri; } This defines a port 80 listener with four server_name values and no extra handling for requests. (See also line 6.) > Does conf loading order matter? If you use directives that care about order, yes. If you let nginx use order to determine defaults, yes. Otherwise, no. And current nginx loads "*"-matches alphabetically. > Could this be a DNS issue? If your log file shows that the request is getting to nginx, no. 
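For reference, this is presumably how that block was meant to read, with the semicolon restored (names as quoted from the pastie):

```nginx
server {
    listen 80;
    server_name www.EXAMPLE2.com;                # semicolon restored
    return 301 http://EXAMPLE2.com$request_uri;  # now parsed as a directive again
}
```

Without that semicolon, "return", "301" and the URL are all parsed as additional server_name values, so the block matches www.EXAMPLE2.com but contains no request handling at all.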
f
--
Francis Daly        francis at daoine.org

From nginx-forum at nginx.us  Thu Feb 19 08:36:53 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Thu, 19 Feb 2015 03:36:53 -0500
Subject: [ANN] Windows nginx 1.7.11.1 Gryphon
Message-ID:

13:31 18-2-2015 nginx 1.7.11.1 Gryphon

Based on nginx 1.7.11 (17-2-2015, last changeset 5984:3f568dd68af1) with:
* Documentation repository http://nginx-win.ecsds.eu/download/documentation-pdf/
+ Naxsi WAF v0.53-3 (upgraded 16-2-2015)
* See 'ramdisk_setup v3.4.6.exe' on site, speed up your microcache 500x
* (PHP) xcache and (PHP 5.5+) opcache examples in /conf
+ lua-nginx-module v0.9.14 (upgraded 16-2-2015)
* nginx for Windows is not affected by CVE-2015-0235 (Ghost)
+ nginx-auth-ldap (upgraded 22-1-2015)
+ Source changes back-ported
+ Source changes for add-ons back-ported
+ Changes for nginx_basic: source changes back-ported
* Scheduled release: yes
* Additional specifications: see 'Feature list'

Builds can be found here: http://nginx-win.ecsds.eu/
Follow releases https://twitter.com/nginx4Windows

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256783,256783#msg-256783

From nginx-forum at nginx.us  Thu Feb 19 08:48:20 2015
From: nginx-forum at nginx.us (macmichael01)
Date: Thu, 19 Feb 2015 03:48:20 -0500
Subject: Second domain config does not redirect www traffic
In-Reply-To: <20150219080523.GP13461@daoine.org>
References: <20150219080523.GP13461@daoine.org>
Message-ID:

Thanks! The missing semicolon fixed it.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256780,256784#msg-256784

From nginx-forum at nginx.us  Thu Feb 19 09:49:28 2015
From: nginx-forum at nginx.us (scaarup)
Date: Thu, 19 Feb 2015 04:49:28 -0500
Subject: Logging to syslog
Message-ID: <0cea6cf7b1d5e061188cdb4ba0ccacdf.NginxMailingListEnglish@forum.nginx.org>

Hi all.
I am logging to syslog with the following configuration:

log_format custom '$remote_addr $remote_user '
                  '"$request" $status $body_bytes_sent '
                  '"$http_referer" "$http_user_agent" UPSTREAM: $upstream_addr SSL: $ssl_protocol $ssl_cipher $ssl_session_reused TIME: $request_time';
access_log syslog:server=localhost,facility=local2 custom;
error_log syslog:server=localhost,facility=local1 info;

Access.log entries look like this:
Feb 19 10:39:50 localhost nginx: 192.168.11.18 - "GET /%%% HTTP/1.1" 400 166 "-" "-" UPSTREAM: - SSL: TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 r TIME: 0.008

And error.log entries look like this:
Feb 19 10:39:19 localhost nginx: 2015/02/19 10:39:19 [info] 53270#0: *1032 client sent invalid request while reading client request line, client: 192.168.11.18, server: payment.architrade.com, request: "GET /%%% HTTP/1.1"

As you can see, the error log has two timestamps. How do I get rid of one of them? My rsyslog conf is handling local1 and local2 the same way, so I am thinking that since the error_log directive has no log_format, nginx sends over a timestamp by default.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256786,256786#msg-256786

From ar at xlrs.de  Thu Feb 19 10:25:27 2015
From: ar at xlrs.de (Axel)
Date: Thu, 19 Feb 2015 11:25:27 +0100
Subject: Logging to syslog
In-Reply-To: <0cea6cf7b1d5e061188cdb4ba0ccacdf.NginxMailingListEnglish@forum.nginx.org>
References: <0cea6cf7b1d5e061188cdb4ba0ccacdf.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2373394.CUJ1BdnuUy@lxrs>

Hi,

have you checked that it's not your log server which adds a timestamp itself? I'm not sure, but AFAIR rsyslog adds its own timestamps and you have to use a template to get rid of them.

Regards, Axel

Am Donnerstag, 19. Februar 2015, 04:49:28 schrieb scaarup:
> Hi all.
> I am logging to syslog with the following configuration: > log_format custom '$remote_addr $remote_user ' > '"$request" $status $body_bytes_sent ' > '"$http_referer" "$http_user_agent" UPSTREAM: > $upstream_addr SSL: $ssl_protocol $ssl_cipher $ssl_session_reused TIME: > $request_time'; > access_log syslog:server=localhost,facility=local2 custom; > error_log syslog:server=localhost,facility=local1 info; > Access.log entries looks like this: > Feb 19 10:39:50 localhost nginx: 192.168.11.18 - "GET /%%% HTTP/1.1" 400 166 > "-" "-" UPSTREAM: - SSL: TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 r TIME: 0.008 > And error.log entries looks like this: > Feb 19 10:39:19 localhost nginx: 2015/02/19 10:39:19 [info] 53270#0: *1032 > client sent invalid request while reading client request line, client: > 192.168.11.18, server: payment.architrade.com, request: "GET /%%% HTTP/1.1" > > As you can see, the error log has two timestamps. How do I get rid of the > one? My rsyslog-conf is handling local1 and local2 the same way, so I am > thinking, since error_log directive has no log_format, nginx sends over a > timestamp by default. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,256786,256786#msg-256786 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From vl at nginx.com Thu Feb 19 10:40:01 2015 From: vl at nginx.com (Vladimir Homutov) Date: Thu, 19 Feb 2015 13:40:01 +0300 Subject: Logging to syslog In-Reply-To: <0cea6cf7b1d5e061188cdb4ba0ccacdf.NginxMailingListEnglish@forum.nginx.org> References: <0cea6cf7b1d5e061188cdb4ba0ccacdf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150219104000.GA23279@vlpc.nginx.com> On Thu, Feb 19, 2015 at 04:49:28AM -0500, scaarup wrote: > Hi all. 
> I am logging to syslog with the following configuration:
> log_format custom '$remote_addr $remote_user '
>                   '"$request" $status $body_bytes_sent '
>                   '"$http_referer" "$http_user_agent" UPSTREAM: $upstream_addr SSL: $ssl_protocol $ssl_cipher $ssl_session_reused TIME: $request_time';
> access_log syslog:server=localhost,facility=local2 custom;
> error_log syslog:server=localhost,facility=local1 info;
> Access.log entries look like this:
> Feb 19 10:39:50 localhost nginx: 192.168.11.18 - "GET /%%% HTTP/1.1" 400 166 "-" "-" UPSTREAM: - SSL: TLSv1.2 ECDHE-RSA-AES128-GCM-SHA256 r TIME: 0.008
> And error.log entries look like this:
> Feb 19 10:39:19 localhost nginx: 2015/02/19 10:39:19 [info] 53270#0: *1032 client sent invalid request while reading client request line, client: 192.168.11.18, server: payment.architrade.com, request: "GET /%%% HTTP/1.1"
>
> As you can see, the error log has two timestamps. How do I get rid of one of them? My rsyslog conf is handling local1 and local2 the same way, so I am thinking that since the error_log directive has no log_format, nginx sends over a timestamp by default.

nginx sends the remote server exactly the same message as it would write to disk, and adds a syslog header to it. If you care about the duplicated timestamps, you can configure your syslog server to process incoming messages intelligently and ignore some fields for specific clients.

http://www.rsyslog.com/doc/syslog_parsing.html

From nginx-forum at nginx.us  Thu Feb 19 11:37:22 2015
From: nginx-forum at nginx.us (scaarup)
Date: Thu, 19 Feb 2015 06:37:22 -0500
Subject: Logging to syslog
In-Reply-To: <2373394.CUJ1BdnuUy@lxrs>
References: <2373394.CUJ1BdnuUy@lxrs>
Message-ID: <8629e7d97c94378faa6b22806e084da7.NginxMailingListEnglish@forum.nginx.org>

Yes, rsyslog adds the first timestamp, as I have configured it to. But it does not add the second. So is it a feature or a bug that you can configure nginx to send a timestamp on access_log but not on error_log?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256786,256790#msg-256790 From nginx-forum at nginx.us Thu Feb 19 11:38:30 2015 From: nginx-forum at nginx.us (scaarup) Date: Thu, 19 Feb 2015 06:38:30 -0500 Subject: Logging to syslog In-Reply-To: <8629e7d97c94378faa6b22806e084da7.NginxMailingListEnglish@forum.nginx.org> References: <2373394.CUJ1BdnuUy@lxrs> <8629e7d97c94378faa6b22806e084da7.NginxMailingListEnglish@forum.nginx.org> Message-ID: What I am saying is, that there should be a log_format for error_log as well. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256786,256791#msg-256791 From mdounin at mdounin.ru Thu Feb 19 13:02:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Feb 2015 16:02:20 +0300 Subject: Connection statuses In-Reply-To: References: Message-ID: <20150219130220.GI19012@mdounin.ru> Hello! On Wed, Feb 18, 2015 at 09:25:01PM -0500, K Young wrote: > Hello, I've been reading through nginx source to understand the metrics > that are > reported. I've got an idea of what is happening, but since the flow is > asynchronous, I'm not 100% confident, and would love your feedback. > > Is the following correct? > > - 'accepts' is incremented when a worker gets a new request for a > connection from the OS > - 'handled' is incremented when a worker gets a new connection for a > request > > > And then once a connection is opened: > > - 'active' is incremented (actually, it is incremented right before > 'handled', but will be decremented if the worker doesn't handle the > connection request). > - then the connection briefly goes into a waiting state while the > request is waiting to be processed > - then the connection goes into a short reading state while request > headers are read. Simultaneously, 'request' is incremented every time a new > request header begins to be read. 
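One rsyslog-side workaround, for as long as error_log has no log_format: since the timestamp nginx writes is part of the syslog message body, a template that outputs only %msg% drops rsyslog's own header and leaves a single timestamp. A sketch in legacy rsyslog syntax, assuming the local1 facility from the configuration above (the file path is illustrative):

```
# /etc/rsyslog.d/30-nginx-error.conf (sketch)
# Write only the message body; the timestamp that nginx itself
# embeds in error-log lines then appears exactly once.
$template NginxErrFmt,"%msg%\n"
local1.*    -/var/log/nginx/error.log;NginxErrFmt
```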
> - then the connection goes into a writing state while all work is done > by nginx and by upstream components > - then if the connection will be kept alive, it goes back into waiting > state, which is synonymous with 'idle' > - finally when the connection is closed, active is decremented > > The things I'm least certain of about the 'waiting' state: > > - Does active always sum up to waiting+reading+writing? Generally yes. There are nuances with SPDY where additional connections are emulated for SPDY streams being handled, and waiting+reading+writing may be bigger than active. > - Does each new connection enter a waiting state just before it goes > into the reading state? No. If there are data available right after connect (this usually happens with accept filters / deferred accept), nginx will not mark a newly accepted connection as waiting, and will start reading a request directly. > - While waiting for upstream responses, is the connection in writing > state or waiting state? Writing. > - While waiting for new client requests on an open connection, is the > connection in a waiting state? Yes. > Is the attached sketch of the above connection states correct? (underneath > "READ" it says "request++" to indicate that this is where the request > counter gets incremented). > [image: Inline image 1] Mostly, see above. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Feb 19 13:04:20 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Feb 2015 16:04:20 +0300 Subject: $upstream_addr containing the same ip twice In-Reply-To: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> References: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> Message-ID: <20150219130420.GJ19012@mdounin.ru> Hello! On Wed, Feb 18, 2015 at 09:03:47PM -0600, Kunal Pariani wrote: > Hello, > I have just 1 backend server being reverse-proxied through nginx. The access log lists this one request for which the $upstream_addr has the same ip:port twice. 
Is this a bug ? > > ::ffff:10.255.255.248:51947 - - [18/Feb/2015:19:52:43 -0600] "GET / HTTP/1.1" 302 454 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; rv:35.0) Gecko/20100101 Firefox/35.0" "a.b.c.d:e, a.b.c.d:e " > > This is how the the log_format is defined > log_format upstream '$remote_addr:$remote_port - $remote_user [$time_local] ' > '"$request" $status $bytes_sent ' > '"$http_referer" "$http_user_agent" "$upstream_addr"'; This can legitimately happen, e.g., if you configure an upstream with multiple servers with the same IP address. -- Maxim Dounin http://nginx.org/ From k at datadoghq.com Thu Feb 19 15:12:02 2015 From: k at datadoghq.com (K Young) Date: Thu, 19 Feb 2015 10:12:02 -0500 Subject: Connection statuses In-Reply-To: <20150219130220.GI19012@mdounin.ru> References: <20150219130220.GI19012@mdounin.ru> Message-ID: Thanks Maxim, that's what I needed to know. How common would you say it is to use accept filters / deferred accept on nginx in production? Also to be perfectly certain: connections.idle in Plus is the same as Waiting in Open, right? Is "Active" on the status demo = read+write? On Thu, Feb 19, 2015 at 8:02 AM, Maxim Dounin wrote: > Hello! > > On Wed, Feb 18, 2015 at 09:25:01PM -0500, K Young wrote: > > > Hello, I've been reading through nginx source to understand the metrics > > that > are > > reported. I've got an idea of what is happening, but since the flow is > > asynchronous, I'm not 100% confident, and would love your feedback. > > > > Is the following correct? > > > > - 'accepts' is incremented when a worker gets a new request for a > > connection from the OS > > - 'handled' is incremented when a worker gets a new connection for a > > request > > > > > > And then once a connection is opened: > > > > - 'active' is incremented (actually, it is incremented right before > > 'handled', but will be decremented if the worker doesn't handle the > > connection request). 
> > - then the connection briefly goes into a waiting state while the > > request is waiting to be processed > > - then the connection goes into a short reading state while request > > headers are read. Simultaneously, 'request' is incremented every time > a new > > request header begins to be read. > > - then the connection goes into a writing state while all work is done > > by nginx and by upstream components > > - then if the connection will be kept alive, it goes back into waiting > > state, which is synonymous with 'idle' > > - finally when the connection is closed, active is decremented > > > > The things I'm least certain of about the 'waiting' state: > > > > - Does active always sum up to waiting+reading+writing? > > Generally yes. There are nuances with SPDY where additional > connections are emulated for SPDY streams being handled, and > waiting+reading+writing may be bigger than active. > > > - Does each new connection enter a waiting state just before it goes > > into the reading state? > > No. If there are data available right after connect (this usually > happens with accept filters / deferred accept), nginx will not > mark a newly accepted connection as waiting, and will start > reading a request directly. > > > - While waiting for upstream responses, is the connection in writing > > state or waiting state? > > Writing. > > > - While waiting for new client requests on an open connection, is the > > connection in a waiting state? > > Yes. > > > Is the attached sketch of the above connection states correct? > (underneath > > "READ" it says "request++" to indicate that this is where the request > > counter gets incremented). > > [image: Inline image 1] > > Mostly, see above. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
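The counters under discussion are the ones exposed by the stub_status module; a minimal location for checking the numbers locally might look like this (the path and allow rules are arbitrary choices):

```nginx
location = /basic_status {
    stub_status on;     # bare "stub_status;" also works on nginx >= 1.7.5
    allow 127.0.0.1;    # restrict the counters to local checks
    deny all;
}
```

Its output reports Active connections, the accepts/handled/requests counters, and Reading/Writing/Waiting; per the answers above, Active generally equals Reading + Writing + Waiting, with SPDY as the noted exception.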
URL: From nginx-forum at nginx.us Thu Feb 19 15:27:18 2015 From: nginx-forum at nginx.us (Guest13778) Date: Thu, 19 Feb 2015 10:27:18 -0500 Subject: Using Nginx as proxy mail server Message-ID: <1460dfa596d7cccd3716d2f328c446f6.NginxMailingListEnglish@forum.nginx.org> Hello! Would it be possible to set up a Nginx server that will proxy emails to a cPanel server? For example I set up a simple VPS server with IP 10.0.0.1 and then I set my domain's MX record to mail.mydomain.com which points 10.0.0.1 so that tiny VPS server will send all emails to my cPanel server. I know there is mail module for Nginx, but I don't understand it very well. Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256800,256800#msg-256800 From vbart at nginx.com Thu Feb 19 15:34:32 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 19 Feb 2015 18:34:32 +0300 Subject: Connection statuses In-Reply-To: References: <20150219130220.GI19012@mdounin.ru> Message-ID: <2380692.JSWjn6Gtfu@vbart-workstation> On Thursday 19 February 2015 10:12:02 K Young wrote: > Thanks Maxim, that's what I needed to know. > > How common would you say it is to use accept filters / deferred accept on > nginx in production? > > Also to be perfectly certain: connections.idle > in Plus is > the same as Waiting > in > Open, right? Yes, right. > > Is "Active" on the status demo = > read+write? No. It's "active" minus "idle" (i.e. waiting). "Read + write" can be bigger than the number of connections in case of SPDY. I prefer calling them as the requests metrics despite of the stub status point of view. This sum actually is requests.current in the status module. http://demo.nginx.com/status/requests/current wbr, Valentin V. 
Bartenev From kunalvjti at gmail.com Thu Feb 19 16:49:33 2015 From: kunalvjti at gmail.com (Kunal Pariani) Date: Thu, 19 Feb 2015 08:49:33 -0800 Subject: $upstream_addr containing the same ip twice In-Reply-To: <20150219130420.GJ19012@mdounin.ru> References: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> <20150219130420.GJ19012@mdounin.ru> Message-ID: I have just one server configured with a single Ip address but I still see this. Thanks -Kunal On Feb 19, 2015 5:05 AM, "Maxim Dounin" wrote: > Hello! > > On Wed, Feb 18, 2015 at 09:03:47PM -0600, Kunal Pariani wrote: > > > Hello, > > I have just 1 backend server being reverse-proxied through nginx. The > access log lists this one request for which the $upstream_addr has the same > ip:port twice. Is this a bug ? > > > > ::ffff:10.255.255.248:51947 - - [18/Feb/2015:19:52:43 -0600] "GET / > HTTP/1.1" 302 454 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.10; > rv:35.0) Gecko/20100101 Firefox/35.0" "a.b.c.d:e, a.b.c.d:e " > > > > This is how the the log_format is defined > > log_format upstream '$remote_addr:$remote_port - $remote_user > [$time_local] ' > > '"$request" $status $bytes_sent ' > > '"$http_referer" "$http_user_agent" "$upstream_addr"'; > > This can legitimately happen, e.g., if you configure an upstream > with multiple servers with the same IP address. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Feb 19 17:23:16 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Feb 2015 20:23:16 +0300 Subject: $upstream_addr containing the same ip twice In-Reply-To: References: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> <20150219130420.GJ19012@mdounin.ru> Message-ID: <20150219172315.GL19012@mdounin.ru> Hello! 
On Thu, Feb 19, 2015 at 08:49:33AM -0800, Kunal Pariani wrote: > I have just one server configured with a single Ip address but I still see > this. The example case mentioned isn't the only case when this can happen legitimately. -- Maxim Dounin http://nginx.org/ From ar at xlrs.de Thu Feb 19 18:05:09 2015 From: ar at xlrs.de (Axel) Date: Thu, 19 Feb 2015 19:05:09 +0100 Subject: Using Nginx as proxy mail server In-Reply-To: <1460dfa596d7cccd3716d2f328c446f6.NginxMailingListEnglish@forum.nginx.org> References: <1460dfa596d7cccd3716d2f328c446f6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1657197.07f4eqkbBY@lxrs> Hello, why do you want to use nginx for that? I'd suggest to use postfix with a minimal configuration. That's what postfix is designed to do and there are some howtos. regards, Axel Am Donnerstag, 19. Februar 2015, 10:27:18 schrieb Guest13778: > Hello! > > Would it be possible to set up a Nginx server that will proxy emails to a > cPanel server? > > For example I set up a simple VPS server with IP 10.0.0.1 and then I set my > domain's MX record to mail.mydomain.com which points 10.0.0.1 so that tiny > VPS server will send all emails to my cPanel server. > > I know there is mail module for Nginx, but I don't understand it very well. 
> > Thanks
> > Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,256800,256800#msg-256800
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From kpariani at zimbra.com  Thu Feb 19 18:11:58 2015
From: kpariani at zimbra.com (Kunal Pariani)
Date: Thu, 19 Feb 2015 12:11:58 -0600 (CST)
Subject: $upstream_addr containing the same ip twice
In-Reply-To: <20150219172315.GL19012@mdounin.ru>
References: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> <20150219130420.GJ19012@mdounin.ru> <20150219172315.GL19012@mdounin.ru>
Message-ID: <542203568.676054.1424369518057.JavaMail.zimbra@zimbra.com>

Can you please tell me other such cases where this could happen? I can't think of any in my scenario, at least.

Thanks
-Kunal

----- Original Message -----
From: "Maxim Dounin"
To: nginx at nginx.org
Sent: Thursday, February 19, 2015 9:23:16 AM
Subject: Re: $upstream_addr containing the same ip twice

Hello!

On Thu, Feb 19, 2015 at 08:49:33AM -0800, Kunal Pariani wrote:

> I have just one server configured with a single IP address but I still see
> this.

The example case mentioned isn't the only case when this can happen legitimately.

--
Maxim Dounin
http://nginx.org/

_______________________________________________
nginx mailing list
nginx at nginx.org
http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us  Thu Feb 19 18:30:13 2015
From: nginx-forum at nginx.us (Guest13778)
Date: Thu, 19 Feb 2015 13:30:13 -0500
Subject: Using Nginx as proxy mail server
In-Reply-To: <1657197.07f4eqkbBY@lxrs>
References: <1657197.07f4eqkbBY@lxrs>
Message-ID: <241e958c137102495ffba4fcae849608.NginxMailingListEnglish@forum.nginx.org>

Hey! That's a good point. Yes, I read about postfix too, but I wasn't very sure; thank you for that idea.
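A minimal postfix relay along the lines Axel suggested might look like this; it is a sketch only, and 192.0.2.10 is a placeholder for the cPanel server's address, which was not given in the thread:

```
# /etc/postfix/main.cf (sketch)
myhostname     = mail.mydomain.com
relay_domains  = mydomain.com
transport_maps = hash:/etc/postfix/transport

# /etc/postfix/transport -- route the domain's mail to the cPanel box
mydomain.com    smtp:[192.0.2.10]
```

After editing, run `postmap /etc/postfix/transport` and reload postfix so the lookup table is rebuilt.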
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256800,256812#msg-256812

From mdounin at mdounin.ru  Thu Feb 19 18:43:22 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 19 Feb 2015 21:43:22 +0300
Subject: $upstream_addr containing the same ip twice
In-Reply-To: <542203568.676054.1424369518057.JavaMail.zimbra@zimbra.com>
References: <1392014776.376714.1424315027632.JavaMail.zimbra@zimbra.com> <20150219130420.GJ19012@mdounin.ru> <20150219172315.GL19012@mdounin.ru> <542203568.676054.1424369518057.JavaMail.zimbra@zimbra.com>
Message-ID: <20150219184322.GM19012@mdounin.ru>

Hello!

On Thu, Feb 19, 2015 at 12:11:58PM -0600, Kunal Pariani wrote:

> Can you please tell me other such cases where this could happen?

E.g., this can also happen when using an upstream{} with a keepalive connections cache configured.

> Can't think of any in my scenario, at least.

It's only you who knows your scenario. If you think this is not legitimate in your case, you may consider providing the information outlined here:

http://wiki.nginx.org/Debugging#Asking_for_help

--
Maxim Dounin
http://nginx.org/

From matthew.phelan at gawker.com  Thu Feb 19 18:54:48 2015
From: matthew.phelan at gawker.com (Matthew Phelan)
Date: Thu, 19 Feb 2015 13:54:48 -0500
Subject: Reporter looking to talk about the Silk Road Case / Special Agent Chris Tarbell
In-Reply-To:
References:
Message-ID:

Hey, all.

Firstly, I want to apologize if anyone finds me trawling for expert opinions on this list in any way irritating.

Secondly, I am still hoping to find someone with thorough knowledge of nginx who might be able to speak to this debate about the Icelandic server in the Silk Road trial. Just keeping this thread alive, in case just such a someone turns up.

Warm Regards,
Sincerely,
Matthew

On Fri, Feb 13, 2015 at 4:21 PM, Matthew Phelan wrote:
> Thanks for the interest, B.R.
> > > > *---The only way to understand how the backend server behaves is to see > its whole configuration, namely 'Exhibit 6' which I cannot seem to find. * > /* Do you have a direct link to it?* > > Sadly, no. Here , you will find a torrent > to "all the evidentiary exhibits" introduced during the trial of Ross > Ulbricht . Exhibit 6 should be in that torrent > somewhere. > > > *It would also be interesting to know where the agent attempted to connect > from. If he already had access to the front-end server through > comprimission, he could then initiate connections from there successfully.* > > *Is it said he managed to connect to that backend directly from outside > the infrastructure?* > I may be wrong, but my recollection is that "Yes" it has been said that > Tarbell managed to connect from outside the infrastructure. This is perhaps > why certain commentators have found the Tarbell declaration implausible. > > --- > > Best, > Matthew > > On Fri, Feb 13, 2015 at 4:05 PM, B.R. wrote: > >> Partial information = partial answer. >> >> I do not know the case so maybe questions I will ask have obvious answers. >> >> The only way to understand how the backend server behaves is to see its >> whole configuration, namely 'Exhibit 6' which I cannot seem to find. >> Do you have a direct link to it? >> >> It would also be interesting to know where the agent attempted to connect >> *from*. If he already had access to the front-end server through >> comprimission, he could then initiate connections from there successfully. >> Is it said he managed to connect to that backend directly from outside >> the infrastructure? That looks improbable to me since I consider people >> behind such activities hiding on Tor network know what they are doing and >> are most probably paranoid. >> --- >> *B. R.* >> >> On Fri, Feb 13, 2015 at 8:34 PM, Matthew Phelan < >> matthew.phelan at gawker.com> wrote: >> >>> Hey all, esteemed members of this Nginx mailing list. 
>>> >>> >>> I'm a freelance reporter (former Onion headline writer and former >>> chemical engineer) trying to gather some kind of technical consensus on a >>> part of the Silk Road pretrial that seems to have become mired in needless >>> ambiguity. Specifically, the prosecution's explanation for how they were >>> able to locate the Silk Road's Icelandic server IP address. >>> >>> You may have seen Australian hacker Nik Cubrilovic's long piece >>> on >>> how it, at least, appears that the government has submitted a deeply >>> implausible scenario for how they came to locate the Silk Road server. Or Bruce >>> Schneier's comments >>> . >>> Or someone else's. (The court records are hyperlinked in the article, but >>> they can be found here >>> >>> and here >>> , >>> if you'd rather peruse them without Nik's logic prejudicing your own >>> opinion. In addition, here >>> 's >>> the opinion of defendant Ross Ulbricht's lawyer Josh Horowitz, himself a >>> technical expert in this field, wherein he echoes Nik Cubrilovic's critical >>> interpretation of the state's discovery disclosures.) >>> >>> I'm hoping that your collective area of expertise in Nginx might allow >>> some of you to comment on this portion of the case, ideally on the record, >>> for an article I'm working on. >>> >>> My goal is to amass many expert opinions on this. It seems like a very >>> open and shut case that beat reporters covering it last October gave a >>> little too much "He said. She said."-style false equivalency. >>> >>> I know this is a cold call. PLEASED TO MEET YOU! >>> >>> *Here, below, is the main question, I believe:* >>> >>> This portion of the defense's expert criticism >>> >>> of the prosecution's testimony from former SA Chris Tarbell >>> >>> (at least) appears the most clear cut and definitive: >>> >>> 7.
Without identification by the Government, it was impossible to >>> pinpoint the 19 lines in the access logs showing the date and time of law >>> enforcement access to the .49 server. >>> >>> 23. The "live-ssl" configuration controls access to the market data >>> contained on the .49 server. This is evident from the configuration line: >>> root /var/www/market/public >>> which tells the Nginx web server that the folder "public" contains the >>> website content to load when visitors access the site. >>> >>> 24. The critical configuration lines from the live-ssl file are: >>> allow 127.0.0.1; >>> allow 62.75.246.20; >>> deny all; >>> These lines tell the web server to allow access from IP addresses >>> 127.0.0.1 and 65.75.246.20, and to deny all other IP addresses from >>> connecting to the web server. IP address 127.0.0.1 is commonly referred to >>> in computer networking as "localhost" i.e., the machine itself, which would >>> allow the server to connect to itself. 65.75.246.20, as discussed ante, is >>> the IP address for the front-end server, which must be permitted to access >>> the back-end server. The "deny all" line tells the web server to deny >>> connections from any IP address for which there is no specific exception >>> provided. >>> >>> 25. Based on this configuration, it would have been impossible for >>> Special Agent Tarbell to access the portion of the .49 server containing >>> the Silk Road market data, including a portion of the login page, simply by >>> entering the IP address of the server in his browser. >>> >>> Does it seem like the defense is making a reasonably sound argument >>> here? Are there any glaring holes in their reasoning to you? Etc.? (I would >>> gladly rather have an answer to this that is filled with qualifiers and >>> hedges than no answer at all, and as such, hereby promise that I will >>> felicitously include all those qualifiers and hedges when quoting.)
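For readers following the technical detail, the access-control behaviour described in paragraphs 23-24 can be sketched as an nginx configuration (a reconstruction from the quoted lines only, not the actual "Exhibit 6" file; the listen directive and everything not quoted above are assumptions):

```nginx
# Hypothetical sketch of the "live-ssl" access control described above.
server {
    listen 443 ssl;                  # assumed; the real listen line is not quoted
    root /var/www/market/public;     # per paragraph 23

    allow 127.0.0.1;       # the machine itself ("localhost")
    allow 62.75.246.20;    # the front-end server
    deny  all;             # every other address is refused
}
```

With rules like these, nginx checks the allow/deny directives in order and applies the first match, so a request from any unlisted address is answered with 403 Forbidden rather than with the site content.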
>>> >>> Any other observations on this pre-trial debate would also be welcome. >>> >>> Thanks for your time, very, very, sincerely. >>> >>> Best Regards, >>> Matthew >>> -- >>> >>> *Matthew D. Phelan* >>> "editorial contractor" >>> >>> *Black Bag - Gawker * >>> @CBMDP // twitter >>> 917.859.1266 // cellular telephone >>> matthew.phelan at gawker.com // PGP Public Key >>> // >>> email >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 19 19:24:53 2015 From: nginx-forum at nginx.us (noahlh) Date: Thu, 19 Feb 2015 14:24:53 -0500 Subject: Request for thoughts / feedback: Guide on Nginx Monitoring In-Reply-To: <45aaf3d07fe780a1222edb5eb529c81f.NginxMailingListEnglish@forum.nginx.org> References: <45aaf3d07fe780a1222edb5eb529c81f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2771cfb12ca757956edea79219b23fa5.NginxMailingListEnglish@forum.nginx.org> Thanks for the kind words mex -- also thank you for the nagios plugin info -- adding that to my monitoring toolbelt. Some more specific questions for you and others re: the guide I wrote: - Are there any "essential" metrics I'm missing? I listed the 14 that I think are the most critical. - Are there any Nginx features related to monitoring that I haven't covered (and should cover)? - I want to make sure I also have good coverage for monitoring potential "breaking" issues with Nginx -- things that can overflow / top out. I've got open file handles as one of them, and the standard server-related stuff (cpu, memory, disk, bandwidth). Anything Nginx-specific I'm missing that is critical and could cause the application to sputter?
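The metrics under discussion largely come from nginx's stub_status module; a minimal way to expose those counters (assuming nginx was built with --with-http_stub_status_module) is:

```nginx
# Minimal status endpoint, reachable from localhost only.
server {
    listen 127.0.0.1:8081;

    location /nginx_status {
        stub_status on;    # plain "stub_status;" also works in newer versions
        allow 127.0.0.1;
        deny  all;
    }
}
```

The output reports active connections, the cumulative accepts/handled/requests counters, and the Reading/Writing/Waiting gauges; a growing gap between accepts and handled is itself worth watching, since it counts connections nginx could not process.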
Many continued thanks. --Noah Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256634,256817#msg-256817 From k at datadoghq.com Thu Feb 19 20:46:46 2015 From: k at datadoghq.com (K Young) Date: Thu, 19 Feb 2015 15:46:46 -0500 Subject: Connection statuses In-Reply-To: <2380692.JSWjn6Gtfu@vbart-workstation> References: <20150219130220.GI19012@mdounin.ru> <2380692.JSWjn6Gtfu@vbart-workstation> Message-ID: In nginx Open, it looks like when a connection can't be assigned to a request, it is dropped and not enqueued for later processing. Is this correct? If so is the number of dropped connections equal to the difference between active and handled? On Thu, Feb 19, 2015 at 10:34 AM, Valentin V. Bartenev wrote: > On Thursday 19 February 2015 10:12:02 K Young wrote: > > Thanks Maxim, that's what I needed to know. > > > > How common would you say it is to use accept filters / deferred accept on > > nginx in production? > > > > Also to be perfectly certain: connections.idle > > in > Plus is > > the same as Waiting > > in > > Open, right? > > Yes, right. > > > > > > Is "Active" on the status demo = > > read+write? > > No. It's "active" minus "idle" (i.e. waiting). > > "Read + write" can be bigger than the number of connections in case of > SPDY. > I prefer calling them as the requests metrics despite of the stub status > point of view. This sum actually is requests.current in the status module. > > http://demo.nginx.com/status/requests/current > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu Feb 19 20:51:25 2015 From: nginx-forum at nginx.us (giumbai) Date: Thu, 19 Feb 2015 15:51:25 -0500 Subject: Request entry too large! Message-ID: I have Owncloud installed behind Nginx reverse proxy. 
The configuration is: upstream hcloud { server 10.1.2.10:80 fail_timeout=0; } #server { # listen 80; # return 301 https://$host$request_uri; #} server { listen 443; server_name hcloud.domain.tld; ssl on; ssl_certificate /etc/nginx/ssl/nginx.crt; ssl_certificate_key /etc/nginx/ssl/nginx.key; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; proxy_redirect http:// https://; proxy_pass http://hcloud; } } server { listen 80; server_name hcloud.domain.tld; location / { proxy_pass http://10.1.2.10:80; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 150; proxy_send_timeout 100; proxy_read_timeout 100; proxy_buffers 4 2048k; client_max_body_size 2048m; client_body_buffer_size 1024m; } } The error I get is "request entry too large". I read some things on the Internet but it seems that I can't make it work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256821,256821#msg-256821 From anlu at mixpanel.com Fri Feb 20 03:37:26 2015 From: anlu at mixpanel.com (Anlu Wang) Date: Thu, 19 Feb 2015 19:37:26 -0800 Subject: AES-NI support with nginx Message-ID: Hi, Is AES-NI enabled in nginx 1.7.10 by default, through the openssl evp library (https://www.openssl.org/docs/crypto/evp.html), or does it have to be enabled somehow? ssl_engine aesni; gives me an error upon testing the configuration. Thanks, Anlu -------------- next part -------------- An HTML attachment was scrubbed... URL: From kurt at x64architecture.com Fri Feb 20 04:59:53 2015 From: kurt at x64architecture.com (Kurt Cancemi) Date: Thu, 19 Feb 2015 23:59:53 -0500 Subject: AES-NI support with nginx In-Reply-To: References: Message-ID: AES-NI is already on; there is no configuration option, and it will work as long as your CPU supports it.
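On the earlier "request entry too large" report (the message nginx actually emits is "413 Request Entity Too Large"): in the configuration shown above, client_max_body_size is set only in the plain-HTTP server block, while the commented-out redirect suggests uploads arrive over HTTPS. An untested sketch of the likely fix, assuming uploads really do go through the port-443 server:

```nginx
server {
    listen 443;
    server_name hcloud.domain.tld;
    ssl on;

    # The body-size limit must be set in the server (or location) that
    # actually receives the upload; nginx's default is only 1m.
    client_max_body_size    2048m;
    client_body_buffer_size 1m;   # a buffer size, not a limit; 1024m is far larger than needed

    location / {
        proxy_pass http://hcloud;
    }
}
```

The directive can also be placed at http{} level to cover both server blocks at once.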
--- Kurt Cancemi https://www.x64architecture.com On Thu, Feb 19, 2015 at 10:37 PM, Anlu Wang wrote: > Hi, > > Is AES-NI is enabled in nginx 1.7.10 by default, through the openssl evp > library (https://www.openssl.org/docs/crypto/evp.html), or does it have > to be enabled somehow? > > ssl_engine aesni; gives me an error upon testing the configuration. > > Thanks, > Anlu > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Feb 20 12:37:26 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 20 Feb 2015 15:37:26 +0300 Subject: Connection statuses In-Reply-To: References: <2380692.JSWjn6Gtfu@vbart-workstation> Message-ID: <2020421.qgaKaG3EpB@vbart-workstation> On Thursday 19 February 2015 15:46:46 K Young wrote: > In nginx Open, it looks like when a connection can't be assigned to a > request, it is dropped and not enqueued for later processing. Is this > correct? I don't understand what you mean by "a connection can't be assigned to a request". > If so is the number of dropped connections equal to the difference > between active and handled? No, it's the difference between accepts and handled connections. This shows how many connections nginx couldn't process due to limitations (e.g. not enough memory or the worker_connections limit was reached). wbr, Valentin V. Bartenev > > On Thu, Feb 19, 2015 at 10:34 AM, Valentin V. Bartenev > wrote: > > > On Thursday 19 February 2015 10:12:02 K Young wrote: > > > Thanks Maxim, that's what I needed to know. > > > > > > How common would you say it is to use accept filters / deferred accept on > > > nginx in production? > > > > > > Also to be perfectly certain: connections.idle > > > in > > Plus is > > > the same as Waiting > > > in > > > Open, right? > > > > Yes, right. 
> > > > > > > Is "Active" on the status demo = > > > read+write? > > > > No. It's "active" minus "idle" (i.e. waiting). > > > > "Read + write" can be bigger than the number of connections in case of > > SPDY. > > I prefer calling them as the requests metrics despite of the stub status > > point of view. This sum actually is requests.current in the status module. > > > > http://demo.nginx.com/status/requests/current > > > > wbr, Valentin V. Bartenev > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > From Gurumurthy.Sundar at telventdtn.com Fri Feb 20 19:46:37 2015 From: Gurumurthy.Sundar at telventdtn.com (Gurumurthy Sundar) Date: Fri, 20 Feb 2015 19:46:37 +0000 Subject: NGINX to Apache for Digest Authentication Message-ID: <64FCB5CD9DDEFE4AB99F1D28E132677A0BE8A93E@CORPEXPROD02.dtn.com> I have an NGINX in front of Apache which has both Basic and Digest authentication turned on. I'd like a setup where a user connects to NGINX (using Basic or Digest) and NGINX simply proxies the request to the Apache where the actual authentication happens. I have the Basic case working but not the Digest. Here's what the config for Basic looks like: location /basic { proxy_set_header x-user $http_x_user; proxy_pass http://my.apache.server; # where authentication happens proxy_set_header X-Original-URI $request_uri; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; } Could somebody help me out on how to accomplish the Digest case? NOTICE: This email message is for the sole use of the intended recipient(s) and may contain confidential and privileged information. Any unauthorized use, disclosure or distribution is prohibited. If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From hobson42 at gmail.com Fri Feb 20 20:25:13 2015 From: hobson42 at gmail.com (Ian) Date: Fri, 20 Feb 2015 20:25:13 +0000 Subject: Help required with configuration Message-ID: <54E79829.4@gmail.com> Hi, I thought I knew how to configure an Nginx site, but I've come up with a situation that has defeated me. Ideas needed. We are attempting to move an old site to a new one. I can't load nginx (or anything much) on to the old server because the libraries are so old. For similar reasons we can't move the old site to the new server. Old site was written with front page - long gone. So, I thought to copy the old files to the new server, and test if they exist, and we delete these files as we create the new pages. So the logic is, if the old file exists, pass the request over to the old server. If the old file doesn't exist, we serve the new site. First is a test to see if the file is static. If so, Nginx serves it; if not, it is passed using fcgi to Flask. I have the following config (domain names changed) server { server_name www.example.com example.com; listen 80; # first test if old file exists. location / { root /var/www/oldsite/htdocs; # copy of old site try_files $uri uri/ @newserver; # don't serve locally, but proxy pass to old website proxy_pass http://wwx.example:8080/$uri; # old server on new sub-domain } location @newserver { # old file does not exist, serve from new root /var/www/newsite/htdocs; # serve static files with nginx location ~* \.(gif|jpg|jpeg|css|js)$ { # add other static types here try_files $uri =404; } location / { # anything NOT static (.htm) is passed to flask include uwsgi_params; uwsgi_pass unix:/tmp/uwsgi.sock; } } } However nginx complains that I can't put location inside location @newserver. So how can the site be configured? Thanks Ian -- Ian Hobson Mid Auchentiber, Auchentiber, Kilwinning, North Ayrshire KA13 7RR Tel: 0203 287 1392 Preparing eBooks for Kindle and ePub formats to give the best reader experience.
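Since nginx rejects locations nested inside a named location, one possible restructuring of Ian's config (untested; it serves the copied old files directly instead of proxying them, and chains named locations via try_files):

```nginx
server {
    listen 80;
    server_name www.example.com example.com;

    # 1) If the old file still exists in the local copy, serve it.
    location / {
        root /var/www/oldsite/htdocs;
        try_files $uri $uri/ @newstatic;
    }

    # 2) New site: serve the file statically if it exists...
    location @newstatic {
        root /var/www/newsite/htdocs;
        try_files $uri @flask;
    }

    # 3) ...otherwise hand the request to Flask over uwsgi.
    location @flask {
        include uwsgi_params;
        uwsgi_pass unix:/tmp/uwsgi.sock;
    }
}
```

Incidentally, the original snippet has "try_files $uri uri/ @newserver;", where the second argument is presumably meant to be $uri/. And if the request really must be proxied to the old server rather than served from the copy, a different existence test would be needed (e.g. an if (-f ...) check), since try_files serves a found file itself rather than falling through to proxy_pass.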
From benfell at mail.parts-unknown.org Sat Feb 21 03:59:25 2015 From: benfell at mail.parts-unknown.org (David Benfell) Date: Fri, 20 Feb 2015 19:59:25 -0800 Subject: pretty URLs in subfolders Message-ID: <20150221035925.GA5326@home.parts-unknown.org> Hi, I'm very new to nginx and I'm having serious problems. I have a number of applications on parts-unknown.org. Drupal, Wordpress, mediawiki (the old wiki), dokuwiki (the new wiki), an owncloud (v7), and a stikked (pastebin) instance. Pretty URLs aren't working at all and the guidance I've found on the web isn't helping. I'm just guessing that the non-pretty URLs don't all use the same PHP parameter, but I don't even know how to find these parameters. I have php-fpm working (as near as I can tell). It appears to work for a GNU Social instance I have, whose example configuration includes a pretty-URL fix similar to what I've seen suggested for everything else. I'm also completely mystified by the problem of nesting location directives. As near as I can tell nginx just doesn't let you do this. I don't understand what it's attempting to stop me from doing and again, the guidance I've found on the web hasn't helped. A lot of the example configurations I've found, adapted to my context, seem to require nested locations. To me a location within a location should be like a subfolder within a folder. nginx doesn't appear willing to let me do that. So I tried moving the nested location directives out and fully qualifying them. nginx doesn't complain about this, but it also isn't handling the pretty URLs right. Thanks! -- David Benfell See https://parts-unknown.org/node/2 if you don't understand the attachment. -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From benfell at mail.parts-unknown.org Sat Feb 21 06:31:26 2015 From: benfell at mail.parts-unknown.org (David Benfell) Date: Fri, 20 Feb 2015 22:31:26 -0800 Subject: How to $POST variables to php-fpm with nginx Message-ID: <20150221063126.GB28670@home.parts-unknown.org> Hi all, In one of my sites, sks.disunitedstates.com, I have a single index page--index.html--which has a number of forms, one of which calls a php script named server-lookup.php. It means to pass a $POST variable to this script but instead, I get "No input file specified." Obviously, the variable isn't getting passed. I saw the answer at http://serverfault.com/questions/231578/nginx-php-fpm-where-are-my-get-params It doesn't work with this situation; I still get the same error. Thanks! -- David Benfell See https://parts-unknown.org/node/2 if you don't understand the attachment. -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: application/pgp-signature Size: 819 bytes Desc: not available URL: From francis at daoine.org Sat Feb 21 11:04:54 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 21 Feb 2015 11:04:54 +0000 Subject: pretty URLs in subfolders In-Reply-To: <20150221035925.GA5326@home.parts-unknown.org> References: <20150221035925.GA5326@home.parts-unknown.org> Message-ID: <20150221110454.GS13461@daoine.org> On Fri, Feb 20, 2015 at 07:59:25PM -0800, David Benfell wrote: Hi there, > I'm also completely mystified by the problem of nesting location > directives. As near as I can tell nginx just doesn't let you do this. > I don't understand what it's attempting to stop me from doing and > again, the guidance I've found on the web hasn't helped. The documentation for the "location" directive is at http://nginx.org/r/location It describes where nesting does not work. 
Do you have an example where nesting is documented to work, but it does not work for you? There is other useful information linked from http://nginx.org/en/docs/ Can you show one example config that does not do what you want it to do when you make a specific request? f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Feb 22 08:30:58 2015 From: nginx-forum at nginx.us (codecowboy) Date: Sun, 22 Feb 2015 03:30:58 -0500 Subject: Redirecting existing links to a sub-folder to a new subdomain Message-ID: <0330befa93ea177ff215c6bccef48f81.NginxMailingListEnglish@forum.nginx.org> If I want to transform links from http://exampledomain.com/sub-folder/some-article to http://subdomain.exampledomain.com/some-article where subdomain has the same name as the subfolder, do I need a rewrite or a redirect? I *think* I need a redirect but want to be sure I am doing it the recommended way. I tried "rewrite /sub-folder-name/(.*) $scheme://sub-folder-name.exampledomain.com/$1 permanent;" But this didn't seem to have any effect after restarting nginx. Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256869,256869#msg-256869 From francis at daoine.org Sun Feb 22 15:47:36 2015 From: francis at daoine.org (Francis Daly) Date: Sun, 22 Feb 2015 15:47:36 +0000 Subject: Redirecting existing links to a sub-folder to a new subdomain In-Reply-To: <0330befa93ea177ff215c6bccef48f81.NginxMailingListEnglish@forum.nginx.org> References: <0330befa93ea177ff215c6bccef48f81.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150222154736.GT13461@daoine.org> On Sun, Feb 22, 2015 at 03:30:58AM -0500, codecowboy wrote: Hi there, > If I want to transform links from > http://exampledomain.com/sub-folder/some-article to > http://subdomain.exampledomain.com/some-article where subdomain has the same > name as the subfolder, do I need a rewrite or a redirect? I *think* I need a > redirect but want to be sure I am doing it the recommended way. 
You need to issue an HTTP redirect. You can do that using the nginx directive "rewrite" or the nginx directive "return" (among other ways). > I tried "rewrite /sub-folder-name/(.*) > $scheme://sub-folder-name.exampledomain.com/$1 permanent;" That can work, depending on where exactly it is written. > But this didn't seem to have any effect after restarting nginx. What do the logs show? "Nothing" suggests one kind of fix; "something" suggests another, depending on what the "something" is. f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sun Feb 22 18:27:17 2015 From: nginx-forum at nginx.us (codecowboy) Date: Sun, 22 Feb 2015 13:27:17 -0500 Subject: Redirecting existing links to a sub-folder to a new subdomain In-Reply-To: <20150222154736.GT13461@daoine.org> References: <20150222154736.GT13461@daoine.org> Message-ID: <35cc47ebd1caa419bb3290c21f768ae9.NginxMailingListEnglish@forum.nginx.org> Hi, Thanks for your reply. I got this working. Here's what I ended up with in case it helps someone else: "rewrite /student-area/(.*) $scheme://studentarea.mydomainname.com/$1 permanent;" This was placed outside of any location {} blocks but within the server {} block. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256869,256873#msg-256873 From steve at greengecko.co.nz Mon Feb 23 04:01:45 2015 From: steve at greengecko.co.nz (Steve Holdoway) Date: Mon, 23 Feb 2015 17:01:45 +1300 Subject: How to $POST variables to php-fpm with nginx In-Reply-To: <20150221063126.GB28670@home.parts-unknown.org> References: <20150221063126.GB28670@home.parts-unknown.org> Message-ID: <1424664105.3130.61.camel@steve-new> curl is the simplest... $Request = curl_init(); curl_setopt ( $Request, CURLOPT_URL, "http://api.example.com" ); curl_setopt ( $Request, CURLOPT_ENCODING, 'gzip' ); curl_setopt ( $Request, CURLOPT_RETURNTRANSFER, true ); curl_setopt ( $Request, CURLOPT_POST, true ); $options ["var1"] = "value"; ...
curl_setopt ( $Request, CURLOPT_POSTFIELDS, $options ); $Result = curl_exec ( $Request ); curl_close ( $Request ); Should be an ( untested ! ) starting point. Steve On Fri, 2015-02-20 at 22:31 -0800, David Benfell wrote: > Hi all, > > In one of my sites, sks.disunitedstates.com, I have a single index > page--index.html--which has a number of forms, one of which calls a > php script named server-lookup.php. It means to pass a $POST variable > to this script but instead, I get "No input file specified." > Obviously, the variable isn't getting passed. > > I saw the answer at > http://serverfault.com/questions/231578/nginx-php-fpm-where-are-my-get-params > > It doesn't work with this situation; I still get the same error. > > Thanks! > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at nginx.us Mon Feb 23 07:42:54 2015 From: nginx-forum at nginx.us (quadmaniac) Date: Mon, 23 Feb 2015 02:42:54 -0500 Subject: nginx module dev help: nginx http subrequest of type POST (including POST params) Message-ID: <40bb4c2c7d53c01f9294a772f36769ea.NginxMailingListEnglish@forum.nginx.org> Hi nginx experts! This is my first post in this forum - I'm hoping you folks can throw some light on a problem I'm facing. I'm trying to develop a module very similar to the ngx_http_auth_request_module (nginx.org/en/docs/http/ngx_http_auth_request_module.html). While seeing the source code, I realized that this uses ngx_http_subrequest to create a subrequest to a server, and the implementation of ngx_http_subrequest sets the type of request to a GET: sr->method = NGX_HTTP_GET; My use case is this - I need a module that performs authentication based on the subrequest (identical to the module). HOWEVER, the subrequest should include POST params (request body). 
This is required since the authentication server (that the subrequest points to) will use something like AWS's signing technique (http://docs.aws.amazon.com/general/latest/gr/sigv4_signing.html) to match signatures. This technique hashes the request body and verifies the hash - hence I need the subrequest to include the request body too. Is there any way I can achieve this? I cannot seem to figure out how implement something like ngx_http_subrequest that would include POST params. Help please :) ! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256877,256877#msg-256877 From nginx-forum at nginx.us Mon Feb 23 12:27:25 2015 From: nginx-forum at nginx.us (Gona) Date: Mon, 23 Feb 2015 07:27:25 -0500 Subject: Upstream Keepalive connection close In-Reply-To: <20150115170659.GX79857@mdounin.ru> References: <20150115170659.GX79857@mdounin.ru> Message-ID: <79bbafdba039315e0545abc04c55189c.NginxMailingListEnglish@forum.nginx.org> Hi Maxim, If possible I would like to use a keep-alive connection timeout less than that of the backend servers to avoid premature connection close by backend server. As mentioned before, I am trying to avoid other options like rerouting using proxy_next_upstream or retrying by downstream client. Is there a way to set a timeout on keep-alive connections to backend servers? Or even force close or invalidate the connection cache from code? Thanks, Gona Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255966,256878#msg-256878 From nginx-forum at nginx.us Mon Feb 23 18:51:41 2015 From: nginx-forum at nginx.us (dansch8888) Date: Mon, 23 Feb 2015 13:51:41 -0500 Subject: rewrite rules cms phpwcms not working In-Reply-To: <20150218212100.GO13461@daoine.org> References: <20150218212100.GO13461@daoine.org> Message-ID: <3962859cf31fa35951e7089a2caf73ea.NginxMailingListEnglish@forum.nginx.org> Hi Francis, after some testing I use this rules now. These are working fine with my environment. 
**Nginx Site Config /etc/nginx/sites-available/default :** map $request_uri $bit_of_qs { default ""; ~/(?P.*)\.html $name; } ... server { ... location ^~ /config/phpwcms/ { deny all; } location ^~ /filearchive/ { deny all; } location ^~ /upload/ { deny all; } location ~ /\. { access_log off; log_not_found off; deny all; } location / { try_files $uri @phpwcms; } location @phpwcms { fastcgi_pass unix:/var/run/php5-fpm/default.sock; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root/index.php; fastcgi_param QUERY_STRING $bit_of_qs&$query_string; } location ~* ^.+\.php$ { return 404; } ... } I hope this rules will catch all the following needs. 1. Deny access to folders /config/phpwcms, /filearchive, /upload 2. Deny all hidden files 3. Rewrite /index.php... 4. Ignore and do not show any other php file at root folder or any other sub folder to the internet Is there something that should be improved? One thing that is still happen is the following error message. No idea which "undefined constant Y" means. **Nginx Error Log** [error] 2798#0: *14 FastCGI sent in stderr: "PHP message: PHP Notice: Use of undefined constant Y - assumed 'Y' in /xxx/xxx/xxx/xxx/public_html/include/inc_front/front.func.inc.php(2287) : eval()'d code on line 1" while reading response header from upstream, client: 192.x.x.x, server: hometest.home.local, request: "GET /home_de.html HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm/default.sock:", host: "hometest.home.local", referrer: "https://hometest.home.local/" Thanks Daniel Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256693,256879#msg-256879 From nginx-forum at nginx.us Tue Feb 24 15:13:26 2015 From: nginx-forum at nginx.us (mathi) Date: Tue, 24 Feb 2015 10:13:26 -0500 Subject: Required Help on satisfy any setup Message-ID: <956b09846a3a0d74d97062f6ec0d4621.NginxMailingListEnglish@forum.nginx.org> I am using 2 layer NGINX. 
Fist layer NGINX works as a load balancer and forwards all HTTP traffic another NGINX based on HTTP or HTTPS. When i setup below conditions its not working. location / { satisfy any; allow xx.xx.xx.xx/32; allow yy.yy.yy.yy/32; auth_basic "Restricted" auth_basic_user_file /etc/nginx/password; } When I try accessing my / from IP xx.xx.xx.xx its throwing me the 401 error, This log is not logging on my 2nd layer NGINX, only LB NGINX logging this information. If i change allow to any instead of specific IP things works fine. without 401 error. Some one can suggest me why the alloe not working for the specific IP? I tried both allow method, allow xx.xx.xx.xx and allow xx.xx.xx.xx/32; Any help would be appreciated. thanks, Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256888,256888#msg-256888 From nginx-forum at nginx.us Tue Feb 24 15:50:06 2015 From: nginx-forum at nginx.us (mathi) Date: Tue, 24 Feb 2015 10:50:06 -0500 Subject: Required Help on satisfy any setup In-Reply-To: <956b09846a3a0d74d97062f6ec0d4621.NginxMailingListEnglish@forum.nginx.org> References: <956b09846a3a0d74d97062f6ec0d4621.NginxMailingListEnglish@forum.nginx.org> Message-ID: <51ac13be46d254d0b47b3cdd613c6933.NginxMailingListEnglish@forum.nginx.org> I do use following proxy set options. ocation / { proxy_pass http://backend; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto http; proxy_set_header X-Forwarded-Port 80; proxy_set_header Host $host; } In my upstream the backend the my protected second NGINX server. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256888,256890#msg-256890 From francis at daoine.org Tue Feb 24 20:37:58 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Feb 2015 20:37:58 +0000 Subject: rewrite rules cms phpwcms not working In-Reply-To: <3962859cf31fa35951e7089a2caf73ea.NginxMailingListEnglish@forum.nginx.org> References: <20150218212100.GO13461@daoine.org> <3962859cf31fa35951e7089a2caf73ea.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150224203758.GV13461@daoine.org> On Mon, Feb 23, 2015 at 01:51:41PM -0500, dansch8888 wrote: Hi there, > after some testing I use this rules now. These are working fine with my > environment. That's good to hear. > location ^~ /config/phpwcms/ { deny all; } > location ^~ /filearchive/ { deny all; } > location ^~ /upload/ { deny all; } > location ~ /\. { access_log off; log_not_found off; deny all; } > location / { > try_files $uri @phpwcms; > } > location @phpwcms { > fastcgi_pass unix:/var/run/php5-fpm/default.sock; > include fastcgi_params; > fastcgi_param SCRIPT_FILENAME $document_root/index.php; > fastcgi_param QUERY_STRING $bit_of_qs&$query_string; > } > location ~* ^.+\.php$ { return 404; } > I hope this rules will catch all the following needs. > 1. Deny access to folders /config/phpwcms, /filearchive, /upload Yes. > 2. Deny all hidden files Filenames that start with a dot, yes. > 3. Rewrite /index.php... I'm not sure what exactly you mean by that. A request for /file that does not exist will be handled by the fastcgi server processing index.php. > 4. Ignore and do not show any other php file at root folder or any other sub > folder to the internet *any* php request. So if you ask for /index.php directly, you will get 404. > Is there something that should be improved? If it shows what you want and hides what you want, it is probably right. A minor thing is that the ^.+ in the final regex location is probably unnecessary. 
> One thing that is still happening is the following error message. No idea
> what "undefined constant Y" means.
>
> **Nginx Error Log**
> [error] 2798#0: *14 FastCGI sent in stderr: "PHP message: PHP Notice: Use
> of undefined constant Y - assumed 'Y' in
> /xxx/xxx/xxx/xxx/public_html/include/inc_front/front.func.inc.php(2287) :

That sounds like a php error. Does the same thing happen when the application is run in its "native" environment of apache/mod_php? Or in the apache/fastcgi environment? If not, you could investigate the differences.

I suspect you're more likely to get useful help on the application mailing list.

Good luck with it,

f

-- Francis Daly francis at daoine.org

From francis at daoine.org Tue Feb 24 20:59:24 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 24 Feb 2015 20:59:24 +0000 Subject: Required Help on satisfy any setup In-Reply-To: <956b09846a3a0d74d97062f6ec0d4621.NginxMailingListEnglish@forum.nginx.org> References: <956b09846a3a0d74d97062f6ec0d4621.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150224205924.GW13461@daoine.org>

On Tue, Feb 24, 2015 at 10:13:26AM -0500, mathi wrote:

Hi there,

> I am using 2 layer NGINX. First layer NGINX works as a load balancer and
> forwards all HTTP traffic to another NGINX based on HTTP or HTTPS.
>
> When I set up the conditions below, it's not working.
>
> location / {
> satisfy any;
> allow xx.xx.xx.xx/32;
> allow yy.yy.yy.yy/32;
> auth_basic "Restricted"

Missing semicolon on the previous line.

> auth_basic_user_file /etc/nginx/password;
> }

This can work on the first nginx, because it knows the real client address. If you want it to work on the second nginx, you probably need to use http://nginx.org/en/docs/http/ngx_http_realip_module.html - but I have not tested that myself.
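[Editorial note: a minimal, untested sketch of that realip suggestion, assuming the first-layer balancer sets X-Real-IP as shown earlier in the thread; the balancer address 10.0.0.1 is a placeholder.]

```nginx
# On the second-layer nginx: trust X-Real-IP only when it arrives from
# the load balancer, so allow/deny and satisfy see the original client
# address instead of the balancer's.
server {
    listen 80;

    set_real_ip_from 10.0.0.1;   # placeholder: first-layer LB address
    real_ip_header   X-Real-IP;

    location / {
        satisfy any;
        allow xx.xx.xx.xx/32;
        allow yy.yy.yy.yy/32;
        auth_basic "Restricted";
        auth_basic_user_file /etc/nginx/password;
    }
}
```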
f

-- Francis Daly francis at daoine.org

From mdounin at mdounin.ru Wed Feb 25 14:40:21 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 25 Feb 2015 17:40:21 +0300 Subject: Upstream Keepalive connection close In-Reply-To: <79bbafdba039315e0545abc04c55189c.NginxMailingListEnglish@forum.nginx.org> References: <20150115170659.GX79857@mdounin.ru> <79bbafdba039315e0545abc04c55189c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150225144021.GI19012@mdounin.ru>

Hello!

On Mon, Feb 23, 2015 at 07:27:25AM -0500, Gona wrote:

> Hi Maxim,
>
> If possible I would like to use a keep-alive connection timeout less than
> that of the backend servers to avoid premature connection close by backend
> server. As mentioned before, I am trying to avoid other options like
> rerouting using proxy_next_upstream or retrying by downstream client.
>
> Is there a way to set a timeout on keep-alive connections to backend
> servers? Or even force close or invalidate the connection cache from code?

As of now, there is no such option. If the problem you are trying to solve is practical, it would be mostly trivial to add one.

-- Maxim Dounin http://nginx.org/

From nginx-forum at nginx.us Thu Feb 26 11:14:18 2015 From: nginx-forum at nginx.us (unreal34) Date: Thu, 26 Feb 2015 06:14:18 -0500 Subject: access SSL only with key p12 $ssl_client_verify not works Message-ID: <7aaa9e3507156257362245f96ba1200d.NginxMailingListEnglish@forum.nginx.org>

I'm trying to make SSL access work only with a p12 key: if you don't have the key, access is denied.

Restarting nginx: nginx: [emerg] unknown directive "if($ssl_client_verify" in /etc/nginx/sites-enabled/default:144
nginx: configuration file /etc/nginx/nginx.conf test failed

What am I doing wrong?
server {
listen 80; ## listen for ipv4; this line is default and implied

root /home/xxx/public_html;
index index.php index.html index.htm;

# Make site accessible from http://localhost/
server_name xxx.com www.xxx.com;

set $cache_uri $request_uri;

# Make sure files with the following extensions do not get loaded by nginx because nginx would display the source code, and these files can contain PASSWORDS!
location ~* \.(engine|inc|info|install|make|module|profile|test|po|sh|.*sql|theme|tpl(\.php)?|xtmpl)$|^(\..*|Entries.*|Repository|Root|Tag|Template)$|\.php_ {
return 444;
}
#passwd
location /wp-admin/ {
auth_basic "Admin area password";
auth_basic_user_file /etc/nginx/htpasswd;
}
location /wp-login.php {
auth_basic "Admin area password";
auth_basic_user_file /etc/nginx/htpasswd;
}

#nocgi
location ~* \.(pl|cgi|py|sh|lua)\$ {
return 444;
}

location ~ /(\.|wp-config.php|readme.html|license.txt) { deny all; }

location ~* /(?:|uploads|files)/.*(\.|php|js|html|tpl|sh)$ {
deny all;
location ~ ^/wp-content/cache/minify/[^/]+/(.*)$ {
try_files $uri /wp-content/plugins/w3-total-cache/pub/minify.php?file=$1;
}
location / {
try_files /wp-content/cache/page_enhanced/${host}${cache_uri}_index.html $uri $uri/ /index.php?$args ;
}
# POST requests and urls with a query string should always go to PHP
if ($request_method = POST) {
set $cache_uri 'null cache';
}
if ($query_string != "") {
set $cache_uri 'null cache';
}
# Don't cache uris containing the following segments
if ($request_uri ~* "(/wp-admin/|/xmlrpc.php|/wp-(app|cron|login|register|mail).php|wp-.*.php|/feed/|index.php|wp-comments-popup.php|wp-links-opml.php|wp-locations.php|sitemap(_index)?.xml|[a-z0-9_-]+-sitemap([0-9]+)?.xml)") {
set $cache_uri 'null cache';
}
# Don't use the cache for logged in users or recent commenters
if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_logged_in") {
set $cache_uri 'null cache';
}
rewrite ^(.*)?/?files/(.*) /wp-content/blogs.php?file=$2;
if (!-e $request_filename) {
rewrite ^([_0-9a-zA-Z-]+)?(/wp-.*) $2 break;
rewrite ^([_0-9a-zA-Z-]+)?(/.*\.php)$ $2 last;
rewrite ^ /index.php last;
}
rewrite ^/sitemap_index\.xml$ /index.php?sitemap=1 last;
rewrite ^/([^/]+?)-sitemap([0-9]+)?\.xml$ /index.php?sitemap=$1&sitemap_n=$2 last;

location ~ \.php$ {
fastcgi_split_path_info ^(.+\.php)(/.+)$;
# # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
#
# # With php5-cgi alone:
# fastcgi_pass 127.0.0.1:9000;
# # With php5-fpm:
fastcgi_pass unix:/var/run/php5-fpm.sock;
fastcgi_index index.php;
include fastcgi_params;
include fastcgi_params;
}
}

server {
listen 443 ;
ssl on;
server_name xxx.com www.xxx.com;
root /home/xxx/public_html;
ssl_certificate /etc/nginx/certs/server.crt;
ssl_certificate_key /etc/nginx/certs/server.key;
ssl_client_certificate /etc/nginx/certs/ca.crt;
ssl_ciphers RC4:HIGH:!aNULL:!MD5;
ssl_prefer_server_ciphers on;
ssl_verify_client on;
# ssl_session_cache shared:SSL:10m;
# ssl_session_timeout 5m;
ssl_verify_depth 1;

#location ~* {
if($ssl_client_verify != SUCCESS) ## NOT WORKS
{ return 403;
}
#}
location / {
fastcgi_split_path_info ^(.+\.php)(/.+)$;

fastcgi_pass unix:/var/run/php5-fpm.sock;
#fastcgi_param SCRIPT_FILENAME /home/xxx/public_html/wp-login.php;
fastcgi_param VERIFIED $ssl_client_verify;
fastcgi_param DN $ssl_client_s_dn;
include fastcgi_params;
}

}

sorry for my english.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256931,256931#msg-256931

From dp at nginx.com Thu Feb 26 11:44:22 2015 From: dp at nginx.com (Dmitry Pryadko) Date: Thu, 26 Feb 2015 14:44:22 +0300 Subject: access SSL only with key p12 $ssl_client_verify not works In-Reply-To: <7aaa9e3507156257362245f96ba1200d.NginxMailingListEnglish@forum.nginx.org> References: <7aaa9e3507156257362245f96ba1200d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7265A1BF-3F80-4075-A03E-7EE48478696D@nginx.com>

You should place a whitespace between "if" and the opening bracket:

-if($ssl_client_verify
+if ($ssl_client_verify

-- br, Dmitry Pryadko

> On 26 Feb 2015, at 14:14, unreal34 wrote:
>
> I'm trying to make SSL access work only with a p12 key:
> if you don't have the key, access is denied.
>
> Restarting nginx: nginx: [emerg] unknown directive "if($ssl_client_verify"
> in /etc/nginx/sites-enabled/default:144
> nginx: configuration file /etc/nginx/nginx.conf test failed
>
> What am I doing wrong?
>
> [full server configuration snipped; see the original message above]
>
> sorry for my english.
>
> Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256931,256931#msg-256931
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From nginx-forum at nginx.us Thu Feb 26 12:31:17 2015 From: nginx-forum at nginx.us (unreal34) Date: Thu, 26 Feb 2015 07:31:17 -0500 Subject: access SSL only with key p12 $ssl_client_verify not works In-Reply-To: <7265A1BF-3F80-4075-A03E-7EE48478696D@nginx.com> References: <7265A1BF-3F80-4075-A03E-7EE48478696D@nginx.com> Message-ID:

Thanks, it works.
But it does not return 403; plain https:// still works. I want this: https:// without the key must return 403; p12 + https:// returns 200 OK.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256931,256934#msg-256934

From olivier.rossel at gmail.com Thu Feb 26 17:37:34 2015 From: olivier.rossel at gmail.com (Olivier Rossel) Date: Thu, 26 Feb 2015 18:37:34 +0100 Subject: Gzip a PUT request + automatic deflate by ngx_http_dav_module Message-ID:

Hi all. I am using Nginx with the ngx_http_dav_module (i.e. WebDAV). I use the PUT method of WebDAV to upload files to my Nginx server. Unfortunately, it is quite slow when files are big, and I would like to enable gzip compression for my PUT uploads. Such a feature is available in the mod_deflate module of Apache: http://httpd.apache.org/docs/2.2/mod/mod_deflate.html#input I am wondering if such a feature is also available in the ngx_http_dav_module. Any help (or other option to speed up my uploads) is welcome.

From sarah at nginx.com Fri Feb 27 05:46:05 2015 From: sarah at nginx.com (Sarah Novotny) Date: Thu, 26 Feb 2015 21:46:05 -0800 Subject: The Future of Open Source Survey Message-ID: <429B870F-746B-44CF-BA1B-3965A11631A2@nginx.com>

TL;DR Participate in the Future of Open Source! Complete this survey today. https://www.surveymonkey.com/s/FoOS-Nginx

In long form, NGINX is a sponsor of the Blackduck Future of Open Source Survey. This is an annual survey and your voice would mean a lot. We have a longer summary of our participation along with links to last year's results from the survey here - http://nginx.com/blog/nginx-supports-9th-annual-future-open-source-survey/

Thanks!

Sarah

And, here are some suggested tweets for those of you who are super excited and like to tweet things...

Love open source technology? Show your appreciation by participating in the #FutureOSS Survey: https://www.surveymonkey.com/s/FoOS-Nginx

Open source software is eating the world! Where do you think it's having the biggest impact?
https://www.surveymonkey.com/s/FoOS-Nginx #FutureOSS

Think open source provides superior quality and security? Tell us more in this year's #FutureOSS Survey: https://www.surveymonkey.com/s/FoOS-Nginx

What open source project is most valuable to your business? Answer this question & more in the #FutureOSS Survey: https://www.surveymonkey.com/s/FoOS-Nginx

Participate in the Future of Open Source! Complete this survey today! https://www.surveymonkey.com/s/FoOS-Nginx

From nginx-forum at nginx.us Sat Feb 28 17:51:14 2015 From: nginx-forum at nginx.us (173279834462) Date: Sat, 28 Feb 2015 12:51:14 -0500 Subject: SNI: ssl_error_bad_cert_domain on https:// Message-ID: <18c1c984614c7838aa92c1d5313b3410.NginxMailingListEnglish@forum.nginx.org>

premises
-------------
nginx version: nginx/1.7.10
TLS SNI support enabled
Serving vhosts; each vhost has its own registered certificate; each vhost works as expected

task
-----
Obtain 444 from [http|https]://.

case http://
--------------------------------------
configuration:

server {
listen 80;
server_name _;
root /dev/null;
return 444;
}

It returns 444, and we are happy about it.

case https://
---------------------------------------
No additional configuration. It returns the following:

< uses an invalid security certificate.
< The certificate is only valid for the following names:
<
< www.example.com example.com
<
< (Error code: ssl_error_bad_cert_domain)

where "example.com" is a random host from our pool of vhosts, and its registered certificate is served for the IP-ADDRESS by nginx's SNI. Indeed, this is the problem at hand.

The following does not help at all:

server {
#listen 80;
listen 443 ssl;
ssl_certificate_key /etc/ssl//www.key;
ssl_certificate /etc/ssl//www.pem;
server_name _;
root /dev/null;
return 444;
}

For the sake of proper administration, www.key/pem is a self-signed certificate with the e-mail "hostmaster@" included, and that e-mail address has been created on purpose.

Can you replicate this problem?
Are there any known solutions? Thank you for your time. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256957,256957#msg-256957
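[Editorial note: one thing the catch-all HTTPS server above appears to be missing is the default_server flag. server_name _ is not special for matching; without default_server on the listen line, nginx hands unknown-name and by-IP connections to the first server block defined for that port, which would explain the random vhost's certificate being served. An untested sketch, with placeholder certificate paths:]

```nginx
# Catch-all for HTTPS connections that match no configured vhost.
# A certificate must still be sent during the TLS handshake, so the
# self-signed-certificate warning cannot be avoided on this nginx
# version; the connection is then closed via 444.
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate     /etc/ssl/catchall/www.pem;  # placeholder, self-signed
    ssl_certificate_key /etc/ssl/catchall/www.key;  # placeholder
    return 444;
}
```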