From aqqa11 at earthlink.net Fri May 1 06:01:12 2015 From: aqqa11 at earthlink.net (John) Date: Fri, 1 May 2015 02:01:12 -0400 (GMT-04:00) Subject: proxy_redirect not working with "refresh" Message-ID: <31341853.1430460073218.JavaMail.root@elwamui-polski.atl.sa.earthlink.net> -----Original Message----- >From: Francis Daly >Sent: Apr 30, 2015 4:11 AM >To: nginx at nginx.org >Subject: Re: proxy_redirect not working with "refresh" > > >Hi there, > >That's not a "Refresh" header field. > >That is something in the http response body. > >In general, nginx doesn't mess with the response body. > >(You can configure it to, but I tend to dislike doing that.) > >> Did I miss anything? Actually I don't understand that line about "proxy_set_header Host $host", I just copied from web. > >Why does your back-end include the string "http://192.168.1.9/" in its >response body? > >Can you make it instead include a string based on the Host: header it >receives? If so, that is what the "proxy_set_header Host $host" is for. > > f >-- Thank you so much. That saved me hours of searching. Indeed, the documentation said it's about the "proxied server response". I should have expected that every word in the nginx documentation counts :-) The backend is an outdated legacy application; I don't know why they did that, and I tried but couldn't figure out how to fix it. So I just used sub_filter to rewrite the response body and it worked. I still had to set "proxy_set_header Host $host" on the proxy to make that work as well. Sincerely, John From nginx-forum at nginx.us Fri May 1 12:09:14 2015 From: nginx-forum at nginx.us (hheiko) Date: Fri, 01 May 2015 08:09:14 -0400 Subject: open socket left in connection In-Reply-To: <39d111e9a1323c3a9424b71d51eb610e.NginxMailingListEnglish@forum.nginx.org> References: <39d111e9a1323c3a9424b71d51eb610e.NginxMailingListEnglish@forum.nginx.org> Message-ID: :-( This problem is related to SPDY and occurs on our system without fastcgi or proxy caching.
It's a pity that we cannot use SPDY on our production system... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258312,258558#msg-258558 From nginx-forum at nginx.us Fri May 1 15:59:28 2015 From: nginx-forum at nginx.us (talkingnews) Date: Fri, 01 May 2015 11:59:28 -0400 Subject: Can't find nginx-extras 1.9 mainline in any repo. Message-ID: <52657f50caeab51d008af17fc5ca6471.NginxMailingListEnglish@forum.nginx.org> Just installed Ubuntu 15.04 and rather surprised to see it's still giving nginx-extras (1.6.2-5ubuntu3) So I added the recommended official repo as per http://nginx.org/en/linux_packages.html#mainline deb http://nginx.org/packages/mainline/ubuntu/ vivid nginx except, vivid gave a "not found" so I tried deb http://nginx.org/packages/mainline/ubuntu/ utopic nginx which happily offered 1.9, but nginx-extras was still stuck at 1.6. I guess extras is not an official repo thing? So, I tried the alternative sudo add-apt-repository ppa:nginx/development but that is only offering 1.7.12-1+vivid0 Is it just a case of "sit and wait for them to catch up", or is there a more up-to-date repo of the extras build? Thanks! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258559,258559#msg-258559 From larry.martell at gmail.com Fri May 1 16:04:51 2015 From: larry.martell at gmail.com (Larry Martell) Date: Fri, 1 May 2015 12:04:51 -0400 Subject: error logging with nginx and uWSGI Message-ID: Prior to now, all the django projects I've worked on have used apache and WSGI. With those, when an error occurred I went to /var/log/httpd/error_log and details of the error were clearly there. Now for the first time I am working on a project using nginx and uWSGI. Here, the /var/log/nginx/error_log is always empty. And the uWSGI logs have some messages, but nothing about any errors. In this setup where would I go to find the errors that /var/log/httpd/error_log logs? Is there some config setting that is suppressing the nginx errors? 
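[In this setup, application errors normally land in uWSGI's own log rather than in nginx's error log: nginx only records its own problems (connection failures, upstream timeouts), while Django tracebacks go to wherever uWSGI is told to write. A minimal sketch, with placeholder paths that are not taken from the thread:

```nginx
# nginx side (sketch): make sure an error log is configured at a level
# that will show upstream problems; "error" is the default level,
# "warn" or "info" shows more detail.
error_log /var/log/nginx/error.log warn;
```

On the uWSGI side, a `logto = /var/log/uwsgi/app.log` line in the ini file (or `--logto` on the command line) directs application output, including Python tracebacks, to that file; if uWSGI was started without it, the tracebacks may only be going to the terminal or to the process supervisor's log.]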
From andrew.stuart at supercoders.com.au Fri May 1 20:40:07 2015 From: andrew.stuart at supercoders.com.au (Andrew Stuart) Date: Sat, 2 May 2015 06:40:07 +1000 Subject: Is there any way to load SSL certificates from a URL? Message-ID: <60FEA676-3519-4A30-831B-E0716DCE46E8@supercoders.com.au> When nginx starts I want to load the SSL certificates via URL rather than file system. This is because the machine that I am running from does not have a usable file system, and also because I need to fetch the certificates at the time that nginx is started. Is there any way to do this? thanks as From nginx-forum at nginx.us Fri May 1 22:11:28 2015 From: nginx-forum at nginx.us (itpp2012) Date: Fri, 01 May 2015 18:11:28 -0400 Subject: Is there any way to load SSL certificates from a URL? In-Reply-To: <60FEA676-3519-4A30-831B-E0716DCE46E8@supercoders.com.au> References: <60FEA676-3519-4A30-831B-E0716DCE46E8@supercoders.com.au> Message-ID: <600e05b705379e7d52794ef31dfb15ac.NginxMailingListEnglish@forum.nginx.org> What about using curl to fetch it and store it on a ramdisk/swapdrive or where you keep the .conf files? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258564,258566#msg-258566 From andrew.stuart at supercoders.com.au Fri May 1 22:43:25 2015 From: andrew.stuart at supercoders.com.au (Andrew Stuart) Date: Sat, 2 May 2015 08:43:25 +1000 Subject: Is there any way to load SSL certificates from a URL? In-Reply-To: <600e05b705379e7d52794ef31dfb15ac.NginxMailingListEnglish@forum.nginx.org> References: <60FEA676-3519-4A30-831B-E0716DCE46E8@supercoders.com.au> <600e05b705379e7d52794ef31dfb15ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7651FC0F-3923-49F4-BDB9-975588559C69@supercoders.com.au> It's an embedded system so I don't have the chance to run anything at boot time/startup - it just goes straight into nginx. On 2 May 2015, at 8:11 am, itpp2012 wrote: What about using curl to fetch it and store it on a ramdisk/swapdrive or where you keep the .conf files? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258564,258566#msg-258566 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From tfransosi at gmail.com Fri May 1 22:44:20 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Fri, 1 May 2015 19:44:20 -0300 Subject: Help with nginx downloads folder? In-Reply-To: <9FD84190-098E-40B2-B84E-9F55748EEE7D@me.com> References: <9FD84190-098E-40B2-B84E-9F55748EEE7D@me.com> Message-ID: On Fri, May 1, 2015 at 3:47 PM, HENRY CLINE wrote: > Yes all ports are open and listening? > That is not necessarily the same thing. Do you have it in your nginx.conf? What does your root look like? If you can provide more information, that could help us diagnose the problem and perhaps shed some light on it. -- Thiago Farina From andrew.stuart at supercoders.com.au Fri May 1 22:50:40 2015 From: andrew.stuart at supercoders.com.au (Andrew Stuart) Date: Sat, 2 May 2015 08:50:40 +1000 Subject: Is there any way to load SSL certificates from a URL? In-Reply-To: <600e05b705379e7d52794ef31dfb15ac.NginxMailingListEnglish@forum.nginx.org> References: <60FEA676-3519-4A30-831B-E0716DCE46E8@supercoders.com.au> <600e05b705379e7d52794ef31dfb15ac.NginxMailingListEnglish@forum.nginx.org> Message-ID: Might it be possible to load SSL certificates from a URL using Lua? On 2 May 2015, at 8:11 am, itpp2012 wrote: What about using curl to fetch it and store it on a ramdisk/swapdrive or where you keep the .conf files? 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258564,258566#msg-258566 From nginx-forum at nginx.us Sat May 2 00:50:51 2015 From: nginx-forum at nginx.us (gariac) Date: Fri, 01 May 2015 20:50:51 -0400 Subject: Installed nginx with iredmail; how to add web content & test without DNS change In-Reply-To: <20150430175016.GI29618@daoine.org> References: <20150430175016.GI29618@daoine.org> Message-ID: <937d5d3c48e9928d1500488340865c39.NginxMailingListEnglish@forum.nginx.org> I think I failed to explain my problem correctly. It seems to me whatever I do for the test would be on the server side, not client side. Studying your reply, I think your solution maps domain.com to an IP address (dotted quad) from the client side. What I need to do is have the one IP address I have for the server host both iredmail and my website. Iredmail creates an index.html in /var/www with some default contents. I created a /var/www2 with a different index.html, but I don't see how nginx would know which directory to use. I was able to get nginx working with my website prior to having iredmail on the same server. Having both services on the same server is what confuses me. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258551,258570#msg-258570 From nginx-forum at nginx.us Sat May 2 03:06:01 2015 From: nginx-forum at nginx.us (bughunter) Date: Fri, 01 May 2015 23:06:01 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <20150408152751.GI88631@mdounin.ru> References: <20150408152751.GI88631@mdounin.ru> Message-ID: Finally had some time to construct an extremely basic server configuration with a default HTTP and HTTPS server and test it. I'm working on a production server, so there are quite a few requests every second and therefore the downtime had to be scheduled into a tiny window of opportunity. 
I also temporarily compiled and enabled a debug build for a few minutes (the log file went nuts). I had ssl_stapling on and no verification. There was still no OCSP stapling response data or anything related to OCSP in the debug logs. Based on numroo's earlier response and since I was also able to fiddle around with the config in production, I decided to temporarily disable the default SSL server with the self-signed cert. After reloading the config, bam! Instantly OCSP stapling started working as expected (even with verification turned on). Re-enabling the default SSL server with the self-signed cert caused OCSP to stop working again. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,258571#msg-258571 From alec.taylor6 at gmail.com Sat May 2 03:51:20 2015 From: alec.taylor6 at gmail.com (Alec Taylor) Date: Sat, 2 May 2015 13:51:20 +1000 Subject: Windows build failing - NMAKE : fatal error U1073 Message-ID: Following your guide: http://nginx.org/en/docs/howto_build_on_win32.html With a few edits, I then ran: ./configure --with-cc=cl --builddir=objs --prefix= --conf-path=conf/nginx.conf --pid-path=logs/nginx.pid --http-log-path=logs/access.log --error-log-path=logs/error.log --sbin-path=nginx.exe --http-client-body-temp-path=temp/client_body_temp --http-proxy-temp-path=temp/proxy_temp --http-fastcgi-temp-path=temp/fastcgi_temp --with-cc-opt=-DFD_SETSIZE=1024 --with-pcre=objs/lib/pcre-8.36 --with-zlib=objs/lib/zlib-1.2.8 --with-openssl=objs/lib/openssl-1.0.1e --with-select_module --with-http_ssl_module --with-ipv6 --add-module=../ngx-fancyindex Which succeeded. Finally I ran: `nmake -f objs/Makefile`, which gave the following output: Microsoft (R) Program Maintenance Utility Version 12.00.21005.1 Copyright (C) Microsoft Corporation. All rights reserved. NMAKE : fatal error U1073: don't know how to make 'src/os/win32/ngx_win32_config.h' Stop. How do I build nginx on Windows? 
Thanks for all suggestions From umarzuki at gmail.com Sat May 2 05:49:16 2015 From: umarzuki at gmail.com (Umarzuki Mochlis) Date: Sat, 2 May 2015 13:49:16 +0800 Subject: Installed nginx with iredmail; how to add web content & test without DNS change In-Reply-To: <937d5d3c48e9928d1500488340865c39.NginxMailingListEnglish@forum.nginx.org> References: <20150430175016.GI29618@daoine.org> <937d5d3c48e9928d1500488340865c39.NginxMailingListEnglish@forum.nginx.org> Message-ID: You need to find out how to create a virtual host for nginx, which involves setting up a sites-available/domain file, symlinking it to sites-enabled/, and restarting nginx. You would also need to edit the hosts file on your PC to simulate DNS resolution for testing purposes. On May 2, 2015 8:51 AM, "gariac" wrote: > I think I failed to explain my problem correctly. It seems to me whatever I > do for the test would be on the server side, not client side. Studying your > reply, I think your solution maps domain.com to ipaddress (dotted quad) > from > the client side. What I need to do is have the one ip address I have for > the > server host iredmail and my website. > > Iredmail creates an index.html in /var/www with contents > > I created a /var/www2 with a different index.html, but I don't see how > nginx > would know which directory to use. > > I was able to get nginx working with my website prior to having iredmail on > the same server. Having both services on the same server is what confuses > me. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258551,258570#msg-258570 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From nginx-forum at nginx.us Sat May 2 07:27:01 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sat, 02 May 2015 03:27:01 -0400 Subject: Is there any way to load SSL certificates from a URL? 
In-Reply-To: References: Message-ID: <8d4a0c678759bdbf6c6cd0b77f1073e9.NginxMailingListEnglish@forum.nginx.org> You can try Lua (init_by_lua) something like https://github.com/pintsized/lua-resty-http with a bit of tweaking, no usb port you could use with a small usb stick? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258564,258574#msg-258574 From francis at daoine.org Sat May 2 07:54:02 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 2 May 2015 08:54:02 +0100 Subject: Installed nginx with iredmail; how to add web content & test without DNS change In-Reply-To: <937d5d3c48e9928d1500488340865c39.NginxMailingListEnglish@forum.nginx.org> References: <20150430175016.GI29618@daoine.org> <937d5d3c48e9928d1500488340865c39.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150502075402.GJ29618@daoine.org> On Fri, May 01, 2015 at 08:50:51PM -0400, gariac wrote: Hi there, > I think I failed to explain my problem correctly. It seems to me whatever I > do for the test would be on the server side, not client side. Studying your > reply, I think your solution maps domain.com to ipaddress (dotted quad) from > the client side. What I need to do is have the one ip address I have for the > server host iredmail and my website. I had misunderstood your issue. See http://nginx.org/r/server and links. Basically: some web applications can sensibly be installed in a "subfolder" of the web space, so you can access them at http://www.example.com/app1/. Other web applications insist on owning the full web space, so you must use http://app1.example.com/. If iredmail is in the latter group, then in your one nginx conf file, have one server{} block for iredmail, and a separate server{} block (with a different server_name) for your website. > I created a /var/www2 with a different index.html, but I don't see how nginx > would know which directory to use. If you have separate server{} blocks, nginx picks the one to use based on its rules. 
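[The two-server{} setup described above can be sketched as follows; the server names and the second root are guesses based on the paths mentioned in the thread, not a tested configuration:

```nginx
# One server{} block per hostname; nginx picks the block whose
# server_name matches the request's Host header, so each application
# can own "/" of its own name.
server {
    listen 80;
    server_name mail.example.com;   # hypothetical name for iredmail
    root /var/www;
}

server {
    listen 80;
    server_name www.example.com;    # hypothetical name for the website
    root /var/www2;
}
```

For testing before the DNS change, pointing both names at the server's IP in the client's hosts file is enough to exercise the name-based selection.]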
> I was able to get nginx working with my website prior to having iredmail on > the same server. Having both services on the same server is what confuses > me. They can be in the same nginx instance without a problem. They can only be in the same nginx server{} block if both play nice. In that case, you must decide how you want nginx to handle the request for "/", and configure it to do that. f -- Francis Daly francis at daoine.org From shahzaib.cb at gmail.com Sat May 2 10:32:51 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sat, 2 May 2015 15:32:51 +0500 Subject: Nginx gets halt on 15K connections !! Message-ID: Hi, We've been running an nginx-1.8 instance on one of our media servers to serve big static .mp4 files as well as small files such as .jpeg. Nginx serves well under 13K connections/sec with an 800Mbps outgoing network load, but whenever requests exceed 15K connections, nginx halts and the nginx workers all go into 'D' status, and network load drops to 400Mbps, due to which video streaming gets stuck; after 5-10 minutes the load starts dropping, nginx starts stabilizing again, and network load gets back to 800Mbps. We've been encountering this fluctuating situation roughly every 15 minutes. We know that 'D' status is most likely due to high disk I/O, and to check whether disk I/O could be the problem under 15K connections, we enabled Apache on port 8080 and tested the same video stream during high load; the stream fluctuated a bit, but it never got stuck for 5-10 minutes. Meanwhile the same video was far worse on nginx and got stuck for 5 minutes while buffering. We suspect this is related to something other than disk I/O, since the same video under high load buffers better on Apache (on port 8080). Also, if it were high disk I/O, the video should not stay stuck for 5-10 minutes. 
It looks to us like nginx halts when concurrent connections exceed 15K. We also tried tuning the backlog directive, which slightly improved the performance, but there must be something more related to nginx optimization which we must be missing. I have included the nginx.conf, sysctl, and vhost files below to give a better understanding of our tweaks. user nginx; worker_processes 48; worker_rlimit_nofile 600000; #2 filehandlers for each connection #error_log logs/error.log; #error_log logs/error.log notice; error_log /var/log/nginx/error.log error; #error_log /dev/null; #pid logs/nginx.pid; events { worker_connections 2048; use epoll; # use kqueue; } http { include mime.types; default_type application/octet-stream; # client_max_body_size 800M; client_body_buffer_size 128K; output_buffers 1 512k; sendfile_max_chunk 128k; client_header_buffer_size 256k; large_client_header_buffers 4 256k; # fastcgi_buffers 512 8k; # proxy_buffers 512 8k; # fastcgi_read_timeout 300s; server_tokens off; #Conceals nginx version access_log off; # access_log /var/log/nginx/access.log; sendfile off; # sendfile ; tcp_nodelay on; aio on; directio 512; # tcp_nopush on; client_header_timeout 120s; client_body_timeout 120s; send_timeout 120s; keepalive_timeout 15; gzip on; gzip_vary on; gzip_disable "MSIE [1-6]\."; gzip_proxied any; gzip_http_version 1.0; gzip_min_length 1280; gzip_comp_level 6; gzip_buffers 16 8k; gzip_types text/plain text/xml text/css application/x-javascript image/png image/x-icon image/gif image/jpeg image/jpg application/xml application/xml+rss text/javascript application/atom+xml; include /usr/local/nginx/conf/vhosts/*.conf; # open_file_cache max=2000 inactive=20s; # open_file_cache_valid 60s; # open_file_cache_min_uses 5; # open_file_cache_errors off; } sysctl.conf main config : fs.file-max = 700000 net.core.wmem_max=6291456 net.core.rmem_max=6291456 net.ipv4.tcp_rmem= 10240 87380 6291456 net.ipv4.tcp_wmem= 10240 87380 6291456 net.ipv4.tcp_window_scaling = 1 
net.ipv4.tcp_timestamps = 1 net.ipv4.tcp_sack = 1 net.ipv4.tcp_no_metrics_save = 1 net.core.netdev_max_backlog = 10000 net.ipv6.conf.all.disable_ipv6 = 1 net.ipv6.conf.default.disable_ipv6 = 1 net.ipv6.conf.lo.disable_ipv6 = 1 net.ipv6.conf.eth0.disable_ipv6 = 1 net.ipv6.conf.eth1.disable_ipv6 = 1 net.ipv6.conf.ppp0.disable_ipv6 = 1 net.ipv6.conf.tun0.disable_ipv6 = 1 vm.dirty_background_ratio = 50 vm.dirty_ratio = 80 net.ipv4.tcp_fin_timeout = 30 net.ipv4.ip_local_port_range=1024 65000 net.ipv4.tcp_tw_reuse = 1 net.netfilter.nf_conntrack_tcp_timeout_established = 54000 net.ipv4.netfilter.ip_conntrack_generic_timeout = 120 net.ipv4.tcp_syn_retries=2 net.ipv4.tcp_synack_retries=2 net.ipv4.netfilter.ip_conntrack_max = 90536 net.core.somaxconn = 10000 Vhost : server { listen 80 backlog=10000; server_name archive3.domain.com archive3.domain.com www.archive3.domain.com www.archive3.domain.com; access_log off; location / { root /content/archive; index index.html index.htm index.php; autoindex off; } location /files/thumbs/ { root /data/nginx/archive; add_header X-Cache SSD; expires max; } location ~ \.(flv)$ { flv; root /content/archive; # aio on; # directio 512; # output_buffers 1 2m; expires 7d; valid_referers none blocked domain.com *.domain.com *. facebook.com *.domain.com *.twitter.com *.domain.com *.gear3rd.net domain.com *.domain.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } location ~ \.(mp4)$ { mp4; mp4_buffer_size 4M; mp4_max_buffer_size 10M; expires 7d; root /content/archive; valid_referers none blocked domain.com *.domain.com *. 
facebook.com *.domain.com *.twitter.com *.domain.com *.gear3rd.net domain.com *.domain.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; if ($invalid_referer) { return 403; } } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 location ~ \.php$ { root /content/archive; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_read_timeout 10000; } location ~ /\.ht { deny all; } location ~ ^/(status|ping)$ { access_log off; allow 127.0.0.1; deny all; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; } } Server Specs : L5630 (8cores, 16threads) RAM 64GB 12 x 3TB @ SATA Hardware Raid-6 Here's the screenshot of server load during 15K connections: http://prntscr.com/70l68q Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Sat May 2 14:13:05 2015 From: francis at daoine.org (Francis Daly) Date: Sat, 2 May 2015 15:13:05 +0100 Subject: Is there any way to load SSL certificates from a URL? In-Reply-To: <7651FC0F-3923-49F4-BDB9-975588559C69@supercoders.com.au> References: <60FEA676-3519-4A30-831B-E0716DCE46E8@supercoders.com.au> <600e05b705379e7d52794ef31dfb15ac.NginxMailingListEnglish@forum.nginx.org> <7651FC0F-3923-49F4-BDB9-975588559C69@supercoders.com.au> Message-ID: <20150502141305.GL29618@daoine.org> On Sat, May 02, 2015 at 08:43:25AM +1000, Andrew Stuart wrote: Hi there, > Its an embedded system so I don?t have the chance to run anything at boot time/startup - it just goies straight into nginx. Stock nginx does not load SSL certificates from anything other than the filesystem. How you can get the effect that you want, depends on what changes you can make on your embedded system. 
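[A sketch of the "fetch, then run the real nginx" idea, for what it's worth. Everything here is an assumption — the URLs, the certificate directory, and the nginx path are placeholders, and it assumes curl exists on the device:

```shell
# Hypothetical wrapper installed in place of the nginx binary:
# download the certificate and key, then exec the real nginx.
fetch_certs_and_exec() {
    cert_url=$1; key_url=$2; cert_dir=$3
    mkdir -p "$cert_dir" || return 1
    # -f makes curl fail on HTTP errors, so nginx is never started
    # with empty or partial certificate files.
    curl -fsS "$cert_url" -o "$cert_dir/server.crt" || return 1
    curl -fsS "$key_url"  -o "$cert_dir/server.key" || return 1
    # Hand control over to the real nginx binary (path is an assumption).
    exec "${NGINX_BIN:-/usr/sbin/nginx}" -g 'daemon off;'
}

# Example call with made-up URLs:
# fetch_certs_and_exec https://cfg.internal/server.crt \
#                      https://cfg.internal/server.key /etc/nginx/certs
```

The ssl_certificate/ssl_certificate_key directives in nginx.conf would then point at the files under the chosen cert_dir (ideally a ramdisk, as suggested earlier in the thread).]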
If the nginx-side solution were "include this new module", but you can't replace the nginx binary, then that solution would not be useful for you. If you *can* replace the nginx binary, you could conceivably replace it with a thing which fetches the certificates from the right place and puts them in the right place, and then runs the "real" nginx binary, without needing any changes from stock nginx. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Sat May 2 14:52:31 2015 From: nginx-forum at nginx.us (GuiPoM) Date: Sat, 02 May 2015 10:52:31 -0400 Subject: Connection timeout from work, working anywhere else In-Reply-To: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org> References: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org> Message-ID: I have no clue why it is not working. So I took the computer from my office location and I did the test from another location, with a standard internet access: I have no issue at all. It means that only the network between the computer and the nginx server makes this error appear; it must have nothing to do with the computer. I am still hoping to find some help here. I am pretty sure that the office network is causing this issue, but I have no idea whether this is a configuration issue on the nginx server and whether I can do something about it. Please help ! Thx. GuiPoM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258578#msg-258578 From mdounin at mdounin.ru Sat May 2 16:48:07 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 2 May 2015 19:48:07 +0300 Subject: Windows build failing - NMAKE : fatal error U1073 In-Reply-To: References: Message-ID: <20150502164806.GI32429@mdounin.ru> Hello! 
On Sat, May 02, 2015 at 01:51:20PM +1000, Alec Taylor wrote: > Following your guide: http://nginx.org/en/docs/howto_build_on_win32.html > > With a few edits, I then ran: > ./configure --with-cc=cl --builddir=objs --prefix= > --conf-path=conf/nginx.conf --pid-path=logs/nginx.pid > --http-log-path=logs/access.log --error-log-path=logs/error.log > --sbin-path=nginx.exe > --http-client-body-temp-path=temp/client_body_temp > --http-proxy-temp-path=temp/proxy_temp > --http-fastcgi-temp-path=temp/fastcgi_temp > --with-cc-opt=-DFD_SETSIZE=1024 --with-pcre=objs/lib/pcre-8.36 > --with-zlib=objs/lib/zlib-1.2.8 --with-openssl=objs/lib/openssl-1.0.1e > --with-select_module --with-http_ssl_module --with-ipv6 > --add-module=../ngx-fancyindex > > Which succeeded. Finally I ran: `nmake -f objs/Makefile`, which gave > the following output: > > Microsoft (R) Program Maintenance Utility Version 12.00.21005.1 > Copyright (C) Microsoft Corporation. All rights reserved. > > NMAKE : fatal error U1073: don't know how to make 'src/os/win32/ngx_win32_config > .h' > Stop. > > How do I build nginx on Windows? It looks like you are trying to build from sources available in release tarballs. This won't work. Please follow the guide linked and clone the repository (or, alternatively, download a repository snapshot there). -- Maxim Dounin http://nginx.org/ From tfransosi at gmail.com Sat May 2 18:52:50 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Sat, 2 May 2015 15:52:50 -0300 Subject: Windows build failing - NMAKE : fatal error U1073 In-Reply-To: <20150502164806.GI32429@mdounin.ru> References: <20150502164806.GI32429@mdounin.ru> Message-ID: On Sat, May 2, 2015 at 1:48 PM, Maxim Dounin wrote: > Hello! 
> > On Sat, May 02, 2015 at 01:51:20PM +1000, Alec Taylor wrote: > >> Following your guide: http://nginx.org/en/docs/howto_build_on_win32.html >> >> With a few edits, I then ran: >> ./configure --with-cc=cl --builddir=objs --prefix= >> --conf-path=conf/nginx.conf --pid-path=logs/nginx.pid >> --http-log-path=logs/access.log --error-log-path=logs/error.log >> --sbin-path=nginx.exe >> --http-client-body-temp-path=temp/client_body_temp >> --http-proxy-temp-path=temp/proxy_temp >> --http-fastcgi-temp-path=temp/fastcgi_temp >> --with-cc-opt=-DFD_SETSIZE=1024 --with-pcre=objs/lib/pcre-8.36 >> --with-zlib=objs/lib/zlib-1.2.8 --with-openssl=objs/lib/openssl-1.0.1e >> --with-select_module --with-http_ssl_module --with-ipv6 >> --add-module=../ngx-fancyindex >> >> Which succeeded. Finally I ran: `nmake -f objs/Makefile`, which gave >> the following output: >> >> Microsoft (R) Program Maintenance Utility Version 12.00.21005.1 >> Copyright (C) Microsoft Corporation. All rights reserved. >> >> NMAKE : fatal error U1073: don't know how to make 'src/os/win32/ngx_win32_config.h' >> Stop. >> >> How do I build nginx on Windows? > > It looks like you are trying to build from sources available in > release tarballs. This won't work. This does not seem logical. Why should it not work? It is common to build and install from release tarballs (at least on Unix). Why should it be different on Windows? -- Thiago Farina From shahzaib.cb at gmail.com Sat May 2 22:11:55 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Sun, 3 May 2015 03:11:55 +0500 Subject: Nginx gets halt on 15K connections !! In-Reply-To: References: Message-ID: Experts, Could you please do me a favor in order to solve this problem ? Regards. Shahzaib On Sat, May 2, 2015 at 3:32 PM, shahzaib shahzaib wrote: > Hi, > > We've been running nginx-1.8 instance on one of our media server to > serve big static .mp4 files as well as small files such as .jpeg. 
Nginx is > serving well under 13K connections/sec with 800Mbps outgoing network load > but whenever requests exceed 15K connections, nginx gets halt and 'D' > status goes all over around the nginx workers, as well as network load > drops down to 400Mbps due to which video streaming gets stuck and after > 5-10 minutes load starts dropping and nginx starts stabilizing again as > well as network load gets back to 800Mbps. We've been encountering this > fluctuating situation on each 15minutes gap (Probably). > > We know that 'D' status is most likely due to high Disk I/O and to ensure > that the disk i/o could be the problem under 15K connections, we had > enabled apache on port 8080 for testing same video stream during high load > and buffered on apache, well the stream was fluctuating a bit but there was > no stuck for around 5-10 minutes. In the meantime the same video was worst > on nginx and stucked for 5minutes during buffer. > > We suspecting this to be related to something else than Disk I/O, reason > is the same video under high load buffers better on apache(on port 8080). > Also if it is related to high disk I/O, there must be no possibility that > video should should stuck for 5-10 minutes. > > It looks to us that nginx gets halt when concurrent connections exceed > 15K. We also tried optimizing backlog directive which slightly improved the > performance but there must be something more related to nginx optimization > which we must be missing. I have linked nginx.conf file, sysctl and vhost > file to get better understanding of our tweaks. 
> > user nginx; > worker_processes 48; > worker_rlimit_nofile 600000; #2 filehandlers for each connection > #error_log logs/error.log; > #error_log logs/error.log notice; > error_log /var/log/nginx/error.log error; > #error_log /dev/null; > #pid logs/nginx.pid; > > > events { > worker_connections 2048; > use epoll; > # use kqueue; > } > http { > include mime.types; > default_type application/octet-stream; > # client_max_body_size 800M; > client_body_buffer_size 128K; > output_buffers 1 512k; > sendfile_max_chunk 128k; > client_header_buffer_size 256k; > large_client_header_buffers 4 256k; > # fastcgi_buffers 512 8k; > # proxy_buffers 512 8k; > # fastcgi_read_timeout 300s; > server_tokens off; #Conceals nginx version > access_log off; > # access_log /var/log/nginx/access.log; > sendfile off; > # sendfile ; > tcp_nodelay on; > aio on; > directio 512; > # tcp_nopush on; > client_header_timeout 120s; > client_body_timeout 120s; > send_timeout 120s; > keepalive_timeout 15; > gzip on; > gzip_vary on; > gzip_disable "MSIE [1-6]\."; > gzip_proxied any; > gzip_http_version 1.0; > gzip_min_length 1280; > gzip_comp_level 6; > gzip_buffers 16 8k; > gzip_types text/plain text/xml text/css application/x-javascript > image/png image/x-icon image/gif image/jpeg image/jpg application/xml > application/xml+rss text/javascr ipt application/atom+xml; > include /usr/local/nginx/conf/vhosts/*.conf; > # open_file_cache max=2000 inactive=20s; > # open_file_cache_valid 60s; > # open_file_cache_min_uses 5; > # open_file_cache_errors off; > > } > > sysctl.conf main config : > > fs.file-max = 700000 > net.core.wmem_max=6291456 > net.core.rmem_max=6291456 > net.ipv4.tcp_rmem= 10240 87380 6291456 > net.ipv4.tcp_wmem= 10240 87380 6291456 > net.ipv4.tcp_window_scaling = 1 > net.ipv4.tcp_timestamps = 1 > net.ipv4.tcp_sack = 1 > net.ipv4.tcp_no_metrics_save = 1 > net.core.netdev_max_backlog = 10000 > > net.ipv6.conf.all.disable_ipv6 = 1 > net.ipv6.conf.default.disable_ipv6 = 1 > 
net.ipv6.conf.lo.disable_ipv6 = 1 > net.ipv6.conf.eth0.disable_ipv6 = 1 > net.ipv6.conf.eth1.disable_ipv6 = 1 > net.ipv6.conf.ppp0.disable_ipv6 = 1 > net.ipv6.conf.tun0.disable_ipv6 = 1 > vm.dirty_background_ratio = 50 > vm.dirty_ratio = 80 > net.ipv4.tcp_fin_timeout = 30 > net.ipv4.ip_local_port_range=1024 65000 > net.ipv4.tcp_tw_reuse = 1 > net.netfilter.nf_conntrack_tcp_timeout_established = 54000 > net.ipv4.netfilter.ip_conntrack_generic_timeout = 120 > net.ipv4.tcp_syn_retries=2 > net.ipv4.tcp_synack_retries=2 > net.ipv4.netfilter.ip_conntrack_max = 90536 > net.core.somaxconn = 10000 > > Vhost : > > server { > listen 80 backlog=10000; > server_name archive3.domain.com archive3.domain.com > www.archive3.domain.com www.archive3.domain.com; > access_log off; > location / { > root /content/archive; > index index.html index.htm index.php; > autoindex off; > } > > location /files/thumbs/ { > root /data/nginx/archive; > add_header X-Cache SSD; > expires max; > } > > location ~ \.(flv)$ { > flv; > root /content/archive; > # aio on; > # directio 512; > # output_buffers 1 2m; > expires 7d; > valid_referers none blocked domain.com *.domain.com *. > facebook.com *.domain.com *.twitter.com *.domain.com *.gear3rd.net > domain.com *.domain.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; > if ($invalid_referer) { > return 403; > } > } > > > location ~ \.(mp4)$ { > mp4; > mp4_buffer_size 4M; > mp4_max_buffer_size 10M; > expires 7d; > root /content/archive; > valid_referers none blocked domain.com *.domain.com *. 
> facebook.com *.domain.com *.twitter.com *.domain.com *.gear3rd.net > domain.com *.domain.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; > if ($invalid_referer) { > return 403; > } > } > > # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 > location ~ \.php$ { > root /content/archive; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_read_timeout 10000; > } > > location ~ /\.ht { > deny all; > } > > > location ~ ^/(status|ping)$ { > access_log off; > allow 127.0.0.1; > > deny all; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > fastcgi_pass 127.0.0.1:9000; > } > } > > Server Specs : > > L5630 (8cores, 16threads) > RAM 64GB > 12 x 3TB @ SATA Hardware Raid-6 > > Here's the screenshot of server load during 15K connections: > > http://prntscr.com/70l68q > > Regards. > Shahzaib > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Sun May 3 11:54:07 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 May 2015 14:54:07 +0300 Subject: Windows build failing - NMAKE : fatal error U1073 In-Reply-To: References: <20150502164806.GI32429@mdounin.ru> Message-ID: <20150503115407.GJ32429@mdounin.ru> Hello! On Sat, May 02, 2015 at 03:52:50PM -0300, Thiago Farina wrote: > On Sat, May 2, 2015 at 1:48 PM, Maxim Dounin wrote: > > Hello! 
> > > > On Sat, May 02, 2015 at 01:51:20PM +1000, Alec Taylor wrote: > > > >> Following your guide: http://nginx.org/en/docs/howto_build_on_win32.html > >> > >> With a few edits, I then ran: > >> ./configure --with-cc=cl --builddir=objs --prefix= > >> --conf-path=conf/nginx.conf --pid-path=logs/nginx.pid > >> --http-log-path=logs/access.log --error-log-path=logs/error.log > >> --sbin-path=nginx.exe > >> --http-client-body-temp-path=temp/client_body_temp > >> --http-proxy-temp-path=temp/proxy_temp > >> --http-fastcgi-temp-path=temp/fastcgi_temp > >> --with-cc-opt=-DFD_SETSIZE=1024 --with-pcre=objs/lib/pcre-8.36 > >> --with-zlib=objs/lib/zlib-1.2.8 --with-openssl=objs/lib/openssl-1.0.1e > >> --with-select_module --with-http_ssl_module --with-ipv6 > >> --add-module=../ngx-fancyindex > >> > >> Which succeeded. Finally I ran: `nmake -f objs/Makefile`, which gave > >> the following output: > >> > >> Microsoft (R) Program Maintenance Utility Version 12.00.21005.1 > >> Copyright (C) Microsoft Corporation. All rights reserved. > >> > >> NMAKE : fatal error U1073: don't know how to make 'src/os/win32/ngx_win32_config > >> .h' > >> Stop. > >> > >> How do I build nginx on Windows? > > > > It looks like you are trying to build from sources available in > > release tarballs. This won't work. > This does not seem logical. Why it should not work? > > It is common to build and install from release tarballs (at least on > Unix). Why it should be different on Windows? Historically, source tarballs does not contain some source files needed to build nginx on Windows. Note that nginx/Windows is still considered to be "beta" and the build process is cumbersome. 
See here for more details: http://nginx.org/en/docs/windows.html http://nginx.org/en/docs/howto_build_on_win32.html -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Sun May 3 12:18:33 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 3 May 2015 15:18:33 +0300 Subject: error logging with nginx and uWSGI In-Reply-To: References: Message-ID: <20150503121833.GL32429@mdounin.ru> Hello! On Fri, May 01, 2015 at 12:04:51PM -0400, Larry Martell wrote: > Prior to now, all the django projects I've worked on have used apache > and WSGI. With those, when an error occurred I went to > /var/log/httpd/error_log and details of the error were clearly there. > > Now for the first time I am working on a project using nginx and > uWSGI. Here, the /var/log/nginx/error_log is always empty. And the > uWSGI logs have some messages, but nothing about any errors. In this > setup where would I go to find the errors that > /var/log/httpd/error_log logs? Is there some config setting that is > suppressing the nginx errors? The uwsgi protocol doesn't include an error stream from an application to nginx. That is, if you are looking for errors generated by your application, you should look into uWSGI logs. nginx's own logs can be controlled using the error_log directive, see http://nginx.org/r/error_log. But I suspect it's not what you are looking for, see above. -- Maxim Dounin http://nginx.org/ From paulnpace at gmail.com Sun May 3 16:56:09 2015 From: paulnpace at gmail.com (Paul N. 
Pace) Date: Sun, 3 May 2015 09:56:09 -0700 Subject: error running shared postrotate script Message-ID: Ever since upgrading to 1.8.0 I get the following report from Cron: /etc/cron.daily/logrotate: error: error running shared postrotate script for '/var/log/nginx/*.log ' error: error running shared postrotate script for '/var/ www.example.com/logs/*.log ' run-parts: /etc/cron.daily/logrotate exited with return code 1 Contents of /etc/logrotate.d/nginx: /var/log/nginx/*.log { weekly missingok rotate 52 compress delaycompress notifempty create 0640 www-data adm sharedscripts prerotate if [ -d /etc/logrotate.d/httpd-prerotate ]; then \ run-parts /etc/logrotate.d/httpd-prerotate; \ fi; \ endscript postrotate invoke-rc.d nginx rotate >/dev/null 2>&1 endscript } /var/www/example.com/logs/*.log { daily missingok rotate 36500 compress delaycompress notifempty create 0640 www-data adm sharedscripts prerotate if [ -d /etc/logrotate.d/httpd-prerotate ]; then \ run-parts /etc/logrotate.d/httpd-prerotate; \ fi; \ endscript postrotate invoke-rc.d nginx rotate >/dev/null 2>&1 endscript } There are numerous .../example.com/... directories in the config file, but I have had this configuration for ages, and the update to 1.8.0 did not attempt to make any changes to this file. There is a bug report (dated 2015-05-01) at Launchpad that appears identical to my issue: https://bugs.launchpad.net/nginx/+bug/1450770 Are there any workarounds or configuration changes to correct this issue? Thanks! 
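One commonly suggested workaround for a failing `invoke-rc.d nginx rotate` is to bypass the init script in postrotate and send the nginx master process the USR1 signal, which makes it reopen its log files. This is only a sketch: the pid-file path /run/nginx.pid is an assumption and should be checked against the local build's --pid-path.

```conf
# Hypothetical replacement for the postrotate sections above:
postrotate
    # USR1 tells the running nginx master process to reopen its log files
    [ -f /run/nginx.pid ] && kill -USR1 "$(cat /run/nginx.pid)"
endscript
```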
Paul System configuration: Ubuntu 12.0.4.5 LTS (GNU/Linux 3.2.0-82-virtual x86_64) Nginx installed from PPA https://launchpad.net/~nginx/+archive/ubuntu/stable # nginx -V built with OpenSSL 1.0.1 14 Mar 2012 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector --param=ssp-buffer-size=4 -Wformat -Wformat-security -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-Bsymbolic-functions -fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module --add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-auth-pam --add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-dav-ext-module --add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-echo --add-module=/build/buildd/nginx-1.8.0/debian/modules/nginx-upstream-fair --add-module=/build/buildd/nginx-1.8.0/debian/modules/ngx_http_substitutions_filter_module -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Sun May 3 18:12:16 2015 From: nginx-forum at nginx.us (goldfluss) Date: Sun, 03 May 2015 14:12:16 -0400 Subject: Module : showing content of the folder Message-ID: Hello, I'm trying to develop a small module for nginx that verify the url. If it's not valid, I return NGX_HTTP_FORBIDDEN in the handler and that works fine. But if url is correct i want to send the content of the location to the client. I tested to return different values, but it doesn't work (The module is activated in the same folder as the content I want to show, so I can't use ngx_http_internal_redirect) What I want to know, is if I need to configure the header / output myself or if i have to set a variable in ngx_http_request_t to 0? Thanks for your help, Goldfluss Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258587,258587#msg-258587 From nginx-forum at nginx.us Mon May 4 00:32:37 2015 From: nginx-forum at nginx.us (gariac) Date: Sun, 03 May 2015 20:32:37 -0400 Subject: Installed nginx with iredmail; how to add web content & test without DNS change In-Reply-To: <20150502075402.GJ29618@daoine.org> References: <20150502075402.GJ29618@daoine.org> Message-ID: Thanks halozen and Francis. Knowing where to read in the manual is half the battle. I think I will tackle combining the iredmail code into my own website since at the moment I only have one domain I am going to put on this server. With flexibility comes head scratching. It seems like nginx has two ways of doing everything. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258551,258588#msg-258588 From dewanggaba at xtremenitro.org Mon May 4 04:43:10 2015 From: dewanggaba at xtremenitro.org (Dewangga) Date: Mon, 04 May 2015 11:43:10 +0700 Subject: Deny referrer using map directive Message-ID: <5546F8DE.50403@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello! 
I have a map directive like this : map $http_referer $badboys { hostnames; default 0; "~*hitleap.com" 1; } and already defined on server block like this : server { .. skip .. if ($badboys) { return 406; } .. skip .. } but, if I tried to access them using given referral, still got HTTP 200. $ curl -I https://domain.name -L -e hitleap.com | grep 200 % Total % Received % Xferd Average Speed Time Time Time Current Dload Upload Total Spent Left Speed 0 0 0 0 0 0 0 0 --:--:-- --:--:-- - --:--:-- 0 HTTP/1.1 200 OK Is there any additional configuration needed? Any help will be appreciated :) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32) iQEcBAEBAgAGBQJVRvjdAAoJEF1+odKB6YIxqr0IALSocMLPk584ZuGO82IQ8gw6 //GBuFr1nF15ov9fk3wgcae4S9p4InVWGyQ4y6tvgmrHaaiiZQUg9I99E+P9t7x/ cgdobsy7pg0UZGRsEZSVY5EZhELLyucCZ9p+p0gD/m78JeSHvRFSgPze3jfK5xtv DsGxu0j8Lk/W7lVqO48mVQTsbsv8mIxGPq5YrReNjXNaRW6XrsW78r8KQH4doTp4 +h3Q0ZfHcl3U28+0I+lmWEAga7/2m9cpRMqoqforvvdOHw/CQStCnPMhLa6ASS8s kXFqa8xkwjfdoLigGBWjd8hQnHjjBOVdhBUMTTu3i+tAU29H6lJgsipIXe4DWgo= =pYP1 -----END PGP SIGNATURE----- From francis at daoine.org Mon May 4 08:10:45 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 4 May 2015 09:10:45 +0100 Subject: Connection timeout from work, working anywhere else In-Reply-To: References: <1201d9f2aac5d213dd3547a58e1f3617.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150504081045.GM29618@daoine.org> On Sat, May 02, 2015 at 10:52:31AM -0400, GuiPoM wrote: Hi there, > I have no clue why it is not working. So I took the computer from my office > location and I did the test from another location, with a standard internet > access: I have no issue at all. > > It means that only the network between the computer and the nginx server > makes this error appears, it must have nothing to do with the computer. I also do not know why it fails. Perhaps gathering more information will help find the cause. When it fails: * what request is made? * what response is received? 
(application error, network timeout, something else) When it works, and the same request is made: * what response is received? If you can arrange that you can make the failing request on demand, then you will probably have a better chance of identifying the problem. ERR_TUNNEL_CONNECTION_FAILED can be associated with accessing https through a proxy server. Do you use a proxy server on the "failing" connection attempt? Does that proxy server have anything useful in its logs? > I am still hoping to find some help here. I am pretty sure that the "truc:4321", referrer: "https://truc:4321/jeedom/index.php?v=d&p=history"> > is causing this issue, but I have no idea if this is a configuration of the > nginx server and if I can do something around that. If it matters: what is the configuration of the nginx server? f -- Francis Daly francis at daoine.org From francis at daoine.org Mon May 4 08:22:34 2015 From: francis at daoine.org (Francis Daly) Date: Mon, 4 May 2015 09:22:34 +0100 Subject: Deny referrer using map directive In-Reply-To: <5546F8DE.50403@xtremenitro.org> References: <5546F8DE.50403@xtremenitro.org> Message-ID: <20150504082234.GN29618@daoine.org> On Mon, May 04, 2015 at 11:43:10AM +0700, Dewangga wrote: Hi there, > map $http_referer $badboys { > hostnames; > default 0; > "~*hitleap.com" 1; > } For info: This should work as-is; but when using "hostnames", you probably don't need the regex match. Just ".hitleap.com" will do what you possibly want. (It is not the same: both will block a.hitleap.com; but only one will block ahitleap.com or hitleap.com.a.) > but, if I tried to access them using given referral, still got HTTP 200. > $ curl -I https://domain.name -L -e hitleap.com | grep 200 It works for me, using http: (because I don't have a test https: server to hand). What happens when you leave all of the "...skip..." parts empty? > Is there any additional configuration needed? 
Do your logs show that this request was handled in the server{} block that you think it was handled in? f -- Francis Daly francis at daoine.org From dewanggaba at xtremenitro.org Mon May 4 08:27:56 2015 From: dewanggaba at xtremenitro.org (Dewangga) Date: Mon, 04 May 2015 15:27:56 +0700 Subject: Deny referrer using map directive In-Reply-To: <20150504082234.GN29618@daoine.org> References: <5546F8DE.50403@xtremenitro.org> <20150504082234.GN29618@daoine.org> Message-ID: <55472D8C.1060900@xtremenitro.org> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA1 Hello! On 5/4/2015 15:22, Francis Daly wrote: > On Mon, May 04, 2015 at 11:43:10AM +0700, Dewangga wrote: > > Hi there, > >> map $http_referer $badboys { hostnames; default 0; >> "~*hitleap.com" 1; } > > For info: > > This should work as-is; but when using "hostnames", you probably > don't need the regex match. Just ".hitleap.com" will do what you > possibly want. (It is not the same: both will block a.hitleap.com; > but only one will block ahitleap.com or hitleap.com.a.) You do the trick, just using ".hitleap.com" and the regex matched. $ curl -IL https://www.domain.name -e www2.hitleap.com HTTP/1.1 406 Not Acceptable Server: MCM-WS Date: Mon, 04 May 2015 08:30:42 GMT Content-Type: text/html Content-Length: 172 Connection: keep-alive > >> but, if I tried to access them using given referral, still got >> HTTP 200. $ curl -I https://domain.name -L -e hitleap.com | grep >> 200 > > It works for me, using http: (because I don't have a test https: > server to hand). What happens when you leave all of the > "...skip..." parts empty? > >> Is there any additional configuration needed? > > Do your logs show that this request was handled in the server{} > block that you think it was handled in? 
> > f > Thanks in a bunch Francis :) -----BEGIN PGP SIGNATURE----- Version: GnuPG v2.0.17 (MingW32) iQEcBAEBAgAGBQJVRy2MAAoJEF1+odKB6YIx5zoH/RlUa3u2CIZHTVYYZuQQomEw s0Ul7D35GNmMWCon2wJDM0fKQKllSWLt6ed/G3UQuVCof3sNd9S8o7cuvsNpSpW5 Vds+lKIRDK6JsNxrjWONoPKWL9iEkIjItwF2VWUHTXhFPBoNEvhD4IWabqhtj4CC ljaM6Tza8vOIWKBR7FTSwnSwKnXasax7mZwDP0/h+jca7k+KBN9fo2k59yCxZRjm iAsFfUQ4bCR9jbkE5tqOx+UI2/6QXYsl4I1tqFqUHggHA4t9Hkd5JvcmPIPocCQi I2ZHOVaU4k7KQfnQtsgnf3YttiOb35/je9085wSm1+uFAfodw3owQxl8eKGaBGs= =/yob -----END PGP SIGNATURE----- From nginx-forum at nginx.us Mon May 4 11:52:46 2015 From: nginx-forum at nginx.us (philipp) Date: Mon, 04 May 2015 07:52:46 -0400 Subject: sending 404 responses for epty objects. In-Reply-To: <20141211143011.GU45960@mdounin.ru> References: <20141211143011.GU45960@mdounin.ru> Message-ID: Hi Maxim, should this solution work? http://syshero.org/post/49594172838/avoid-caching-0-byte-files-on-nginx I have created a simple test setup like: map $upstream_http_content_length $flag_cache_empty { default 0; 0 1; } server { listen 127.0.0.1:80; server_name local; location /empty { return 200 ""; } location /full { return 200 "full"; } } server { listen 127.0.0.1:80; server_name cache; location / { proxy_pass http://127.0.0.1; proxy_cache_valid 200 404 1h; proxy_no_cache $flag_cache_empty; proxy_cache_bypass $flag_cache_empty; proxy_set_header Host local; add_header X-Cache-Status $upstream_cache_status; add_header X-Cache-Empty $flag_cache_empty; add_header X-Upstream-Content-Length $upstream_http_content_length; } } But the flag is always 0: vagrant at nginx-16-centos-64 bin]$ curl -v -H "Host: cache" http://localhost/empty * About to connect() to localhost port 80 (#0) * Trying 127.0.0.1... 
connected * Connected to localhost (127.0.0.1) port 80 (#0) > GET /empty HTTP/1.1 > User-Agent: curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.15.3 zlib/1.2.3 libidn/1.18 libssh2/1.4.2 > Accept: */* > Host: cache > < HTTP/1.1 200 OK < Server: nginx < Date: Mon, 04 May 2015 11:37:51 GMT < Content-Type: application/octet-stream < Content-Length: 0 < Connection: keep-alive < X-Cache-Status: MISS < X-Cache-Empty: 0 < X-Upstream-Content-Length: 0 < * Connection #0 to host localhost left intact * Closing connection #0 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255421,258603#msg-258603 From nginx-forum at nginx.us Mon May 4 12:09:38 2015 From: nginx-forum at nginx.us (philipp) Date: Mon, 04 May 2015 08:09:38 -0400 Subject: Feature Request proxy_cache_min_size and proxy_cache_max_size Message-ID: <4437674df22a0f98907e574aaa6a8c6b.NginxMailingListEnglish@forum.nginx.org> In order to solve this issue http://forum.nginx.org/read.php?2,255421,255438#msg-255438 two additional features would be cool: proxy_cache_min_size Syntax: proxy_cache_min_size number; Default: proxy_cache_min_size 1; Context: http, server, location Sets the minimal size in bytes for a response to be cached. proxy_cache_max_size Syntax: proxy_cache_max_size number; Default: proxy_cache_max_size ?; Context: http, server, location Sets the maximal size in bytes for a response to be cached. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258604,258604#msg-258604 From mdounin at mdounin.ru Mon May 4 14:27:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 4 May 2015 17:27:03 +0300 Subject: sending 404 responses for epty objects. In-Reply-To: References: <20141211143011.GU45960@mdounin.ru> Message-ID: <20150504142703.GR32429@mdounin.ru> Hello! On Mon, May 04, 2015 at 07:52:46AM -0400, philipp wrote: > Hi Maxim, > should this solution work? 
> http://syshero.org/post/49594172838/avoid-caching-0-byte-files-on-nginx > > I have created a simple test setup like: > > map $upstream_http_content_length $flag_cache_empty { > default 0; > 0 1; > } > > server { > listen 127.0.0.1:80; > > server_name local; > > location /empty { > return 200 ""; > } > location /full { > return 200 "full"; > } > } > > server { > listen 127.0.0.1:80; > > server_name cache; > > location / { > proxy_pass http://127.0.0.1; > proxy_cache_valid 200 404 1h; > proxy_no_cache $flag_cache_empty; > proxy_cache_bypass $flag_cache_empty; Removing proxy_cache_bypass should fix things for you. The problem is that proxy_cache_bypass will be evaluated before a request is sent to upstream and therefore before $upstream_http_content_length will be available. As a result $flag_cache_empty will always be 0. And, because map results are cached for entire request lifetime, proxy_no_cache will see the same value, 0. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Mon May 4 14:45:02 2015 From: nginx-forum at nginx.us (philipp) Date: Mon, 04 May 2015 10:45:02 -0400 Subject: sending 404 responses for epty objects. In-Reply-To: <20150504142703.GR32429@mdounin.ru> References: <20150504142703.GR32429@mdounin.ru> Message-ID: <1e3c7c4da2088aa369215d4c92733ad7.NginxMailingListEnglish@forum.nginx.org> Thanks for your help, removing the bypass solved this issue for me. This feature request would simplify such configurations: http://forum.nginx.org/read.php?2,258604 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255421,258617#msg-258617 From nginx-forum at nginx.us Mon May 4 18:19:45 2015 From: nginx-forum at nginx.us (GuiPoM) Date: Mon, 04 May 2015 14:19:45 -0400 Subject: Connection timeout from work, working anywhere else In-Reply-To: <20150504081045.GM29618@daoine.org> References: <20150504081045.GM29618@daoine.org> Message-ID: Thank you for your answer. I can reproduce on demand ! BUT I am not familiar with nginx. 
Could you give me some hints what to activate in order to provide useful information for debugging ? I must connect through a proxy server but I will never be able to get the logs from there. I can get some logs about request arriving to nginx and leaving it (but I already posted the only logs I am aware of). Next follows the config. Hope it will help ! [code] user www-data; worker_processes 4; pid /var/run/nginx.pid; events { worker_connections 768; # multi_accept on; } http { ## # Basic Settings ## sendfile on; tcp_nopush off; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; # server_tokens off; # server_names_hash_bucket_size 64; # server_name_in_redirect off; include /etc/nginx/mime.types; default_type application/octet-stream; ## # Logging Settings ## access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; ## # Gzip Settings ## gzip on; gzip_disable "msie6"; # gzip_vary on; # gzip_proxied any; # gzip_comp_level 6; # gzip_buffers 16 8k; # gzip_http_version 1.1; # gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript; ## # nginx-naxsi config ## # Uncomment it if you installed nginx-naxsi ## #include /etc/nginx/naxsi_core.rules; ## # nginx-passenger config ## # Uncomment it if you installed nginx-passenger ## #passenger_root /usr; #passenger_ruby /usr/bin/ruby; ## # Virtual Host Configs ## include /etc/nginx/conf.d/*.conf; include /etc/nginx/sites-enabled/*; } #mail { # # See sample authentication script at: # # http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript # # # auth_http localhost/auth.php; # # pop3_capabilities "TOP" "USER"; # # imap_capabilities "IMAP4rev1" "UIDPLUS"; # # server { # listen localhost:110; # protocol pop3; # proxy on; # } # # server { # listen localhost:143; # protocol imap; # proxy on; # } #} [/code] Thanks. 
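To capture more detail on the failing requests, debug-level error logging could be enabled in the configuration above. This is only a sketch: it assumes the nginx binary was built with --with-debug (nginx -V would confirm), and the client address shown is a made-up placeholder for the office's egress IP.

```nginx
# raise error_log verbosity (http or server context):
error_log /var/log/nginx/error.log debug;

# or restrict debug output to selected clients, inside the
# existing events {} block (also requires a --with-debug build):
events {
    worker_connections 768;
    debug_connection 203.0.113.10;   # placeholder office IP
}
```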
GuiPoM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258621#msg-258621 From nginx-forum at nginx.us Tue May 5 07:00:53 2015 From: nginx-forum at nginx.us (omverma) Date: Tue, 05 May 2015 03:00:53 -0400 Subject: %20 need to replace with - Message-ID: <8077cdf97964e4b4058f27d66bfde272.NginxMailingListEnglish@forum.nginx.org> Dear All, My website is being hit by some URLs that are not served properly by nginx due to some characters like %20. http://www.abc.com?param=test (Works fine) http://www.abc.com?param=test%20again (Does not work) It should be something like this: http://www.abc.com?param=test-again I want to replace %20 with - (Dash). Please suggest the best regex for this. Thanks, Om Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258629,258629#msg-258629 From nginx-forum at nginx.us Tue May 5 07:03:00 2015 From: nginx-forum at nginx.us (ikoulamathieu) Date: Tue, 05 May 2015 03:03:00 -0400 Subject: [alert] 6781#0: worker process xxxx exited on signal 11 Message-ID: <44c695378bd3def7a25bfbe47a16dd65.NginxMailingListEnglish@forum.nginx.org> Hi, I'm sorry for my bad English. We have put in place a web proxy server. It was functional with 8 vhosts with several self-signed SSL certificates. We then bought a real multi-domain SSL certificate from Geotrust and installed this certificate on the vhosts. Since installing this certificate, we receive an error in the Internet Explorer browser.
In the log, /var/log/nginx/error.log 2015/05/05 07:48:07 [alert] 18687#0: worker process 32496 exited on signal 11 2015/05/05 07:48:55 [alert] 6781#0: worker process 6782 exited on signal 11 2015/05/05 07:49:24 [alert] 6781#0: worker process 6783 exited on signal 11 dmesg [6186233.729153] nginx[6834]: segfault at 4 ip 08071076 sp bf8dc180 error 4 in nginx[8048000+b9000] [6186236.938752] nginx[6845]: segfault at 4 ip 08071076 sp bf8dc180 error 4 in nginx[8048000+b9000] [6186264.888254] nginx[6846]: segfault at 4 ip 08071076 sp bf8dc150 error 4 in nginx[8048000+b9000] My config: Server : Linux poivron 3.2.0-4-686-pae #1 SMP Debian 3.2.63-2 i686 GNU/Linux nginx version: nginx/1.2.1 packages installed ii nginx 1.2.1-2.2+wheezy3 all small, powerful, scalable web/proxy server ii nginx-common 1.2.1-2.2+wheezy3 all small, powerful, scalable web/proxy server - common files ii nginx-full 1.2.1-2.2+wheezy3 i386 nginx web/proxy server (standard version) I don't know how to resolve this issue. I don't know if this issue occurs only in the Internet Explorer browser. I tested with Chrome and Firefox; I get no errors with those browsers. Can you please help me with this issue? Thanks Regards, ikoulamathieu 
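A core dump would usually pinpoint the crashing code path. As a sketch (the directory path is an assumption; note also that nginx 1.2.1 is old, so an upgrade alone may fix the segfault), these main-context directives allow worker processes to dump core:

```nginx
# main (top-level) context of nginx.conf
worker_rlimit_core  500m;       # permit core files up to 500 MB
working_directory   /var/tmp/;  # writable directory to receive them
```

The resulting core file could then be inspected with gdb against the nginx binary (e.g. `gdb /usr/sbin/nginx /var/tmp/core` followed by `backtrace`).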
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258636,258636#msg-258636 From nginx-forum at nginx.us Tue May 5 07:42:45 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 05 May 2015 03:42:45 -0400 Subject: PCRE with NGINX In-Reply-To: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> References: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <05bc6d3c927f77cba051cc0ae7d44d77.NginxMailingListEnglish@forum.nginx.org> \{ Would do it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258636,258637#msg-258637 From nginx-forum at nginx.us Tue May 5 07:49:20 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 05 May 2015 03:49:20 -0400 Subject: PCRE with NGINX In-Reply-To: <05bc6d3c927f77cba051cc0ae7d44d77.NginxMailingListEnglish@forum.nginx.org> References: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> <05bc6d3c927f77cba051cc0ae7d44d77.NginxMailingListEnglish@forum.nginx.org> Message-ID: <871ac6303005e0363a46d2043d49cbba.NginxMailingListEnglish@forum.nginx.org> I tried that already. I don't get errors, but the functionality does not work. So I am not sure if PCRE is not liking it or something else... Still debugging. 
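As the rest of the thread works out, the usual fix is to enclose the whole pattern in double quotes so that the configuration parser does not mistake { for the start of a block. A sketch (the variable and capture names are invented here, since the original post's capture name was lost in archiving):

```nginx
map $uri $middle {
    default                      "";
    # quoting the regex lets "{" and "}" through the config parser
    "~^.{1}(?P<middle>.{9}).*$"  $middle;
}
```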
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258636,258638#msg-258638 From nginx-forum at nginx.us Tue May 5 08:24:54 2015 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 05 May 2015 04:24:54 -0400 Subject: PCRE with NGINX In-Reply-To: <871ac6303005e0363a46d2043d49cbba.NginxMailingListEnglish@forum.nginx.org> References: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> <05bc6d3c927f77cba051cc0ae7d44d77.NginxMailingListEnglish@forum.nginx.org> <871ac6303005e0363a46d2043d49cbba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <464c0abd867ac2d7527c5ec4f5e258de.NginxMailingListEnglish@forum.nginx.org> This works: ~*\{.*\:\; 1; with map, do note that a 400 in logging means something else which precedes map handling. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258636,258644#msg-258644 From nginx-forum at nginx.us Tue May 5 08:46:32 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 05 May 2015 04:46:32 -0400 Subject: PCRE with NGINX In-Reply-To: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> References: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7f9e696c0b560241b9c860100c06e66e.NginxMailingListEnglish@forum.nginx.org> Thanks for the explanation. Sorry, not able to understand you fully. My PCRE is something like this (have tested it through https://regex101.com/) .+\..{3}(?P.{6})[^\.]*$ Now, to achieve this what should I put in the nginx.conf Please let me know. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258636,258646#msg-258646 From arut at nginx.com Tue May 5 08:48:53 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Tue, 5 May 2015 11:48:53 +0300 Subject: Nginx does not delete old cached files In-Reply-To: <62ed8c24f41cf5fa3c32fe96e343e5d7.NginxMailingListEnglish@forum.nginx.org> References: <62ed8c24f41cf5fa3c32fe96e343e5d7.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, 1. 
Please make sure the cache manager process does not delete the files. Attach with strace and make sure there are no file deleting syscalls. It's still possible that cache files appear too fast and the nginx cache manager just can't delete the files at the same rate. 2. Is it possible for you to provide a debug log? http://nginx.org/en/docs/debugging_log.html Debug log is needed since nginx start until the lines starting with "http file cache forced expire: #1" appear in it. On 27 Apr 2015, at 11:53, atrus wrote: > Hi, > > I have nginx serve as image cached, here is the main config : > > proxy_cache_path /etc/nginx/cache-media levels=1:2 keys_zone=media:1000m > inactive=2y max_size=100g; > proxy_temp_path /etc/nginx/cache-media/tmp; > > /dev/sdc1 is an intel SSD with ext4 mounted (-o noatime, nodiratime). > > It looks like that the nginx do not evict the old cached files : > > Disk usage on /dev/sdc1 : > > # df -h /dev/sdc1 > Filesystem Size Used Avail Use% Mounted on > /dev/sdc1 110G 102G 7.5G 94% /etc/nginx/cache-media > > the max_size=100g but the real size has been raise up to 102GB. 
> > Sometime it full up to 100% of the sdc1 disk and get errror : 2015/04/27 > 12:03:55 [crit] 7708#0: *18862126 pwrite() > "/etc/nginx/cache-media/tmp/0004826203" failed (28: No space left on device) > while reading upstream, > > and I need to manually remove by find && rm -rf > > My nginx -V : > > ginx version: nginx/1.7.10 > built by gcc 4.4.7 20120313 (Red Hat 4.4.7-3) (GCC) > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log > --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid > --lock-path=/var/run/nginx.lock > --http-client-body-temp-path=/var/cache/nginx/client_temp > --http-proxy-temp-path=/var/cache/nginx/proxy_temp > --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp > --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp > --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx > --with-http_ssl_module --with-http_realip_module --with-http_addition_module > --with-http_sub_module --with-http_dav_module --with-http_flv_module > --with-http_mp4_module --with-http_gunzip_module > --with-http_gzip_static_module --with-http_random_index_module > --with-http_secure_link_module --with-http_stub_status_module > --with-http_auth_request_module --with-mail --with-mail_ssl_module > --with-file-aio --with-ipv6 --with-http_spdy_module --with-cc-opt='-O2 -g > -pipe -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector > --param=ssp-buffer-size=4 -m64 -mtune=generic' > > Plz tell me what could be the problem ? > > Thank you. 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258396,258396#msg-258396 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Roman Arutyunyan From nginx-forum at nginx.us Tue May 5 08:50:28 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Tue, 05 May 2015 04:50:28 -0400 Subject: PCRE with NGINX In-Reply-To: <7f9e696c0b560241b9c860100c06e66e.NginxMailingListEnglish@forum.nginx.org> References: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> <7f9e696c0b560241b9c860100c06e66e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <11221728b983768be7be1694f23bb687.NginxMailingListEnglish@forum.nginx.org> I looked @ http://nginx.org/en/docs/http/server_names.html It says to put it inside " ", and once I did that it is working. Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258636,258649#msg-258649 From nginx-forum at nginx.us Tue May 5 10:01:26 2015 From: nginx-forum at nginx.us (vincent123456) Date: Tue, 05 May 2015 06:01:26 -0400 Subject: FastCGI sent in stderr: "Primary script unknown" Message-ID: <9c288334a690d84938520108d7311843.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to configure a vhost with Nginx and PHP-FPM. I have an application with Symfony 2.6, and I followed this tutorial: http://symfony.com/doc/current/cookbook/configuration/web_server_configuration.html#nginx I have this error : 2015/05/05 11:48:32 [error] 5181#0: *5 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: myserver.local, request: "GET / HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "myserver.local" I followed various tutorials on this problem (google is my friend !!) but it does not work... 
My configuration : -> Vhost server { server_name myserver.local; root /datas/www/sf_project/web; location / { try_files $uri /app.php$is_args$args; } # DEV location ~ ^/(app_dev|config)\.php(/|$) { fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; } # PROD location ~ ^/app\.php(/|$) { fastcgi_pass 127.0.0.1:9000; fastcgi_split_path_info ^(.+\.php)(/.*)$; include fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_param HTTPS off; internal; } error_log /var/log/nginx/error.log; access_log /var/log/nginx/access.log; } -> Nginx.conf user myuser; -> php-fpm conf : user = myuser; group = myuser; -> ls -al /datas/www drwxr-xr-x. 8 myuser myuser 4096 5 mai 11:36 sf_project -> Permission : /datas/www/sf_project => 755 /datas/www/sf_project/web/app.php => 644 -> OS / Conf : Fedora 21 / Nginx 1.6.3 / PHP 5.6.8 Thx for your help Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258653,258653#msg-258653 From mdounin at mdounin.ru Tue May 5 11:52:34 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 May 2015 14:52:34 +0300 Subject: [alert] 6781#0: worker process xxxx exited on signal 11 In-Reply-To: <44c695378bd3def7a25bfbe47a16dd65.NginxMailingListEnglish@forum.nginx.org> References: <44c695378bd3def7a25bfbe47a16dd65.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150505115233.GB98215@mdounin.ru> Hello! On Tue, May 05, 2015 at 03:03:00AM -0400, ikoulamathieu wrote: > Hi, > > I'm sorry my bad english > > We are put in place a proxy web server. It's functionnal with 8 vhosts with > severals certificats SSL autosignated. It was functionnal. We bought a true > certificat SSL multidomain to Geotrust and we installed this certificats on > vhosts. > Since to install this certifcat, on Internet Explorer broswer, we received > an error. 
In the log, > > /var/log/nginx/error.log > > > 2015/05/05 07:48:07 [alert] 18687#0: worker process 32496 exited on signal > 11 > 2015/05/05 07:48:55 [alert] 6781#0: worker process 6782 exited on signal 11 > 2015/05/05 07:49:24 [alert] 6781#0: worker process 6783 exited on signal 11 This sounds similar to this ticket: http://trac.nginx.org/nginx/ticket/235 If it indeed the case, try the workaround as suggested in the first comment. If it doesn't help, please see this page for some basic debugging hints: http://wiki.nginx.org/Debugging -- Maxim Dounin http://nginx.org/ From larry.martell at gmail.com Tue May 5 12:08:42 2015 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 5 May 2015 08:08:42 -0400 Subject: error logging with nginx and uWSGI In-Reply-To: <20150503121833.GL32429@mdounin.ru> References: <20150503121833.GL32429@mdounin.ru> Message-ID: On Sun, May 3, 2015 at 8:18 AM, Maxim Dounin wrote: > Hello! > > On Fri, May 01, 2015 at 12:04:51PM -0400, Larry Martell wrote: > >> Prior to now, all the django projects I've worked on have used apache >> and WSGI. With those, when an error occurred I went to >> /var/log/httpd/error_log and details of the error were clearly there. >> >> Now for the first time I am working on a project using nginx and >> uWSGI. Here, the /var/log/nginx/error_log is always empty. And the >> uWSGI logs have some messages, but nothing about any errors. In this >> setup where would I go to find the errors that >> /var/log/httpd/error_log logs? Is there some config setting that is >> suppressing the nginx errors? > > The uwsgi protocol doesn't include an error stream from an > application to nginx. That is, if you are looking for errors > generated by your application, you should look into uWSGI logs. > > Own nginx logs can be controlled using the error_log directive, > see http://nginx.org/r/error_log. But I suspect it's not what are > you looking for, see above. 
When my app has, for example, a syntax error, then yes, that appears in the uWSGI log. But what I was talking about are the HTTP errors like a 500 or a 400. When I get those there's nothing in the logs. From nginx-forum at nginx.us Tue May 5 12:20:35 2015 From: nginx-forum at nginx.us (meteor8488) Date: Tue, 05 May 2015 08:20:35 -0400 Subject: how to separate robot access log and human access log In-Reply-To: <20150430174453.GH29618@daoine.org> References: <20150430174453.GH29618@daoine.org> Message-ID: thanks for your reply. I know that I can use if to enable conditional logging. But what I want to do is if $spiderbot=0, then log to location_access.log if $spiderbot=1, then log to spider_access.log. And I don't want the same logs write to different files. How can I do that? thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258417,258656#msg-258656 From francis at daoine.org Tue May 5 12:20:44 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 5 May 2015 13:20:44 +0100 Subject: PCRE with NGINX In-Reply-To: <11221728b983768be7be1694f23bb687.NginxMailingListEnglish@forum.nginx.org> References: <5faacb200fb4a6fde7e645a03884d79f.NginxMailingListEnglish@forum.nginx.org> <7f9e696c0b560241b9c860100c06e66e.NginxMailingListEnglish@forum.nginx.org> <11221728b983768be7be1694f23bb687.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150505122044.GP29618@daoine.org> On Tue, May 05, 2015 at 04:50:28AM -0400, nginxsantos wrote: Hi there, > I looked @ http://nginx.org/en/docs/http/server_names.html > It sames put it inside " " and once I did that it is working Yes - "wrap the regex in double quotes" is the right answer. This line does appear in the documentation for "rewrite" and for "if"; but not for "location" or "map". nginx documentation improvement suggestion: It may be worth adding it to those (and anywhere else that uses regexes); or to link from everywhere to a "regex in nginx" document. (Yes, I know "patches welcome", and I'm not providing any. 
I'm not sure whether "duplicate" or "link" is better.) Thanks, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue May 5 12:30:16 2015 From: nginx-forum at nginx.us (ikoulamathieu) Date: Tue, 05 May 2015 08:30:16 -0400 Subject: [alert] 6781#0: worker process xxxx exited on signal 11 In-Reply-To: <20150505115233.GB98215@mdounin.ru> References: <20150505115233.GB98215@mdounin.ru> Message-ID: <8861d88c0bd1804c72b9fe5c65b9241c.NginxMailingListEnglish@forum.nginx.org> Hi, thanks for your response. Indeed, I had found that article, and also this one: https://andreastan.com/fixing-nginx-random-segfaults-safari-cant-access/. In fact, I added the directive ssl_session_cache shared:SSL:600m; to all vhosts in the nginx configuration. I put this directive on five vhosts; now it works correctly in all browsers and I no longer see the errors in the logs. Thanks for your help. Regards. Mathieu Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258631,258658#msg-258658 From nginx-forum at nginx.us Tue May 5 12:38:16 2015 From: nginx-forum at nginx.us (meteor8488) Date: Tue, 05 May 2015 08:38:16 -0400 Subject: How to block fake google spider and fake web browser access? Message-ID: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> Hi All, Recently I found that some guys are trying to mirror my website. They are doing this in two ways: 1. Pretending to be Google spiders. Access logs are as follows: 89.85.93.235 - - [05/May/2015:20:23:16 +0800] "GET /robots.txt HTTP/1.0" 444 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "66.249.79.138" 79.85.93.235 - - [05/May/2015:20:23:34 +0800] "GET /robots.txt HTTP/1.0" 444 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)" "66.249.79.154" The http_x_forwarded_for addresses are Google addresses. 2. Pretending to be a normal web browser. 
I'm trying to use below configuration to block their access: For 1 above, I'll check X_forward_for address. If user agent is spider, and X_forward_for is not null. Then block. I'm using map $http_x_forwarded_for $xf { default 1; "" 0; } map $http_user_agent $fakebots { default 0; "~*bot" $xf; "~*bing" $xf; "~*search" $xf; } if ($fakebots) { return 444; } With this configuration, it seems the fake google spider can't access the root of my website. But they can still access my php files, and they can't access and js or css files. Very strange. I don't know what's wrong. 2. For user-agent who declare they are not spiders. I'll use ngx_lua to generate a random value and add the value into cookie, and then check whether they can send this value back or not. If they can't send it back, then it means that they are robot and block access. map $http_user_agent $ifbot { default 0; "~*Yahoo" 1; "~*archive" 1; "~*search" 1; "~*Googlebot" 1; "~Mediapartners-Google" 1; "~*bingbot" 1; "~*msn" 1; "~*rogerbot" 3; "~*ChinasoSpider" 3; } if ($ifbot = "0") { set $humanfilter 1; } #below section is to exclude flash upload if ( $request_uri !~ "~mod\=swfupload\&action\=swfupload" ) { set $humanfilter "${humanfilter}1"; } if ($humanfilter = "11"){ rewrite_by_lua ' local random = ngx.var.cookie_random if(random == nil) then random = math.random(999999) end local token = ngx.md5("hello" .. ngx.var.remote_addr .. random) if (ngx.var.cookie_token ~= token) then ngx.header["Set-Cookie"] = {"token=" .. token, "random=" .. random} return ngx.redirect(ngx.var.scheme .. "://" .. ngx.var.host .. ngx.var.request_uri) end '; } But it seems that with above configuration, google bot is also blocked while it shouldn't. Any one can help? 
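For what it's worth, the cookie-token check in the ngx_lua block above reduces to a simple hash round-trip, sketched below with plain shell and md5sum (the secret "hello" comes from the config above; the client IP and random value are made-up examples):

```shell
# The lua snippet computes: token = md5(secret .. client_ip .. random)
secret=hello
ip=203.0.113.7   # example client address
rand=123456      # what the "random" cookie would hold

token=$(printf '%s%s%s' "$secret" "$ip" "$rand" | md5sum | cut -d' ' -f1)
echo "$token"    # 32 hex chars; a browser that echoes both cookies back
                 # lets the server recompute and match this value
```

Note that a client which never returns cookies can never pass this check — and legitimate crawlers generally do not return cookies either, which is why the $ifbot exemption has to fire reliably before the lua check ever runs.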
Thanks Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258659,258659#msg-258659 From nginx-forum at nginx.us Tue May 5 13:07:41 2015 From: nginx-forum at nginx.us (meteor8488) Date: Tue, 05 May 2015 09:07:41 -0400 Subject: How to block fake google spider and fake web browser access? In-Reply-To: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> References: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5e511134bb4675df4c8c760544c1a169.NginxMailingListEnglish@forum.nginx.org> It seems that I can't edit my post, so I have to post my question here: I tried to use "deny" to deny access from an IP, but it seems that it can still access my server. In my http part: deny 69.85.92.0/23; deny 69.85.93.235; But when I check the log, I can still find 69.85.93.235 - - [05/May/2015:19:44:22 +0800] "GET /thread-1251687-1-1.html HTTP/1.0" 302 154 "http://www.example.com" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" "123.125.71.107" 69.85.93.235 - - [05/May/2015:19:50:06 +0800] "GET /thread-1072432-1-1.html HTTP/1.0" 302 154 "http://www.example.com" "Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" "220.181.108.151" It seems deny is not working. Can anyone help? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258659,258660#msg-258660 From pchychi at gmail.com Tue May 5 13:19:52 2015 From: pchychi at gmail.com (Payam Chychi) Date: Tue, 5 May 2015 06:19:52 -0700 Subject: How to block fake google spider and fake web browser access? In-Reply-To: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> References: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7097A45CAE3849A681E4F90CA7F86CC4@gmail.com> Hey, Why not just compare their X-Forwarded-For vs the connecting IP? If they don't match and it's a bot, drop it. 
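Plain nginx config has no direct "variable A != variable B" test, so a faithful version of that comparison needs a little lua (which this thread is already using). A hedged, untested sketch of the idea:

```nginx
# Hypothetical sketch: drop requests whose User-Agent claims to be a
# bot but which arrive via a proxy whose X-Forwarded-For does not
# match the connecting address. Requires ngx_lua.
location / {
    access_by_lua '
        local ua  = (ngx.var.http_user_agent or ""):lower()
        local xff = ngx.var.http_x_forwarded_for
        if ua:find("bot") and xff and xff ~= ngx.var.remote_addr then
            return ngx.exit(444)
        end
    ';
}
```

One caveat: the genuine Googlebot connects directly (no X-Forwarded-For at all), so this only filters proxied fakes; verifying $remote_addr itself (e.g. via reverse DNS, as discussed later in the thread) is still needed for a positive whitelist.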
-- Payam Chychi Network Engineer / Security Specialist On Tuesday, May 5, 2015 at 5:38 AM, meteor8488 wrote: > Hi All, > > Recently I found that someguys are trying to mirror my website. They are > doing this in two ways: > > 1. Pretend to be google spiders . Access logs are as following: > > 89.85.93.235 - - [05/May/2015:20:23:16 +0800] "GET /robots.txt HTTP/1.0" 444 > 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" "66.249.79.138" > 79.85.93.235 - - [05/May/2015:20:23:34 +0800] "GET /robots.txt HTTP/1.0" 444 > 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" "66.249.79.154" > > The http_x_forwarded_for address are google addresses. > > 2. Pretend to be a normal web browser. > > > I'm trying to use below configuration to block their access: > > > > For 1 above, I'll check X_forward_for address. If user agent is spider, and > X_forward_for is not null. Then block. > I'm using > > map $http_x_forwarded_for $xf { > default 1; > "" 0; > } > map $http_user_agent $fakebots { > default 0; > "~*bot" $xf; > "~*bing" $xf; > "~*search" $xf; > } > if ($fakebots) { > return 444; > } > > With this configuration, it seems the fake google spider can't access the > root of my website. But they can still access my php files, and they can't > access and js or css files. Very strange. I don't know what's wrong. > > 2. For user-agent who declare they are not spiders. I'll use ngx_lua to > generate a random value and add the value into cookie, and then check > whether they can send this value back or not. If they can't send it back, > then it means that they are robot and block access. 
> > map $http_user_agent $ifbot { > default 0; > "~*Yahoo" 1; > "~*archive" 1; > "~*search" 1; > "~*Googlebot" 1; > "~Mediapartners-Google" 1; > "~*bingbot" 1; > "~*msn" 1; > "~*rogerbot" 3; > "~*ChinasoSpider" 3; > } > > if ($ifbot = "0") { > set $humanfilter 1; > } > #below section is to exclude flash upload > if ( $request_uri !~ "~mod\=swfupload\&action\=swfupload" ) { > set $humanfilter "${humanfilter}1"; > } > > if ($humanfilter = "11"){ > rewrite_by_lua ' > local random = ngx.var.cookie_random > if(random == nil) then > random = math.random(999999) > end > local token = ngx.md5("hello" .. ngx.var.remote_addr .. random) > if (ngx.var.cookie_token ~= token) then > ngx.header["Set-Cookie"] = {"token=" .. token, "random=" .. random} > return ngx.redirect(ngx.var.scheme .. "://" .. ngx.var.host .. > ngx.var.request_uri) > end > '; > } > But it seems that with above configuration, google bot is also blocked while > it shouldn't. > > > Any one can help? > > Thanks > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258659,258659#msg-258659 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 5 13:53:27 2015 From: nginx-forum at nginx.us (meteor8488) Date: Tue, 05 May 2015 09:53:27 -0400 Subject: How to block fake google spider and fake web browser access? In-Reply-To: <7097A45CAE3849A681E4F90CA7F86CC4@gmail.com> References: <7097A45CAE3849A681E4F90CA7F86CC4@gmail.com> Message-ID: <0f39e14ac1e8af0f3fa6456a78977d9d.NginxMailingListEnglish@forum.nginx.org> Thanks for your suggestion. My thought is 1. Is it a robot? 2. If yes, then does't it have a X_forward_IP? 3. If yes, then deny. Your method is 1. Is it a robot? 2. If yes, then if x_forward_ip the same with realip? 3. If no, then deny. I think there is no big different... 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258659,258663#msg-258663 From mdounin at mdounin.ru Tue May 5 14:02:03 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 5 May 2015 17:02:03 +0300 Subject: error logging with nginx and uWSGI In-Reply-To: References: <20150503121833.GL32429@mdounin.ru> Message-ID: <20150505140203.GC98215@mdounin.ru> Hello! On Tue, May 05, 2015 at 08:08:42AM -0400, Larry Martell wrote: > On Sun, May 3, 2015 at 8:18 AM, Maxim Dounin wrote: > > Hello! > > > > On Fri, May 01, 2015 at 12:04:51PM -0400, Larry Martell wrote: > > > >> Prior to now, all the django projects I've worked on have used apache > >> and WSGI. With those, when an error occurred I went to > >> /var/log/httpd/error_log and details of the error were clearly there. > >> > >> Now for the first time I am working on a project using nginx and > >> uWSGI. Here, the /var/log/nginx/error_log is always empty. And the > >> uWSGI logs have some messages, but nothing about any errors. In this > >> setup where would I go to find the errors that > >> /var/log/httpd/error_log logs? Is there some config setting that is > >> suppressing the nginx errors? > > > > The uwsgi protocol doesn't include an error stream from an > > application to nginx. That is, if you are looking for errors > > generated by your application, you should look into uWSGI logs. > > > > Own nginx logs can be controlled using the error_log directive, > > see http://nginx.org/r/error_log. But I suspect it's not what are > > you looking for, see above. > > When my app has, for example, a syntax error, then yes, that appears > in the uWSGI log. But what I was talking about are the HTTP errors > like a 500 or a 400. When I get those there's nothing in the logs. If an error is returned by your application and/or uWSGI, then its reasons are expected to be in your application logs (or the uWSGI logs). 
If an error is returned by nginx (e.g., because a client sent an invalid request and nginx returned 400), then the reasons should be in the nginx error log. Client-related errors, though, are usually logged at the "info" level, and won't be visible in the error log by default, see http://nginx.org/r/error_log. Note well that nginx error logs are highly customizable, and it is possible that you are looking into the wrong file. In particular, please note that the default error log can be redefined during compilation (see "nginx -V" output to find out which one is used by default), and can also be redefined on a per-server or a per-location basis (check your configs to find out if that's the case). -- Maxim Dounin http://nginx.org/ From zxcvbn4038 at gmail.com Tue May 5 14:03:43 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Tue, 5 May 2015 10:03:43 -0400 Subject: How to block fake google spider and fake web browser access? In-Reply-To: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> References: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> Message-ID: The only way you can stop people from mirroring your site is to pull the plug. Anything you set up can be bypassed by acting like a normal user would. If you put CAPTCHAs on every page, someone motivated can get really smart people in poor countries to type in the letters, click the blue box, complete the pattern, etc. on the cheap. However, that being said, the legit Googlebot operates from a well-defined subset of IP blocks and always identifies itself and honors robots.txt, so you can look those up and whitelist them. Any traffic from Amazon EC2, Google Cloud, and Digital Ocean is immediately suspect; you can filter them out by IP block because they are probably not going to identify themselves as a bot. However, you may lose traffic from real people running VPNs and proxies through those sites as a consequence, so think it through before you act. 
And there are no shortage of other providers for people to turn to if you block the big clouds, so it comes back to pulling the plug if you want to keep your content locked down. On Tue, May 5, 2015 at 8:38 AM, meteor8488 wrote: > Hi All, > > Recently I found that someguys are trying to mirror my website. They are > doing this in two ways: > > 1. Pretend to be google spiders . Access logs are as following: > > 89.85.93.235 - - [05/May/2015:20:23:16 +0800] "GET /robots.txt HTTP/1.0" > 444 > 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" "66.249.79.138" > 79.85.93.235 - - [05/May/2015:20:23:34 +0800] "GET /robots.txt HTTP/1.0" > 444 > 0 "http://www.example.com" "Mozilla/5.0 (compatible; Googlebot/2.1; > +http://www.google.com/bot.html)" "66.249.79.154" > > The http_x_forwarded_for address are google addresses. > > 2. Pretend to be a normal web browser. > > > I'm trying to use below configuration to block their access: > > > > For 1 above, I'll check X_forward_for address. If user agent is spider, and > X_forward_for is not null. Then block. > I'm using > > map $http_x_forwarded_for $xf { > default 1; > "" 0; > } > map $http_user_agent $fakebots { > default 0; > "~*bot" $xf; > "~*bing" $xf; > "~*search" $xf; > } > if ($fakebots) { > return 444; > } > > With this configuration, it seems the fake google spider can't access the > root of my website. But they can still access my php files, and they can't > access and js or css files. Very strange. I don't know what's wrong. > > 2. For user-agent who declare they are not spiders. I'll use ngx_lua to > generate a random value and add the value into cookie, and then check > whether they can send this value back or not. If they can't send it back, > then it means that they are robot and block access. 
> > map $http_user_agent $ifbot { > default 0; > "~*Yahoo" 1; > "~*archive" 1; > "~*search" 1; > "~*Googlebot" 1; > "~Mediapartners-Google" 1; > "~*bingbot" 1; > "~*msn" 1; > "~*rogerbot" 3; > "~*ChinasoSpider" 3; > } > > if ($ifbot = "0") { > set $humanfilter 1; > } > #below section is to exclude flash upload > if ( $request_uri !~ "~mod\=swfupload\&action\=swfupload" ) { > set $humanfilter "${humanfilter}1"; > } > > if ($humanfilter = "11"){ > rewrite_by_lua ' > local random = ngx.var.cookie_random > if(random == nil) then > random = math.random(999999) > end > local token = ngx.md5("hello" .. ngx.var.remote_addr .. random) > if (ngx.var.cookie_token ~= token) then > ngx.header["Set-Cookie"] = {"token=" .. token, "random=" .. random} > return ngx.redirect(ngx.var.scheme .. "://" .. ngx.var.host .. > ngx.var.request_uri) > end > '; > } > But it seems that with above configuration, google bot is also blocked > while > it shouldn't. > > > Any one can help? > > Thanks > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258659,258659#msg-258659 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 5 14:54:09 2015 From: nginx-forum at nginx.us (zanadu2) Date: Tue, 05 May 2015 10:54:09 -0400 Subject: Websockets max connections with SSL + slow Message-ID: Hi guys, We set up a basic reverse proxy configuration (that you can find below this thread). Our main app is using websocket, and the reverse proxy works fine when no using SSL. But when SSL is enabled, we noticed a big performance issue making our app very slow. Moreover, the most important: we get a problem when reaching the 50th websocket alive connection for a given user: it crashes our app. Could you help us finding what's wrong in the following? 
App server conf: - ubuntu v.14.10 Nginx server conf: - nginx v1.9.0 - ubuntu v.14.04 and the conf file is the following: ################################### user nginx_user nginx_user; daemon off; worker_processes 2; pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server { listen 8443 ssl; server_name ourapp.com; ssl_certificate ../ssl/cacert.pem; ssl_certificate_key ../ssl/privkey.pem; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { proxy_pass http://ourapp.com:8800; } location /our_ws_location { proxy_pass http://ourapp.com:8801; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $host; proxy_set_header X-NginX-Proxy true; # WebSocket support proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; } } } ################################### Thanks in advance, Regards, Z Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258666,258666#msg-258666 From francis at daoine.org Tue May 5 17:48:13 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 5 May 2015 18:48:13 +0100 Subject: how to separate robot access log and human access log In-Reply-To: References: <20150430174453.GH29618@daoine.org> Message-ID: <20150505174813.GQ29618@daoine.org> On Tue, May 05, 2015 at 08:20:35AM -0400, meteor8488 wrote: Hi there, > if $spiderbot=0, then log to location_access.log Set a variable which is non-zero when $spiderbot=0, and which is zero or blank otherwise. Use that as the access_log if=$variable for location_access.log. > if $spiderbot=1, then log to spider_access.log. Set a variable which is non-zero when $spiderbot=1, and which is zero or blank otherwise. ($spiderbot is probably perfect for this as-is.) Use that as the access_log if=$variable for spider_access.log. 
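Put together, the two access_log if= rules Francis describes might look like the fragment below. This is only a sketch: the User-Agent patterns and log paths are hypothetical, and the if= parameter requires nginx 1.7.0 or later.

```nginx
# $spiderbot: 1 for spiders, 0 (logged as "falsy", i.e. skipped) otherwise
map $http_user_agent $spiderbot {
    default       0;
    "~*googlebot" 1;
    "~*bingbot"   1;
}

# $humanbot: the inverse -- empty (skipped) for spiders, 1 for humans
map $spiderbot $humanbot {
    default 1;
    1       "";
}

# Inside server{}: each request matches exactly one condition,
# so every request lands in exactly one file.
access_log /var/log/nginx/spider_access.log   combined if=$spiderbot;
access_log /var/log/nginx/location_access.log combined if=$humanbot;
```

The key point is the one Francis makes: a request is skipped when the if= variable is empty or "0", so the two variables must be constructed to be mutually exclusive.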
> And I don't want the same logs write to different files. For each loggable request, make sure that exactly one of your if=$variable variables is non-zero. f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 5 17:55:39 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 5 May 2015 18:55:39 +0100 Subject: How to block fake google spider and fake web browser access? In-Reply-To: <5e511134bb4675df4c8c760544c1a169.NginxMailingListEnglish@forum.nginx.org> References: <27f588573d1078a8531656430cf43b4d.NginxMailingListEnglish@forum.nginx.org> <5e511134bb4675df4c8c760544c1a169.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150505175539.GR29618@daoine.org> On Tue, May 05, 2015 at 09:07:41AM -0400, meteor8488 wrote: Hi there, > I tried to use "deny" to deny access from an IP. But it seems that it can > still access my server. > > In my http part: > > deny 69.85.92.0/23; > deny 69.85.93.235; A request comes in to nginx. nginx chooses one server{} block in its configuration to handle it. nginx chooses one location{} block in that server{} configuration to handle it. Only configuration directives in, or inherited into, that location{} are relevant. (If you use any rewrite-module directives, things may be different.) > 69.85.93.235 - - [05/May/2015:19:44:22 +0800] "GET /thread-1251687-1-1.html > HTTP/1.0" 302 154 "http://www.example.com" "Mozilla/5.0 (compatible; > Baiduspider/2.0; +http://www.baidu.com/search/spider.html)" > "123.125.71.107" What is the one location{} that handles this request? What "allow" and "deny" directives are in that location{}? And in the enclosing server{}? Can you provide a complete nginx.conf that shows the behaviour you report? (It doesn't have to be your production config. Something smaller that shows this problem on a test machine, may make obvious where the problem is.) 
Thanks, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 5 18:11:18 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 5 May 2015 19:11:18 +0100 Subject: Connection timeout from work, working anywhere else In-Reply-To: References: <20150504081045.GM29618@daoine.org> Message-ID: <20150505181118.GS29618@daoine.org> On Mon, May 04, 2015 at 02:19:45PM -0400, GuiPoM wrote: Hi there, > Thank you for your answer. I can reproduce on demand ! BUTI am not familiar > with nginx. > Could you give me some hints what to activate in order to provide useful > information for debugging ? You could follow http://nginx.org/en/docs/debugging_log.html to get all sorts of information out of nginx -- but I suspect that that will not be immediately useful. When things work, what is the sequence of requests made? access_log will have that. When things fail, what is the sequence of requests made? access_log will also have that. What is the first request in the sequence that fails, or otherwise does not get the expected response, in the "fail" case? Can you arrange a single "curl" command that works in one case and fails in the other? That may help you analyse where things go wrong. > Next follows the config. Hope it will help ! Once the "failing" request is identified, the matching server{} and location{} can be analysed to see what should happen. (If it turns out that the "failure" happens before the request gets to nginx -- for example, during ssl negotiation -- then the details of the request are less important.) But that config is presumably in one of the files mentioned in an "include" directive. 
> include /etc/nginx/conf.d/*.conf; > include /etc/nginx/sites-enabled/*; Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue May 5 20:00:11 2015 From: nginx-forum at nginx.us (GuiPoM) Date: Tue, 05 May 2015 16:00:11 -0400 Subject: Connection timeout from work, working anywhere else In-Reply-To: <20150505181118.GS29618@daoine.org> References: <20150505181118.GS29618@daoine.org> Message-ID: I will do so. Two questions: 1/ In my config file /etc/nginx/nginx.conf, in section http, I already have the logging entries defined : access_log /var/log/nginx/access.log; error_log /var/log/nginx/error.log; I just tried to put debug, as for example: "access_log /var/log/nginx/access.log debug;" But: Restarting nginx: nginx: [emerg] unknown log format "debug" in /etc/nginx/nginx.conf:36 nginx: configuration file /etc/nginx/nginx.conf test failed I also tried to adapt it to add the server section that I don't have in my own config file, as your link mentions (http://nginx.org/en/docs/debugging_log.html#memory). I did the same in one mentioned file : default_ssl #access_log off; #error_log /usr/share/nginx/www/jeedom/log/nginx.error; access_log memory:32m debug; error_log memory:32m debug; Restarting nginx: nginx: [emerg] unknown log format "debug" in /etc/nginx/sites-enabled/default_ssl:8 nginx: configuration file /etc/nginx/nginx.conf test failed Am I doing something wrong? 
2/ In error log, even if I can't set a debug level, there is already something strange I would like to change: 2015/05/02 13:25:05 [error] 2144#0: *4926 upstream prematurely closed connection while reading response header from upstream, client: XX.XX.XXX.XXX, server: , request: "GET /socket.io/?EIO=3&transport=polling&t=1430565786187-49&sid=IIJ1gX_E4Ny_ojN8AACB HTTP/1.1", upstream: "http://127.0.0.1:8070/socket.io/?EIO=3&transport=polling&t=1430565786187-49&sid=IIJ1gX_E4Ny_ojN8AACB", host: "hostname.dtdns.net:9876", referrer: "https://hostname.dtdns.net:9876/jeedom/index.php?v=m&" How host and referrer can be filled with a dynamic dns name ? How nginx is aware of this information ? I am requesting with an IP address, so no chance this information come from the sender. Could this configuration be erroneous ? (/etc/nginx/sites-enabled/default_ssl) location /socket.io/ { proxy_pass http://127.0.0.1:8070/socket.io/; proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "Upgrade"; proxy_set_header Host $host; proxy_redirect off; proxy_read_timeout 6000; } Thx. GuiPoM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258187,258670#msg-258670 From francis at daoine.org Tue May 5 21:06:49 2015 From: francis at daoine.org (Francis Daly) Date: Tue, 5 May 2015 22:06:49 +0100 Subject: Connection timeout from work, working anywhere else In-Reply-To: References: <20150505181118.GS29618@daoine.org> Message-ID: <20150505210649.GT29618@daoine.org> On Tue, May 05, 2015 at 04:00:11PM -0400, GuiPoM wrote: Hi there, > I just tried to put debug as for example : "access_log > /var/log/nginx/access.log debug;" > > But: > Restarting nginx: nginx: [emerg] unknown log format "debug" in > /etc/nginx/nginx.conf:36 > nginx: configuration file /etc/nginx/nginx.conf test failed Does "nginx -V" show "--with-debug"? "debug" is for error_log, not for access_log. 
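In config terms, the distinction Francis draws looks roughly like this (a sketch; it assumes an nginx binary built --with-debug, and the log_format shown is made up for illustration):

```nginx
# "debug" is an error_log *level*, so this is valid:
error_log  /var/log/nginx/error.log debug;

# access_log's optional second argument is a log *format* name,
# which is why "access_log ... debug;" fails with
# 'unknown log format "debug"':
log_format mylog '$remote_addr "$request" $status';
access_log /var/log/nginx/access.log mylog;
```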
> 2/ In error log, even if I can't set a debug level, there is already > something strange I would like to change: > > 2015/05/02 13:25:05 [error] 2144#0: *4926 upstream prematurely closed > connection while reading response header from upstream, client: > XX.XX.XXX.XXX, server: , request: "GET > /socket.io/?EIO=3&transport=polling&t=1430565786187-49&sid=IIJ1gX_E4Ny_ojN8AACB > HTTP/1.1", upstream: > "http://127.0.0.1:8070/socket.io/?EIO=3&transport=polling&t=1430565786187-49&sid=IIJ1gX_E4Ny_ojN8AACB", > host: "hostname.dtdns.net:9876", referrer: > "https://hostname.dtdns.net:9876/jeedom/index.php?v=m&" > > How host and referrer can be filled with a dynamic dns name ? How nginx is > aware of this information ? I am requesting with an IP address, so no chance > this information come from the sender. When you copy-paste the commands issued and the responses gathered, it may become clearer where all of the information is coming from. My guess is that you are issuing one request with an ip address, and that is returning a http redirect to a hostname; and then you are issuing the next request to that hostname. But until you show your work, all anyone can do here is guess. > Could this configuration be erroneous ? > (/etc/nginx/sites-enabled/default_ssl) > > location /socket.io/ { > proxy_pass http://127.0.0.1:8070/socket.io/; > proxy_http_version 1.1; > proxy_set_header Upgrade $http_upgrade; > proxy_set_header Connection "Upgrade"; > proxy_set_header Host $host; > proxy_redirect off; > proxy_read_timeout 6000; > } This looks like the connection is using WebSockets. Does your proxy server at work allow WebSocket connections to pass through it? Can you successfully connect to any WebSocket service anywhere from work? If not, the problem may not be on the nginx side. 
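For what it's worth, the WebSocket proxying pattern in the nginx documentation uses a map so that the Connection header is only set to "upgrade" when the client actually asks for one; a sketch of that pattern, not a claim about this particular setup:

```nginx
# at http{} level
map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

# inside the server{}
location /socket.io/ {
    proxy_pass http://127.0.0.1:8070/socket.io/;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection $connection_upgrade;
}
```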
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Tue May 5 23:05:59 2015 From: nginx-forum at nginx.us (meteor8488) Date: Tue, 05 May 2015 19:05:59 -0400 Subject: How to block fake google spider and fake web browser access? In-Reply-To: <20150505175539.GR29618@daoine.org> References: <20150505175539.GR29618@daoine.org> Message-ID: <1d9c112d2c17c25f024aa99a2acca18d.NginxMailingListEnglish@forum.nginx.org> Hi Francis, I put the "deny" directives in http{} part. Here is my nginx.conf. http { deny 4.176.128.153; deny 23.105.85.0/24; deny 36.44.146.99; deny 42.62.36.167; deny 42.62.74.0/24; deny 50.116.28.209; deny 50.116.30.23; deny 52.0.0.0/11; deny 54.72.0.0/13; deny 54.80.0.0/12; deny 54.160.0.0/12; deny 54.176.0.0/12; deny 54.176.195.13; deny 54.193.0.0/16; deny 54.193.212.129; deny 54.208.0.0/15; deny 54.212.0.0/15; deny 54.219.0.0/16; deny 54.224.0.0/12; deny 58.208.0.0/12; deny 61.135.219.2; deny 61.173.11.234; deny 61.177.134.164; deny 61.178.110.42; deny 69.85.92.0/23; deny 69.85.93.235; deny 101.226.62.63; deny 101.226.167.237; deny 101.226.168.225; deny 101.231.74.38; deny 101.231.74.40; deny 103.19.84.0/22; deny 106.186.112.0/21; deny 111.20.18.224; deny 111.20.19.148; deny 111.67.200.68; deny 112.90.51.35; deny 112.235.133.139; deny 113.74.83.46; deny 113.120.156.252; deny 114.80.109.30; deny 114.80.116.164; deny 114.86.54.43; deny 114.87.109.129; deny 114.112.103.46; deny 115.226.236.69; deny 116.7.169.91; deny 116.208.12.74; deny 116.228.41.122; deny 116.232.27.33; deny 116.234.130.64; deny 117.27.152.197; deny 117.27.152.198; deny 117.151.97.223; deny 118.144.32.66; deny 119.85.190.7; deny 119.147.225.177; deny 119.254.64.12; deny 119.254.86.240; deny 119.254.86.246; deny 121.202.22.154; deny 122.4.149.168; deny 122.49.5.11; deny 122.49.5.14; deny 122.49.5.15; deny 122.96.36.167; deny 123.151.176.198; deny 124.156.6.198; deny 124.226.42.78; deny 125.125.41.167; deny 128.199.153.220; deny 128.199.78.7; deny 
136.243.36.95; deny 139.200.132.233; deny 171.108.67.30; deny 171.112.242.65; deny 174.2.171.84; deny 180.153.72.92; deny 180.153.211.148; deny 180.153.229.0/24; deny 180.171.146.137; deny 182.16.44.26; deny 182.33.66.29; deny 182.41.45.241; deny 182.240.7.79; deny 183.8.83.248; deny 183.129.200.250; deny 183.156.102.146; deny 183.156.108.133; deny 183.157.68.141; deny 183.250.40.194; deny 188.143.232.40; deny 188.143.232.72; deny 198.58.96.215; deny 198.58.99.82; deny 198.58.102.117; deny 198.58.102.155; deny 198.58.102.156; deny 198.58.102.158; deny 198.58.102.49; deny 198.58.102.95; deny 198.58.102.96; deny 198.58.103.102; deny 198.58.103.114; deny 198.58.103.115; deny 198.58.103.158; deny 198.58.103.160; deny 198.58.103.28; deny 198.58.103.36; deny 198.58.103.91; deny 198.58.103.92; deny 202.1.232.243; deny 203.195.219.37; deny 204.236.128.0/17; deny 209.141.40.22; deny 211.97.148.191; deny 218.148.90.164; deny 220.240.235.158; deny 222.73.68.103; deny 222.95.129.93; deny 222.175.185.14; deny 222.175.186.18; geo $geo { ranges; 111.67.200.68-111.67.200.68 badip; 58.213.119.20-58.213.119.21 badip; 54.208.0.0-54.209.255.255 badip; 54.176.0.0-54.191.255.255 badip; 54.219.0.0-54.219.255.255 badip; 54.193.0.0-54.193.255.255 badip; 54.160.0.0-54.175.255.255 badip; 106.145.17.0-106.145.17.255 badip; 112.235.133.139-112.235.133.139 spider; 5.255.253.77-5.255.253.77 spider; 69.85.93.235-69.85.93.235 spider; 54.160.105.130-54.160.105.130 spider; 95.108.158.146-95.108.158.146 spider; 131.253.21.0-131.253.47.255 spider; 157.54.0.0-157.60.255.255 spider; 202.160.176.0-202.160.191.255 spider; 207.46.0.0-207.46.255.255 spider; 207.68.128.0-207.68.207.255 spider; 209.191.64.0-209.191.127.255 spider; 209.85.128.0-209.85.255.255 spider; 216.239.32.0-216.239.63.255 spider; 64.233.160.0-64.233.191.255 spider; 64.4.0.0-64.4.63.255 spider; 65.52.0.0-65.55.255.255 spider; 66.102.0.0-66.102.15.255 spider; 66.196.64.0-66.196.127.255 spider; 66.228.160.0-66.228.191.255 spider; 
66.249.64.0-66.249.95.255 spider; 67.195.0.0-67.195.255.255 spider; 68.142.192.0-68.142.255.255 spider; 72.14.192.0-72.14.255.255 spider; 72.30.0.0-72.30.255.255 spider; 74.125.0.0-74.125.255.255 spider; 74.6.0.0-74.6.255.255 spider; 8.12.144.0-8.12.144.255 spider; 98.136.0.0-98.139.255.255 spider; 203.208.32.0-203.208.63.255 spider; } map $request_method $bad_method { default 1; ~(?i)(GET|HEAD|POST) 0; } map $http_referer $bad_referer { default 0; ~(?i)(babes|click|forsale|jewelry|nudit|organic|poker|porn|amnesty|poweroversoftware|webcam|zippo|casino|replica|CDR) 1; } map $query_string $spam { default 0; ~*"\b(ultram|unicauca|valium|viagra|vicodin|xanax|ypxaieo)\b" 1; ~*"\b(erections|hoodia|huronriveracres|impotence|levitra|libido)\b" 1; ~*"\b(ambien|blue\spill|cialis|cocaine|ejaculation|erectile)\b" 1; ~*"\b(lipitor|phentermin|pro[sz]ac|sandyauer|tramadol|troyhamby)\b" 1; } map $http_x_forwarded_for $xf { default 1; "" 0; } map $http_user_agent $fakebots { default 0; "~*bot" $xf; "~*bing" $xf; "~*search" $xf; "~*Baidu" $xf; } map $http_user_agent $ifbot { default 0; "~*rogerbot" 3; "~*ChinasoSpider" 3; "~*Yahoo" 1; "~*archive" 1; "~*search" 1; "~*Googlebot" 1; "~Mediapartners-Google" 1; "~*bingbot" 1; "~*YandexBot" 1; "~*Baiduspider" 1; "~*Feedly" 2; "~*Superfeedr" 2; "~*QuiteRSS" 2; "~*g2reader" 2; "~*Digg" 2; "~*AhrefsBot" 3; "~*ia_archiver" 3; "~*trendiction" 3; "~*AhrefsBot" 3; "~*curl" 3; "~*Ruby" 3; "~*Player" 3; "~*Go\ http\ package" 3; "~*Lynx" 3; "~*Sleuth" 3; "~*Python" 3; "~*Wget" 3; "~*perl" 3; "~*httrack" 3; "~*JikeSpider" 3; "~*PHP" 3; "~*WebIndex" 3; "~*magpie-crawler" 3; "~*JUC" 3; "~*Scrapy" 3; "~*libfetch" 3; "~*WinHTTrack" 3; "~*htmlparser" 3; "~*urllib" 3; "~*Zeus" 3; "~*scan" 3; "~*Indy\ Library" 3; "~*libwww-perl" 3; "~*GetRight" 3; "~*GetWeb!" 
3; "~*Go!Zilla" 3; "~*Go-Ahead-Got-It" 3; "~*Download\ Demon" 3; "~*TurnitinBot" 3; "~*WebscanSpider" 3; "~*WebBench" 3; "~*YisouSpider" 3; "~*check_http" 3; "~*webmeup-crawler" 3; "~*omgili" 3; "~*blah" 3; "~*fountainfo" 3; "~*MicroMessenger" 3; "~*QQDownload" 3; "~*shoulu.jike.com" 3; "~*omgilibot" 3; "~*pyspider" 3; "~*mysite" 3; } ...... server { listen 80 accept_filter=httpready; index index.html index.htm index.php; access_log /var/log/server_access.log main; location / { root /var/www; if ( $geo = "badip" ) { return 444; } if ( $geo = "spider" ) { set $spiderip 1; } if ($bad_method = 1) { return 444; } if ($spam = 1) { return 444; } set $humanfilter 0; if ($ifbot = "0") { set $humanfilter 1; } if ( $request_uri !~ "~mod\=swfupload\&action\=swfupload" ) { set $humanfilter "${humanfilter}1"; } if ($humanfilter = "11"){ rewrite_by_lua ' local random = ngx.var.cookie_random if(random == nil) then random = math.random(999999) end local token = ngx.md5("guessguess" .. ngx.var.remote_addr .. random) if (ngx.var.cookie_token ~= token) then ngx.header["Set-Cookie"] = {"token=" .. token, "random=" .. random} return ngx.redirect(ngx.var.scheme .. "://" .. ngx.var.host .. 
ngx.var.request_uri) end '; } if ($ifbot = "1") { set $spiderbot 1; } if ($ifbot = "2") { set $rssbot 1; } if ($ifbot = "3") { return 444; } if ($fakebots) { return 444; } if ($bad_referer = 1) { return 410; } location ~ \.php$ { try_files $uri =404; fastcgi_pass backend; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; access_log /web/log/php.log main; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258659,258672#msg-258672 From tfransosi at gmail.com Wed May 6 02:10:49 2015 From: tfransosi at gmail.com (Thiago Farina) Date: Tue, 5 May 2015 23:10:49 -0300 Subject: FastCGI sent in stderr: "Primary script unknown" In-Reply-To: <9c288334a690d84938520108d7311843.NginxMailingListEnglish@forum.nginx.org> References: <9c288334a690d84938520108d7311843.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, On Tue, May 5, 2015 at 7:01 AM, vincent123456 wrote: > Hi, > > I try to configure a vhost with Nginx and PHP-FPM. > > I have an application with Symfony2.6, i followed this tutorial : > http://symfony.com/doc/current/cookbook/configuration/web_server_configuration.html#nginx > This tutorial has helped me -> https://www.digitalocean.com/community/tutorials/how-to-install-linux-nginx-mysql-php-lemp-stack-on-debian-7. Maybe it can help you as well. Hope that helps, Regards, -- Thiago Farina From igor.katson at gmail.com Wed May 6 04:36:26 2015 From: igor.katson at gmail.com (Igor Katson) Date: Tue, 5 May 2015 21:36:26 -0700 Subject: Unbuffered POST requests and/or uWSGI in nginx 1.8 Message-ID: Hi, it's stated on a lot of IT news websites that nginx 1.8 supports unbuffered uploads. However, I could not find it in the changelog, and also did not find any new relevant options in the nginx core module options. Is it really so? If yes, how do you enable unbuffered uploads? Is that supported for uwsgi? Thanks! -------------- next part -------------- An HTML attachment was scrubbed...
URL: From arut at nginx.com Wed May 6 05:07:05 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 6 May 2015 08:07:05 +0300 Subject: Unbuffered POST requests and/or uWSGI in nginx 1.8 In-Reply-To: References: Message-ID: <0084A6CA-5AAB-4A0B-9B66-61E2143858B5@nginx.com> Hello, On 06 May 2015, at 07:36, Igor Katson wrote: > Hi, > > it's stated on a lot of IT news websites that nginx 1.8 supports unbuffered uploads. However, I could not find it in the changelog, and also did not find any new relevant options in the nginx core module options. > Is it really so? If yes, how do you enable unbuffered uploads? Is that supported for uwsgi? http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_request_buffering -- Roman Arutyunyan From 15000584731 at 163.com Wed May 6 10:07:15 2015 From: 15000584731 at 163.com (15000584731) Date: Wed, 6 May 2015 18:07:15 +0800 (CST) Subject: proxy_cache Message-ID: <52888e34.27ed9.14d28b19361.Coremail.15000584731@163.com> Dears: When configuring nginx proxy_cache, some problems happened and I could not get a correct result. There are no files in my proxy_cache_path, and the page's X-Cache response header is MISS! I have no idea... can you help me? The related configuration files and information are attached below: -------------- next part -------------- An HTML attachment was scrubbed... URL: [Seven non-text attachments were also scrubbed: cache_path.png (6303 bytes), disk.png (5045 bytes), mem.png (4730 bytes), 1.png (69195 bytes), ??1.png (79992 bytes), test.aaron.inboundmarketing.cn.conf (938 bytes), nginx.conf (1930 bytes)] From vbart at nginx.com Wed May 6 11:19:12 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 06 May 2015 14:19:12 +0300 Subject: Unbuffered POST requests and/or uWSGI in nginx 1.8 In-Reply-To: References: Message-ID: <1470417.VA3RNsVuRn@vbart-workstation> On Tuesday 05 May 2015 21:36:26 Igor Katson wrote: > Hi, > > it's stated on a lot of IT news websites that nginx 1.8 supports > unbuffered uploads. However, I could not find it in the changelog, and also did not find any new relevant > options in the nginx core module options. > > Is it really so? If yes, how do you enable unbuffered uploads? Is that > supported for uwsgi? > http://nginx.org/r/uwsgi_request_buffering uwsgi_request_buffering off; wbr, Valentin V. Bartenev From francis at daoine.org Wed May 6 18:46:34 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 6 May 2015 19:46:34 +0100 Subject: How to block fake google spider and fake web browser access?
In-Reply-To: <1d9c112d2c17c25f024aa99a2acca18d.NginxMailingListEnglish@forum.nginx.org> References: <20150505175539.GR29618@daoine.org> <1d9c112d2c17c25f024aa99a2acca18d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150506184634.GA3287@daoine.org> On Tue, May 05, 2015 at 07:05:59PM -0400, meteor8488 wrote: Hi there, > location / { > root /var/www; > > if ( $geo = "badip" ) { > return 444; > } > if ( $geo = "spider" ) { > set $spiderip 1; > } http://wiki.nginx.org/IfIsEvil You are using "if" inside "location" and doing something other than "return". That combination makes it too hard for me to understand what is happening. I won't be surprised to learn that that combination is the reason your "deny" directives do not act the way you want them to. It looks to me like you can safely move all of these "if"s to server{} level, outside the location{}. If you do that, does it change the response that you get at all? Cheers, f -- Francis Daly francis at daoine.org From sarah at nginx.com Wed May 6 19:04:24 2015 From: sarah at nginx.com (Sarah Novotny) Date: Wed, 6 May 2015 12:04:24 -0700 Subject: nginx.conf 2015 CFP is open Message-ID: nginx.conf 2015 Join us at Fort Mason in San Francisco from September 22-24, 2015. Submit a proposal to nginx.conf 2015! TL;DR

- Speaker proposals due: 11:59 PM PDT, June 2, 2015
- Speakers notified: early July, 2015
- Program schedule announced: late July, 2015

As a member of the NGINX community, you're probably passionate about web performance, security, reliability, and scale. We're excited to offer you the opportunity to teach (and learn from) your peers as a speaker at nginx.conf 2015 (September 22-24 in San Francisco). Please share with us how you and your company make the web speed along, instantly offering our always-on society highly personalized and ever more creative experiences. Tell us how you solved an intractable scaling problem or shaved milliseconds (or seconds) off an RTT.
Blog Post - http://nginx.com/blog/nginx-conf-2015-call-proposals-now-open/ CFP - https://nginxconf15.busyconf.com/proposals/new Twitter - https://twitter.com/nginxorg/status/596012137610260481 We want to hear your NGINX story! Sarah From francis at daoine.org Wed May 6 21:58:38 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 6 May 2015 22:58:38 +0100 Subject: error running shared postrotate script In-Reply-To: References: Message-ID: <20150506215838.GB3287@daoine.org> On Sun, May 03, 2015 at 09:56:09AM -0700, Paul N. Pace wrote: Hi there, > Ever since upgrading to 1.8.0 I get the following report from Cron: I suspect that this problem is outside the nginx binary, but may be due to other files added as part of the package installation. > /etc/cron.daily/logrotate: > error: error running shared postrotate script for '/var/log/nginx/*.log ' > error: error running shared postrotate script for '/var/ > www.example.com/logs/*.log ' > run-parts: /etc/cron.daily/logrotate exited with return code 1 What output do you get when running invoke-rc.d nginx rotate as the appropriate user? > There is a bug report (dated 2015-05-01) at Launchpad that appears > identical to my issue: > https://bugs.launchpad.net/nginx/+bug/1450770 That bug report asks the same question. The answer there suggests that something changed in the init script. > Are there any workarounds or configuration changes to correct this issue? Perhaps revert to the previous init script; perhaps change your logrotate script to do what is needed (send a USR1 signal to the nginx master process) using whatever new means your system provides? I think that both of those scripts come from the packager, so that might be the best place to get a "will work on next update" solution. 
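As an illustration of that second workaround, a postrotate stanza that signals the nginx master process directly, bypassing the init script, could look like this (a sketch: the log and pid paths are the common packaged defaults and may differ on a given system):

```
/var/log/nginx/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        # USR1 asks the nginx master process to reopen its log files
        [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)"
    endscript
}
```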
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 6 22:19:38 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 6 May 2015 23:19:38 +0100 Subject: FastCGI sent in stderr: "Primary script unknown" In-Reply-To: <9c288334a690d84938520108d7311843.NginxMailingListEnglish@forum.nginx.org> References: <9c288334a690d84938520108d7311843.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150506221938.GC3287@daoine.org> On Tue, May 05, 2015 at 06:01:26AM -0400, vincent123456 wrote: Hi there, > I have this error : > 2015/05/05 11:48:32 [error] 5181#0: *5 FastCGI sent in stderr: "Primary > script unknown" while reading response header from upstream, client: > 127.0.0.1, server: myserver.local, request: "GET / HTTP/1.1", upstream: > "fastcgi://127.0.0.1:9000", host: "myserver.local" That error pretty much always means that the fastcgi server cannot find the file that it thinks it was told to use. Can you see in your fastcgi server logs what file the fastcgi server thinks that was? If not, you may be able to see in your nginx error logs, at debug level, what fastcgi param values were sent. (You may see these using "tcpdump" too, if that is simpler to set up.) The specific fastcgi param that the fastcgi server uses depends on the server, but it is commonly SCRIPT_FILENAME. If more that one value is sent, which one the server chooses depends on the server. The filename that nginx sends in the fastcgi param must be from the perspective of the fastcgi server -- so if the fastcgi server runs in a chroot, what nginx sends must be adapted. If you "ls -ld" every directory on the path to the file, do they all have "x" permission for the fastcgi-server user? 
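To illustrate the SCRIPT_FILENAME point, a common shape for a PHP-FPM location block is sketched below, with a hypothetical fastcgi address; the key is that $document_root$fastcgi_script_name must resolve to the script's real path from the fastcgi server's point of view:

```nginx
location ~ \.php$ {
    include fastcgi_params;
    # must be the path as the fastcgi server sees it (mind any chroot)
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass 127.0.0.1:9000;
}
```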
Good luck with it, f -- Francis Daly francis at daoine.org From dewanggaba at xtremenitro.org Thu May 7 02:38:39 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 07 May 2015 09:38:39 +0700 Subject: Reverse proxy configuration on el7 Message-ID: <554AD02F.5050608@xtremenitro.org> Hello! Has anyone had the same problem when configuring nginx as a reverse proxy in front of Apache? When the request comes through nginx, the access log doesn't show the real visitor IP. Example access.log:

127.0.0.1 - - [07/May/2015:09:27:30 +0700] "GET / HTTP/1.0" 200 61925
127.0.0.1 - - [07/May/2015:09:27:35 +0700] "GET / HTTP/1.0" 200 61925
127.0.0.1 - - [07/May/2015:09:27:43 +0700] "GET / HTTP/1.0" 200 62367

My proxy config:

proxy_redirect off;
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto https;
client_body_buffer_size 128k;
proxy_connect_timeout 90;
proxy_send_timeout 90;
proxy_read_timeout 90;
proxy_buffers 32 4k;

On CentOS 6, I had additional packages like mod_rpaf / mod_extract_forwarded, but I didn't find any similar packages on CentOS 7. Any hints? From nurahmadie at gmail.com Thu May 7 02:45:11 2015 From: nurahmadie at gmail.com (Nurahmadie Nurahmadie) Date: Thu, 7 May 2015 11:45:11 +0900 Subject: Reverse proxy configuration on el7 In-Reply-To: <554AD02F.5050608@xtremenitro.org> References: <554AD02F.5050608@xtremenitro.org> Message-ID: Hi On Thu, May 7, 2015 at 11:38 AM, Dewangga Bachrul Alam < dewanggaba at xtremenitro.org> wrote: > Hello! > > Did anyone have same problem when configuring reverse proxy nginx + > apache, when the request came from nginx, the IP didn't shows real visitor.
> > Example access.log: > 127.0.0.1 - - [07/May/2015:09:27:30 +0700] "GET / HTTP/1.0" 200 61925 > 127.0.0.1 - - [07/May/2015:09:27:35 +0700] "GET / HTTP/1.0" 200 61925 > 127.0.0.1 - - [07/May/2015:09:27:43 +0700] "GET / HTTP/1.0" 200 62367 > > My proxy config: > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto https; > client_body_buffer_size 128k; > proxy_connect_timeout 90; > proxy_send_timeout 90; > proxy_read_timeout 90; > proxy_buffers 32 4k; > > In centos6, I got additional packages like mod_rpaf / > mod_extract_forwarded. But I didn't find any similiar packages on centos7. > > Any hints? > You don't have to use both X-Real-IP and X-Forwarded-For. Just put the one which actually used by the app. And it's safer to also use $remote_addr for X-Forwarded-For rather than $proxy_add_x_forwarded_for, since that header can be manipulated by the client. For the log, check your log format at apache, it probably logging remote_addr (or something like that, not sure what they call it at apache) rather than the IP specified by X-Forwarded-For or X-Real-IP. Change it accordingly. > ___________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From biru at yahoo.com Thu May 7 02:49:56 2015 From: biru at yahoo.com (Aron) Date: Thu, 7 May 2015 02:49:56 +0000 (UTC) Subject: Reverse proxy configuration on el7 In-Reply-To: <554AD02F.5050608@xtremenitro.org> References: <554AD02F.5050608@xtremenitro.org> Message-ID: <689271840.2044604.1430966996138.JavaMail.yahoo@mail.yahoo.com> Hi , You must configure "X-Forwarded-For " in the apache log format to get real IP client. ?Regards Aron On Thursday, May 7, 2015 9:39 AM, Dewangga Bachrul Alam wrote: Hello! 
Did anyone have same problem when configuring reverse proxy nginx + apache, when the request came from nginx, the IP didn't shows real visitor. Example access.log: 127.0.0.1 - - [07/May/2015:09:27:30 +0700] "GET / HTTP/1.0" 200 61925 127.0.0.1 - - [07/May/2015:09:27:35 +0700] "GET / HTTP/1.0" 200 61925 127.0.0.1 - - [07/May/2015:09:27:43 +0700] "GET / HTTP/1.0" 200 62367 My proxy config: proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto https; client_body_buffer_size 128k; proxy_connect_timeout? 90; proxy_send_timeout? ? ? 90; proxy_read_timeout? ? ? 90; proxy_buffers? ? ? ? ? 32 4k; In centos6, I got additional packages like mod_rpaf / mod_extract_forwarded. But I didn't find any similiar packages on centos7. Any hints? _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Thu May 7 03:07:07 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 07 May 2015 10:07:07 +0700 Subject: Reverse proxy configuration on el7 In-Reply-To: References: <554AD02F.5050608@xtremenitro.org> Message-ID: <554AD6DB.8030901@xtremenitro.org> Hello! On 05/07/2015 09:45 AM, Nurahmadie Nurahmadie wrote: > Hi > > On Thu, May 7, 2015 at 11:38 AM, Dewangga Bachrul Alam > > wrote: > > Hello! > > Did anyone have same problem when configuring reverse proxy nginx + > apache, when the request came from nginx, the IP didn't shows real > visitor. 
> > Example access.log: > 127.0.0.1 - - [07/May/2015:09:27:30 +0700] "GET / HTTP/1.0" 200 61925 > 127.0.0.1 - - [07/May/2015:09:27:35 +0700] "GET / HTTP/1.0" 200 61925 > 127.0.0.1 - - [07/May/2015:09:27:43 +0700] "GET / HTTP/1.0" 200 62367 > > My proxy config: > proxy_redirect off; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto https; > client_body_buffer_size 128k; > proxy_connect_timeout 90; > proxy_send_timeout 90; > proxy_read_timeout 90; > proxy_buffers 32 4k; > > In centos6, I got additional packages like mod_rpaf / > mod_extract_forwarded. But I didn't find any similiar packages on > centos7. > > Any hints? > > > You don't have to use both X-Real-IP and X-Forwarded-For. Just put the > one which actually used by the app. > I just test using $_SERVER['REMOTE_ADDR']; and its only shows 127.0.0.1. Anyway, it's should be fine to use them both (CMIIW). But I've tried it and nothing changes, the visitors ips are not showed on apache logs. For additional information, I set the apache listen only to 127.0.0.1:8080 and set the proxy pass to http://127.0.0.1:8080; > And it's safer to also use $remote_addr for X-Forwarded-For rather > than $proxy_add_x_forwarded_for, since that header can be manipulated by > the client. > > For the log, check your log format at apache, it probably logging > remote_addr (or something like that, not sure what they call it at > apache) rather than the IP specified by X-Forwarded-For or X-Real-IP. > Change it accordingly. > > Didn't know yet, you have any hints? 
:) From nurahmadie at gmail.com Thu May 7 03:11:34 2015 From: nurahmadie at gmail.com (Nurahmadie Nurahmadie) Date: Thu, 7 May 2015 12:11:34 +0900 Subject: Reverse proxy configuration on el7 In-Reply-To: <554AD6DB.8030901@xtremenitro.org> References: <554AD02F.5050608@xtremenitro.org> <554AD6DB.8030901@xtremenitro.org> Message-ID: On Thu, May 7, 2015 at 12:07 PM, Dewangga Bachrul Alam < dewanggaba at xtremenitro.org> wrote: > Hello! > > On 05/07/2015 09:45 AM, Nurahmadie Nurahmadie wrote: > > Hi > > > > On Thu, May 7, 2015 at 11:38 AM, Dewangga Bachrul Alam > > > wrote: > > > > Hello! > > > > Did anyone have same problem when configuring reverse proxy nginx + > > apache, when the request came from nginx, the IP didn't shows real > > visitor. > > > > Example access.log: > > 127.0.0.1 - - [07/May/2015:09:27:30 +0700] "GET / HTTP/1.0" 200 61925 > > 127.0.0.1 - - [07/May/2015:09:27:35 +0700] "GET / HTTP/1.0" 200 61925 > > 127.0.0.1 - - [07/May/2015:09:27:43 +0700] "GET / HTTP/1.0" 200 62367 > > > > My proxy config: > > proxy_redirect off; > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto https; > > client_body_buffer_size 128k; > > proxy_connect_timeout 90; > > proxy_send_timeout 90; > > proxy_read_timeout 90; > > proxy_buffers 32 4k; > > > > In centos6, I got additional packages like mod_rpaf / > > mod_extract_forwarded. But I didn't find any similiar packages on > > centos7. > > > > Any hints? > > > > > > You don't have to use both X-Real-IP and X-Forwarded-For. Just put the > > one which actually used by the app. > > > > I just test using $_SERVER['REMOTE_ADDR']; and its only shows 127.0.0.1. > The remote_addr will always shows 127.0.0.1 since apache is requested by nginx, which also binds on 127.0.0.1, not directly by users. > > > Anyway, it's should be fine to use them both (CMIIW). 
But I've tried it > and nothing changes, the visitors ips are not showed on apache logs. > > For additional information, I set the apache listen only to > 127.0.0.1:8080 and set the proxy pass to http://127.0.0.1:8080; > > > And it's safer to also use $remote_addr for X-Forwarded-For rather > > than $proxy_add_x_forwarded_for, since that header can be manipulated by > > the client. > > > > For the log, check your log format at apache, it probably logging > > remote_addr (or something like that, not sure what they call it at > > apache) rather than the IP specified by X-Forwarded-For or X-Real-IP. > > Change it accordingly. > > > > > > Didn't know yet, you have any hints? :) > As I stated before, you want to change your log format to shows ip from either X-Forwarded-For or X-Real-IP > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- regards, Nurahmadie -- -------------- next part -------------- An HTML attachment was scrubbed... URL: From dewanggaba at xtremenitro.org Thu May 7 03:49:55 2015 From: dewanggaba at xtremenitro.org (Dewangga Bachrul Alam) Date: Thu, 07 May 2015 10:49:55 +0700 Subject: Reverse proxy configuration on el7 In-Reply-To: References: <554AD02F.5050608@xtremenitro.org> <554AD6DB.8030901@xtremenitro.org> Message-ID: <554AE0E3.2030108@xtremenitro.org> Hello! Recently discovered by my self, since apache 2.4.1 or latest, it was bundled with mod_remoteip. So, we didn't need any additional modules like mod_rpaf or mod_extract_forwarded. On 05/07/2015 10:11 AM, Nurahmadie Nurahmadie wrote: > > On Thu, May 7, 2015 at 12:07 PM, Dewangga Bachrul Alam > > wrote: > > Hello! > > On 05/07/2015 09:45 AM, Nurahmadie Nurahmadie wrote: > > Hi > > > > On Thu, May 7, 2015 at 11:38 AM, Dewangga Bachrul Alam > > > >> wrote: > > > > Hello! 
> > > > Did anyone have same problem when configuring reverse proxy > nginx + > > apache, when the request came from nginx, the IP didn't shows real > > visitor. > > > > Example access.log: > > 127.0.0.1 - - [07/May/2015:09:27:30 +0700] "GET / HTTP/1.0" > 200 61925 > > 127.0.0.1 - - [07/May/2015:09:27:35 +0700] "GET / HTTP/1.0" > 200 61925 > > 127.0.0.1 - - [07/May/2015:09:27:43 +0700] "GET / HTTP/1.0" > 200 62367 > > > > My proxy config: > > proxy_redirect off; > > proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > > proxy_set_header X-Forwarded-Proto https; > > client_body_buffer_size 128k; > > proxy_connect_timeout 90; > > proxy_send_timeout 90; > > proxy_read_timeout 90; > > proxy_buffers 32 4k; > > > > In centos6, I got additional packages like mod_rpaf / > > mod_extract_forwarded. But I didn't find any similiar packages on > > centos7. > > > > Any hints? > > > > > > You don't have to use both X-Real-IP and X-Forwarded-For. Just put the > > one which actually used by the app. > > > > I just test using $_SERVER['REMOTE_ADDR']; and its only shows 127.0.0.1. > > > The remote_addr will always shows 127.0.0.1 since apache is requested by > nginx, which also binds on 127.0.0.1, not directly by users. > > > Anyway, it's should be fine to use them both (CMIIW). But I've tried it > and nothing changes, the visitors ips are not showed on apache logs. > > For additional information, I set the apache listen only to > 127.0.0.1:8080 and set the proxy pass to > http://127.0.0.1:8080; > > > And it's safer to also use $remote_addr for X-Forwarded-For rather > > than $proxy_add_x_forwarded_for, since that header can be manipulated by > > the client. > > > > For the log, check your log format at apache, it probably logging > > remote_addr (or something like that, not sure what they call it at > > apache) rather than the IP specified by X-Forwarded-For or X-Real-IP. 
> > Change it accordingly. > > > > > > Didn't know yet, you have any hints? :) > > > As I stated before, you want to change your log format to shows ip from > either X-Forwarded-For or X-Real-IP > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > regards, > Nurahmadie > -- > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Thu May 7 13:31:51 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 May 2015 16:31:51 +0300 Subject: proxy_ssl_certificate not exchanging client certificates In-Reply-To: <934284b6217affa1c751e2ab463b0d1f.NginxMailingListEnglish@forum.nginx.org> References: <20150429120504.GU32429@mdounin.ru> <934284b6217affa1c751e2ab463b0d1f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150507133151.GM98215@mdounin.ru> Hello! On Wed, Apr 29, 2015 at 05:09:26PM -0400, lieut_data wrote: > Thanks for getting back to me so quickly! > > Maxim Dounin Wrote: > ------------------------------------------------------- > > What nginx doesn't support (or, rather, explicitly forbids) is > > renegotiation. On the other hand, renegotiation is required if > > one needs to ask for a client certificate only for some URIs, so > > it's likely used in your case. You should see something like "SSL > > renegotiation disabled" in logs at notice level. > > Yes, this is exactly the problem. With your hint, I commented out the > relevant code in ngx_ssl_handshake and ngx_ssl_handle_recv -- and proxying > worked flawlessly. (Interestingly, I never saw the log you identified > because of SSL3_FLAGS_NO_RENEGOTIATE_CIPHERS having been set on the openssl > connection object.) > > I think I understand the gist of why nginx forbids client-initiated > renegotiation (denial of service concerns? 
security concerns?), but I'm not > well-versed in openssl enough to know if the same concerns apply to > server-initiated renegotiation with nginx as the client, especially when it > applies to cipher renegotiation as noted above. > > Would nginx be open to a patch that would make this use case feasible? > Perhaps as a modification to only disable these renegotiations when nginx is > the server in the SSL equation? The renegotiation is disabled for security reasons since CVE-2009-3555. While CVE-2009-3555 is believed to be mitigated by Secure Renegotiation extension, renegotiation itself, even secure, still allows various bad side effects if allowed: in particular, peer credentials and/or ciphers used may be changed unexpectedly. I don't think we care much about the above happening on an upstream connection though, so patches are welcome. (Actually, original patch I submitted back in 2009 did not touch upstream connections at all, but Igor decided to disable renegotiation completely.) -- Maxim Dounin http://nginx.org/ From shahzaib.cb at gmail.com Thu May 7 13:38:23 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 7 May 2015 18:38:23 +0500 Subject: Nginx gets halt on 15K connections !! In-Reply-To: References: Message-ID: Hi, It looks like we made the false calculation based on entertaining concurrent connections per seconds and worker_connections limit was set to be very low. I've increased this limit to 16000 and issue looks to be fixed. Here's the mechanism i used to calculate concurrent connections/sec: worker_processes * worker_connections / keepalive_timeout == concurrent connections per second Concurrent connections on our server is around 15K . Based on this i used the following values : 48 * 16000 / 15 == 51200/sec Can somebody point to me if the calculation method is false ? Regards. Shahzaib On Sun, May 3, 2015 at 3:11 AM, shahzaib shahzaib wrote: > Experts, > > Could you please do me a favor in order to solve this problem ? 
> > Regards. > Shahzaib > > On Sat, May 2, 2015 at 3:32 PM, shahzaib shahzaib > wrote: > >> Hi, >> >> We've been running nginx-1.8 instance on one of our media server to >> serve big static .mp4 files as well as small files such as .jpeg. Nginx is >> serving well under 13K connections/sec with 800Mbps outgoing network load >> but whenever requests exceed 15K connections, nginx gets halt and 'D' >> status goes all over around the nginx workers, as well as network load >> drops down to 400Mbps due to which video streaming gets stuck and after >> 5-10 minutes load starts dropping and nginx starts stabilizing again as >> well as network load gets back to 800Mbps. We've been encountering this >> fluctuating situation on each 15minutes gap (Probably). >> >> We know that 'D' status is most likely due to high Disk I/O and to >> ensure that the disk i/o could be the problem under 15K connections, we had >> enabled apache on port 8080 for testing same video stream during high load >> and buffered on apache, well the stream was fluctuating a bit but there was >> no stuck for around 5-10 minutes. In the meantime the same video was worst >> on nginx and stucked for 5minutes during buffer. >> >> We suspecting this to be related to something else than Disk I/O, reason >> is the same video under high load buffers better on apache(on port 8080). >> Also if it is related to high disk I/O, there must be no possibility that >> video should should stuck for 5-10 minutes. >> >> It looks to us that nginx gets halt when concurrent connections exceed >> 15K. We also tried optimizing backlog directive which slightly improved the >> performance but there must be something more related to nginx optimization >> which we must be missing. I have linked nginx.conf file, sysctl and vhost >> file to get better understanding of our tweaks. 
>> >> user nginx; >> worker_processes 48; >> worker_rlimit_nofile 600000; #2 filehandlers for each connection >> #error_log logs/error.log; >> #error_log logs/error.log notice; >> error_log /var/log/nginx/error.log error; >> #error_log /dev/null; >> #pid logs/nginx.pid; >> >> >> events { >> worker_connections 2048; >> use epoll; >> # use kqueue; >> } >> http { >> include mime.types; >> default_type application/octet-stream; >> # client_max_body_size 800M; >> client_body_buffer_size 128K; >> output_buffers 1 512k; >> sendfile_max_chunk 128k; >> client_header_buffer_size 256k; >> large_client_header_buffers 4 256k; >> # fastcgi_buffers 512 8k; >> # proxy_buffers 512 8k; >> # fastcgi_read_timeout 300s; >> server_tokens off; #Conceals nginx version >> access_log off; >> # access_log /var/log/nginx/access.log; >> sendfile off; >> # sendfile ; >> tcp_nodelay on; >> aio on; >> directio 512; >> # tcp_nopush on; >> client_header_timeout 120s; >> client_body_timeout 120s; >> send_timeout 120s; >> keepalive_timeout 15; >> gzip on; >> gzip_vary on; >> gzip_disable "MSIE [1-6]\."; >> gzip_proxied any; >> gzip_http_version 1.0; >> gzip_min_length 1280; >> gzip_comp_level 6; >> gzip_buffers 16 8k; >> gzip_types text/plain text/xml text/css application/x-javascript >> image/png image/x-icon image/gif image/jpeg image/jpg application/xml >> application/xml+rss text/javascr ipt application/atom+xml; >> include /usr/local/nginx/conf/vhosts/*.conf; >> # open_file_cache max=2000 inactive=20s; >> # open_file_cache_valid 60s; >> # open_file_cache_min_uses 5; >> # open_file_cache_errors off; >> >> } >> >> sysctl.conf main config : >> >> fs.file-max = 700000 >> net.core.wmem_max=6291456 >> net.core.rmem_max=6291456 >> net.ipv4.tcp_rmem= 10240 87380 6291456 >> net.ipv4.tcp_wmem= 10240 87380 6291456 >> net.ipv4.tcp_window_scaling = 1 >> net.ipv4.tcp_timestamps = 1 >> net.ipv4.tcp_sack = 1 >> net.ipv4.tcp_no_metrics_save = 1 >> net.core.netdev_max_backlog = 10000 >> >> 
net.ipv6.conf.all.disable_ipv6 = 1 >> net.ipv6.conf.default.disable_ipv6 = 1 >> net.ipv6.conf.lo.disable_ipv6 = 1 >> net.ipv6.conf.eth0.disable_ipv6 = 1 >> net.ipv6.conf.eth1.disable_ipv6 = 1 >> net.ipv6.conf.ppp0.disable_ipv6 = 1 >> net.ipv6.conf.tun0.disable_ipv6 = 1 >> vm.dirty_background_ratio = 50 >> vm.dirty_ratio = 80 >> net.ipv4.tcp_fin_timeout = 30 >> net.ipv4.ip_local_port_range=1024 65000 >> net.ipv4.tcp_tw_reuse = 1 >> net.netfilter.nf_conntrack_tcp_timeout_established = 54000 >> net.ipv4.netfilter.ip_conntrack_generic_timeout = 120 >> net.ipv4.tcp_syn_retries=2 >> net.ipv4.tcp_synack_retries=2 >> net.ipv4.netfilter.ip_conntrack_max = 90536 >> net.core.somaxconn = 10000 >> >> Vhost : >> >> server { >> listen 80 backlog=10000; >> server_name archive3.domain.com archive3.domain.com >> www.archive3.domain.com www.archive3.domain.com; >> access_log off; >> location / { >> root /content/archive; >> index index.html index.htm index.php; >> autoindex off; >> } >> >> location /files/thumbs/ { >> root /data/nginx/archive; >> add_header X-Cache SSD; >> expires max; >> } >> >> location ~ \.(flv)$ { >> flv; >> root /content/archive; >> # aio on; >> # directio 512; >> # output_buffers 1 2m; >> expires 7d; >> valid_referers none blocked domain.com *.domain.com *. >> facebook.com *.domain.com *.twitter.com *.domain.com *.gear3rd.net >> domain.com *.domain.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; >> if ($invalid_referer) { >> return 403; >> } >> } >> >> >> location ~ \.(mp4)$ { >> mp4; >> mp4_buffer_size 4M; >> mp4_max_buffer_size 10M; >> expires 7d; >> root /content/archive; >> valid_referers none blocked domain.com *.domain.com *. 
>> facebook.com *.domain.com *.twitter.com *.domain.com *.gear3rd.net >> domain.com *.domain.com tunemedia.tv www.tunemedia.tv embed.tunemedia.tv; >> if ($invalid_referer) { >> return 403; >> } >> } >> >> # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 >> location ~ \.php$ { >> root /content/archive; >> fastcgi_pass 127.0.0.1:9000; >> fastcgi_index index.php; >> fastcgi_param SCRIPT_FILENAME >> $document_root$fastcgi_script_name; >> include fastcgi_params; >> fastcgi_read_timeout 10000; >> } >> >> location ~ /\.ht { >> deny all; >> } >> >> >> location ~ ^/(status|ping)$ { >> access_log off; >> allow 127.0.0.1; >> >> deny all; >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> include fastcgi_params; >> fastcgi_pass 127.0.0.1:9000; >> } >> } >> >> Server Specs : >> >> L5630 (8cores, 16threads) >> RAM 64GB >> 12 x 3TB @ SATA Hardware Raid-6 >> >> Here's the screenshot of server load during 15K connections: >> >> http://prntscr.com/70l68q >> >> Regards. >> Shahzaib >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 7 14:17:14 2015 From: nginx-forum at nginx.us (zilog80) Date: Thu, 07 May 2015 10:17:14 -0400 Subject: long wait configtest Message-ID: <8aa56250fdcb836ecb650a6b72584077.NginxMailingListEnglish@forum.nginx.org> Hi all after several modification (implemented ocsp stapling) the command "service nginx configtest" I wait the return of configtest for circa one minute i don't understand the problem, before the command run in on second or less. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258721,258721#msg-258721 From mdounin at mdounin.ru Thu May 7 14:47:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 May 2015 17:47:38 +0300 Subject: long wait configtest In-Reply-To: <8aa56250fdcb836ecb650a6b72584077.NginxMailingListEnglish@forum.nginx.org> References: <8aa56250fdcb836ecb650a6b72584077.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150507144738.GO98215@mdounin.ru> Hello! On Thu, May 07, 2015 at 10:17:14AM -0400, zilog80 wrote: > Hi all > > after several modification (implemented ocsp stapling) the command > > "service nginx configtest" > > I wait the return of configtest for circa one minute > > i don't understand the problem, before the command run in on second or less. Likely there is a problem with name resolution. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu May 7 14:58:11 2015 From: nginx-forum at nginx.us (zilog80) Date: Thu, 07 May 2015 10:58:11 -0400 Subject: long wait configtest In-Reply-To: <20150507144738.GO98215@mdounin.ru> References: <20150507144738.GO98215@mdounin.ru> Message-ID: <6609e4116bee9a07d963fb817c0048cc.NginxMailingListEnglish@forum.nginx.org> Hi thanks for the answer infact I add in the main "resolver 8.8.8.8 8.8.4.4;" you means that in my config I have a names without dns resolution? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258721,258723#msg-258723 From mdounin at mdounin.ru Thu May 7 15:36:12 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 May 2015 18:36:12 +0300 Subject: long wait configtest In-Reply-To: <6609e4116bee9a07d963fb817c0048cc.NginxMailingListEnglish@forum.nginx.org> References: <20150507144738.GO98215@mdounin.ru> <6609e4116bee9a07d963fb817c0048cc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150507153612.GQ98215@mdounin.ru> Hello! 
On Thu, May 07, 2015 at 10:58:11AM -0400, zilog80 wrote: > Hi thanks for the answer > > infact I add in the main > > "resolver 8.8.8.8 8.8.4.4;" > > you means that in my config I have a names without dns resolution? Hostnames of OCSP resolvers are resolved during configuration testing if OCSP Stapling is configured (much like all other hostnames in a configuration), and this is done using a system resolver. If your system resolver is slow this will affect configuration testing time. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu May 7 15:54:21 2015 From: nginx-forum at nginx.us (173279834462) Date: Thu, 07 May 2015 11:54:21 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <20150408152751.GI88631@mdounin.ru> References: <20150408152751.GI88631@mdounin.ru> Message-ID: <749ae1b1b8b54e9977a7fca09f6be8c9.NginxMailingListEnglish@forum.nginx.org> > Note that this isn't really indicate anything: there are two forms of OCSP requests, POST and GET. And Firefox uses POST, while nginx uses GET. Given the fact that the responder was completely broken just a few days ago - it's quite possible that it's still broken for GETs in some cases. To comply with local security policy, we disabled POST globally on all public-facing servers. This has the advantage of killing web 2.0 and all of its vulnerabilities with one simple rule, emphasis on *killing web 2.0*. Yes, the sites are read-only, and we just love it that way. For each vhost, "ssl_certificate_key" includes the vhost's private key, "ssl_certificate" includes the vhosts's public key (leaf) AND the intermediate key of the Issuer, "ssl_trusted_certificate" includes the certificate chain in full (leaf + intermediate + root CA), all in PEM format. 
The openssl test works as expected: vhost=""; echo Q | openssl s_client -CAfile /path/to/your/local/trust/store/ca-bundle.pem -tls1 -tlsextdebug -status -connect $vhost:443 -servername $vhost 2>&1 | less There are two problems. problem 1 ------------- nginx's "ssl_certificate" (note the singular) is truly a bundle of the certificate and the intermediate. In fact, if we remove the intermediate, we break the chain. The description for "ssl_certificate" is also misleading. "Specifies a file with the certificate in the PEM format for the given virtual server. If intermediate certificates should be specified in addition to a primary certificate, they should be specified in the same file in the following order: the primary certificate comes first, then the intermediate certificates. A secret key in the PEM format may be placed in the same file. " http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate Although the above sentence "If intermediate certificates should be specified" suggests that one may omit the intermediate certificate, in reality you can only do this if you are the CA. I do not wish to sound opinionated here, because I am making an effort to stick to the facts: if we remove the intermediate, we do break the chain and the openssl test complains loudly. Therefore, if your own facts correspond to the above, then the solution is to edit nginx's source to limit "ssl_certificate" to the leaf's public key only, and correct the description accordingly. The intermediate(s) can be bundled in a separate file. It would be easier on the eyes to re-write the keywords as well: ssl_certificate_key -----> private_certificate ssl_certificate 1/2 ------> public_certificate ssl_certificate 2/2 -------> public_intermediate_certificates ssl_trusted_certificate -> public_ca_certificate In so doing, the configuration would finally be unambiguous. 
problem 2 -------------- If it is true that FF uses POST to *read*, by default, then this explains the original problem with OCSP, and the fact that nginx is well configured and openssl and other browsers do work as expected. Google and other search engines show that Firefox has been affected by this OCSP problem for a long time. Perhaps they could start using GET like everybody else? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,258726#msg-258726 From mdounin at mdounin.ru Thu May 7 17:09:51 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 7 May 2015 20:09:51 +0300 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <749ae1b1b8b54e9977a7fca09f6be8c9.NginxMailingListEnglish@forum.nginx.org> References: <20150408152751.GI88631@mdounin.ru> <749ae1b1b8b54e9977a7fca09f6be8c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150507170951.GS98215@mdounin.ru> Hello! On Thu, May 07, 2015 at 11:54:21AM -0400, 173279834462 wrote: [...] > problem 1 > ------------- > > nginx's "ssl_certificate" (note the singular) is truly a bundle of the > certificate and the intermediate. > In fact, if we remove the intermediate, we break the chain. > > The description for "ssl_certificate" is also misleading. > > "Specifies a file with the certificate in the PEM format for the given > virtual server. If intermediate certificates should be specified in addition > to a primary certificate, they should be specified in the same file in the > following order: the primary certificate comes first, then the intermediate > certificates. A secret key in the PEM format may be placed in the same file. > " > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate > > Although the above sentence "If intermediate certificates should be > specified" suggests that one may omit the intermediate certificate, in > reality you can only do this if you are the CA. 
I do not wish to sound > opinionated here, because I am making an effort to stick to the facts: if we > remove the intermediate, we do break the chain and the openssl test > complains loudly. This depends on how your certificate is issued. If your certificate is issued directly by root CA certificate, then you don't need any extra certs here. If there are some intermediate certs, then you'll have to put them also. When this directive was introduced, almost all certificates were issued directly by roots. No in most cases intermediate certificates are additionally required. Either way, this doesn't actually change things: think of it as "SSL certificate and certificate chain" if you want some better mnemonic. > Therefore, if your own facts correspond to the above, then the solution is > to edit nginx's source to limit "ssl_certificate" to the leaf's public key > only, and correct the description accordingly. The intermediate(s) can be > bundled in a separate file. > > It would be easier on the eyes to re-write the keywords as well: > > ssl_certificate_key -----> private_certificate > ssl_certificate 1/2 ------> public_certificate > ssl_certificate 2/2 -------> public_intermediate_certificates > ssl_trusted_certificate -> public_ca_certificate > > In so doing, the configuration would finally be unambiguous. Some most obvious issues: - the "ssl_" prefix, common one for all ngx_http_ssl_module directives, is lost; - the term "private certificate" is just wrong, there is no such thing. So no, thanks, doesn't looks like an improvement for me. > problem 2 > -------------- > > If it is true that FF uses POST to *read*, by default, then this explains > the original problem with OCSP, and the fact that nginx is well configured > and openssl and other browsers do work as expected. Google and other search > engines show that Firefox has been affected by this OCSP problem for a long > time. Perhaps they could start using GET like everybody else? 
Unless you are CA and running your own OCSP server, you shouldn't care. If you do - you probably already know that not everybody uses GET for OCSP requests, and most notable exception is OpenSSL itself. Actually, there are more interoperability problems with GET OCSP requests than with POST ones, and that's probably why security.OCSP.GET.enabled is set to "false" by default. Also note that it's a wrong list to suggest changes to Firefox. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu May 7 17:56:02 2015 From: nginx-forum at nginx.us (grigory) Date: Thu, 07 May 2015 13:56:02 -0400 Subject: Static files bad loading time In-Reply-To: <20150430171511.GF29618@daoine.org> References: <20150430171511.GF29618@daoine.org> Message-ID: <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> Hi Francis, > Can you tell from nginx logs whether the slowness is due to > slow-read-from-disk, or slow-write-to-client, or something else? Could you please tell me how to check this out? My nginx logs do not contain this sort of information. > Can you find any pattern in the requests which respond more slowly than > you want? Certain browsers, certain times of day, anything like that? Unfortunately, I didn't find any pattern. It's just sometimes loads in 2 seconds and in another time -- in 10-15 seconds. I mean same 300KB image within a couple of refreshes in a browser. I've tested the problem on different browsers and different times of day -- no luck. > If you make the request from the machine itself, so network issues should > be minor, does it still show sometimes being slow? When I make request from machine itself, the image loads pretty fast. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258372,258730#msg-258730 From shahzaib.cb at gmail.com Thu May 7 18:27:44 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Thu, 7 May 2015 23:27:44 +0500 Subject: Static files bad loading time In-Reply-To: <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> References: <20150430171511.GF29618@daoine.org> <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, There are some tweaks required to nginx configurations. If the same image which usually takes second to response can takes upto 10-20 seconds to load, the wide guess would be exceeding concurrent connections at peak traffic. The directive worker_rlimit_nofile value is set much lower as compare to worker_connections. Nginx uses upto 2 file descriptors per connections, so i would suggest to increase worker_rlimit_nofile value to 124000. Also, default keepalive_timeout value is 65sec due to which your current nginx configuration is not optimized to serve more than 2000 concurrent connections. Here's how : (Worker_process)4 * 32768(worker_connections) / 65(Keepalive_timeout == 2016 connections per seconds. So i would suggest to decrease keepalive_timeout to 5sec directive and increase worker_connections to 60000. Also make sure to decrease timeout values. Regards. Shahzaib On Thu, May 7, 2015 at 10:56 PM, grigory wrote: > Hi Francis, > > > Can you tell from nginx logs whether the slowness is due to > > slow-read-from-disk, or slow-write-to-client, or something else? > > Could you please tell me how to check this out? > My nginx logs do not contain this sort of information. > > > Can you find any pattern in the requests which respond more slowly than > > you want? Certain browsers, certain times of day, anything like that? > > Unfortunately, I didn't find any pattern. It's just sometimes loads in 2 > seconds and in another time -- in 10-15 seconds. 
I mean same 300KB image > within a couple of refreshes in a browser. I've tested the problem on > different browsers and different times of day -- no luck. > > > If you make the request from the machine itself, so network issues should > > be minor, does it still show sometimes being slow? > > When I make request from machine itself, the image loads pretty fast. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258372,258730#msg-258730 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 7 18:28:12 2015 From: nginx-forum at nginx.us (173279834462) Date: Thu, 07 May 2015 14:28:12 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <20150507170951.GS98215@mdounin.ru> References: <20150507170951.GS98215@mdounin.ru> Message-ID: <03f2def6b3524f6b2b0de9d88e9cbafc.NginxMailingListEnglish@forum.nginx.org> > This depends on how your certificate is issued. If your certificate is issued directly by root CA certificate, then you don't need any extra certs here. If there are some intermediate certs, then you'll have to put them also. > When this directive was introduced, almost all certificates were issued directly by roots. No in most cases intermediate certificates are additionally required. Either way, this doesn't actually change things: think of it as "SSL certificate and certificate chain" if you want some better mnemonic. The fact remains that "ssl_certificate" is singular, and its description is less than clear. So, thank you for the explanation, because it completes the original description. Certificate chains are way longer than 2 (leaf + ca) nowadays. CRL checks can encompass 20+ nodes. 
It is for this reason, the lenght of the chain, that I still remain of the opinion that "ssl_certificate" ought to be limited to the leaf's own public certificate. The intermediates ought to be bundled on a separate file. Labels... ssl_certificate_key -----> ssl_private_certificate[...cough...]_key ssl_certificate 1/2 ------> ssl_public_certificate ssl_certificate 2/2 -------> ssl_public_intermediate_certificates ssl_trusted_certificate -> ssl_public_ca_certificate I hate the first two, and definitely prefer the original. The third could simply be "ssl_intermediates", and the fourth "ssl_ca". Whatever, I think they will stay as they are anyway. > security.OCSP.GET.enabled is set to "false" by default In my FF it set to "false" too, and flipping it does not make any difference, so my local problem is neither with GET nor with POST. It turns out that the problem is "security.ssl.enable_ocsp_stapling", which is "true" by default. If I disable it, then FF loads the web sites. If I re-enable it, then FF complains again: > Secure Connection Failed > An error occurred during a connection to madreacqua.org. > Invalid OCSP signing certificate in OCSP response. > (Error code: sec_error_ocsp_invalid_signing_cert) > > The page you are trying to view cannot be shown because the authenticity > of the received data could not be verified. > Please contact the website owners to inform them of this problem. If FF is correct, then nginx is returning a bad certificate, and we are back to square one. Is it the bundle of certificates? No, because I have verified the chain from nginx, both by hand and automatically with openssl and libressl. It is GET instead of POST again? No, it is not, because FF "fails" in both cases. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,258731#msg-258731 From vbart at nginx.com Fri May 8 11:42:06 2015 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Fri, 08 May 2015 14:42:06 +0300 Subject: Static files bad loading time In-Reply-To: References: <20150430171511.GF29618@daoine.org> <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2197757.bzqxrpuJIj@vbart-workstation> On Thursday 07 May 2015 23:27:44 shahzaib shahzaib wrote: > Hi, > > There are some tweaks required to nginx configurations. If the same > image which usually takes second to response can takes upto 10-20 seconds > to load, the wide guess would be exceeding concurrent connections at peak > traffic. The directive worker_rlimit_nofile value is set much lower as > compare to worker_connections. Nginx uses upto 2 file descriptors per > connections, so i would suggest to increase worker_rlimit_nofile value to > 124000. > > Also, default keepalive_timeout value is 65sec due to which your current > nginx configuration is not optimized to serve more than 2000 concurrent > connections. Here's how : > > (Worker_process)4 * 32768(worker_connections) / 65(Keepalive_timeout == > 2016 connections per seconds. > > So i would suggest to decrease keepalive_timeout to 5sec directive and > increase worker_connections to 60000. > > Also make sure to decrease timeout values. > The keepalive_timeout has nothing to do with the maximum number of concurrent connections per second. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Fri May 8 12:46:38 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 8 May 2015 15:46:38 +0300 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <03f2def6b3524f6b2b0de9d88e9cbafc.NginxMailingListEnglish@forum.nginx.org> References: <20150507170951.GS98215@mdounin.ru> <03f2def6b3524f6b2b0de9d88e9cbafc.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150508124638.GU98215@mdounin.ru> Hello! On Thu, May 07, 2015 at 02:28:12PM -0400, 173279834462 wrote: [...] 
> It turns out that the problem is "security.ssl.enable_ocsp_stapling", which > is > "true" by default. If I disable it, then FF loads the web sites. If I > re-enable it, > then FF complains again: > > > Secure Connection Failed > > An error occurred during a connection to madreacqua.org. > > Invalid OCSP signing certificate in OCSP response. > > (Error code: sec_error_ocsp_invalid_signing_cert) > > > > The page you are trying to view cannot be shown because the authenticity > > of the received data could not be verified. > > Please contact the website owners to inform them of this problem. > > If FF is correct, then nginx is returning a bad certificate, and we are back > to square one. The "Invalid OCSP signing certificate in OCSP response" likely means that an OCSP response returned by nginx is signed by an invalid certificate, at least that's what written. Unless you've forced nginx to return something invalid using the ssl_stapling_file directive, it is probably due to a behaviour of your CA. Ask your CA for more information. Trivial workaround on nginx side is to switch off ssl_stapling. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Fri May 8 13:05:14 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 08 May 2015 16:05:14 +0300 Subject: Nginx gets halt on 15K connections !! In-Reply-To: References: Message-ID: <1600040.hQ7Kp23Sd5@vbart-workstation> On Thursday 07 May 2015 18:38:23 shahzaib shahzaib wrote: > Hi, > > It looks like we made the false calculation based on entertaining > concurrent connections per seconds and worker_connections limit was set to > be very low. I've increased this limit to 16000 and issue looks to be > fixed. Here's the mechanism i used to calculate concurrent connections/sec: > > worker_processes * worker_connections / keepalive_timeout == concurrent > connections per second > > Concurrent connections on our server is around 15K . 
Based on this i used > the following values : > > 48 * 16000 / 15 == 51200/sec > > Can somebody point to me if the calculation method is false ? > [..] It's false. The keepalive_timeout has nothing to do with the concurrent connections per second. In fact, nginx can close an idle connection at any time when it reaches the limit of worker_connections. What's really important is the connections that nginx cannot close. The active ones. How long the connection is active depends on the request processing time. The approximate calculation looks like this: worker_processes * worker_connections * K / average $request_time where K is the average number of connections per request (for example, if you do proxy pass, then nginx needs additional connection to your backend). wbr, Valentin V. Bartenev From shahzaib.cb at gmail.com Fri May 8 13:05:51 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 8 May 2015 18:05:51 +0500 Subject: Static files bad loading time In-Reply-To: <2197757.bzqxrpuJIj@vbart-workstation> References: <20150430171511.GF29618@daoine.org> <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> <2197757.bzqxrpuJIj@vbart-workstation> Message-ID: Well, reducing keepalive_timeout and increasing the values of worker_connections resolved our issue. Following is the reference we used to tweak nginx config : http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/ Thanks. Shahzaib On Fri, May 8, 2015 at 4:42 PM, Valentin V. Bartenev wrote: > On Thursday 07 May 2015 23:27:44 shahzaib shahzaib wrote: > > Hi, > > > > There are some tweaks required to nginx configurations. If the same > > image which usually takes second to response can takes upto 10-20 seconds > > to load, the wide guess would be exceeding concurrent connections at peak > > traffic. The directive worker_rlimit_nofile value is set much lower as > > compare to worker_connections. 
Nginx uses up to > > 2 file descriptors per connection, so I would suggest increasing the worker_rlimit_nofile value to > > 124000. > > > > Also, the default keepalive_timeout value is 65 sec, due to which your current > > nginx configuration is not optimized to serve more than 2000 concurrent > > connections. Here's how: > > > > 4 (worker_processes) * 32768 (worker_connections) / 65 (keepalive_timeout) == > > 2016 connections per second. > > > > So I would suggest decreasing the keepalive_timeout directive to 5 sec and > > increasing worker_connections to 60000. > > > > Also make sure to decrease timeout values. > > > > The keepalive_timeout has nothing to do with the maximum number of > concurrent connections per second. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Fri May 8 13:15:59 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 8 May 2015 18:15:59 +0500 Subject: Nginx gets halt on 15K connections !! In-Reply-To: <1600040.hQ7Kp23Sd5@vbart-workstation> References: <1600040.hQ7Kp23Sd5@vbart-workstation> Message-ID: Hi Valentin, >>What's really important is the connections that nginx cannot close. The active ones. How long the connection is active depends on the request processing time. Thanks for pointing that out to me. Nginx is serving around 800Mb of mp4 files, but the problem is we're unable to track request processing time. Could you please let us know a method/command to find how long the connection remains active while a request is being processed? Though, one thing is for sure, increasing worker_connections resolved our problem. The current connections setting is quite high but is working well, handling a large number of connections with 900Mbps of outward traffic. 
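For reference, the per-request processing time discussed here can be exposed through nginx's built-in $request_time variable. A minimal sketch (the log format name and file path below are illustrative, not from the thread):

```nginx
http {
    # $request_time is the time in seconds (millisecond resolution) spent
    # processing the request, from the first bytes read from the client
    # until the last bytes are sent and logging happens.
    log_format timing '$remote_addr "$request" $status '
                      '$body_bytes_sent $request_time';

    # "timing" is the custom format defined above, not a built-in name.
    access_log /var/log/nginx/access_timing.log timing;
}
```

Averaging the last field of such a log would give the "average $request_time" term used in the capacity estimate quoted earlier in the thread.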
Here are our worker and connection settings: worker_processes 48; worker_connections 102400; Regards. Shahzaib On Fri, May 8, 2015 at 6:05 PM, Valentin V. Bartenev wrote: > On Thursday 07 May 2015 18:38:23 shahzaib shahzaib wrote: > > Hi, > > > > It looks like we made the false calculation based on entertaining > > concurrent connections per seconds and worker_connections limit was set > to > > be very low. I've increased this limit to 16000 and issue looks to be > > fixed. Here's the mechanism i used to calculate concurrent > connections/sec: > > > > worker_processes * worker_connections / keepalive_timeout == concurrent > > connections per second > > > > Concurrent connections on our server is around 15K . Based on this i used > > the following values : > > > > 48 * 16000 / 15 == 51200/sec > > > > Can somebody point to me if the calculation method is false ? > > > [..] > > It's false. > > The keepalive_timeout has nothing to do with the concurrent connections > per second. > In fact, nginx can close an idle connection at any time when it reaches > the limit > of worker_connections. > > What's really important is the connections that nginx cannot close. The > active ones. > How long the connection is active depends on the request processing time. > > The approximate calculation looks like this: > > worker_processes * worker_connections * K / average $request_time > > where K is the average number of connections per request (for example, if > you do proxy > pass, then nginx needs an additional connection to your backend). > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri May 8 13:18:26 2015 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Fri, 08 May 2015 16:18:26 +0300 Subject: Static files bad loading time In-Reply-To: References: <20150430171511.GF29618@daoine.org> <2197757.bzqxrpuJIj@vbart-workstation> Message-ID: <1515622.ZzX1F9tFDS@vbart-workstation> On Friday 08 May 2015 18:05:51 shahzaib shahzaib wrote: > Well, reducing keepalive_timeout and increasing the values of > worker_connections resolved our issue. Following is the reference we used > to tweak nginx config : > > http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/ > This reference is quite inaccurate. Don't trust arbitrary articles in the internet. See here for the detailed explanation: http://mailman.nginx.org/pipermail/nginx/2015-May/047460.html wbr, Valentin V. Bartenev From vbart at nginx.com Fri May 8 13:26:35 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 08 May 2015 16:26:35 +0300 Subject: Nginx gets halt on 15K connections !! In-Reply-To: References: <1600040.hQ7Kp23Sd5@vbart-workstation> Message-ID: <3277650.CPREVc6VvD@vbart-workstation> On Friday 08 May 2015 18:15:59 shahzaib shahzaib wrote: > Hi Valentine, > > > >>What's really important is the connections that nginx cannot close. The > >>active ones. > >>How long the connection is active depends on the request processing time. > > > Thanks for pointing that to me. Nginx serving around 800Mb of mp4 files > but the problem is we're unable to track request processing time. Could you > please let us know some method/command to find how long the connection > remains active during request being process ? You can add the $request_time variable to your access logs: http://nginx.org/r/$request_time And then analyze the logs. > > Though, one thing is for sure, increasing worker_connections resolved our > problem. Current connections setting is quite high but working well to > entertain large number of connections with 900Mbps outward traffic. 
Here are > our worker and connection settings: > > worker_processes 48; > worker_connections 102400; > Actually, I regularly see you trying to solve complicated problems on this mailing list, so my advice would be to buy professional support: http://nginx.com/support/ wbr, Valentin V. Bartenev From shahzaib.cb at gmail.com Fri May 8 14:44:11 2015 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 8 May 2015 19:44:11 +0500 Subject: Static files bad loading time In-Reply-To: <1515622.ZzX1F9tFDS@vbart-workstation> References: <20150430171511.GF29618@daoine.org> <2197757.bzqxrpuJIj@vbart-workstation> <1515622.ZzX1F9tFDS@vbart-workstation> Message-ID: Right, thanks. Btw, we used another official nginx doc for optimization, and the most effective optimization was tweaking the backlog from the default 512 to 4096 in the nginx listen directive. http://nginx.com/blog/tuning-nginx/ Regards. Shahzaib On Fri, May 8, 2015 at 6:18 PM, Valentin V. Bartenev wrote: > On Friday 08 May 2015 18:05:51 shahzaib shahzaib wrote: > > Well, reducing keepalive_timeout and increasing the values of > > worker_connections resolved our issue. Following is the reference we used > > to tweak nginx config : > > > > > http://blog.martinfjordvald.com/2011/04/optimizing-nginx-for-high-traffic-loads/ > > > > This reference is quite inaccurate. Don't trust arbitrary articles in the > internet. > > See here for the detailed explanation: > http://mailman.nginx.org/pipermail/nginx/2015-May/047460.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Fri May 8 14:49:51 2015 From: nginx-forum at nginx.us (DrMickeyLauer) Date: Fri, 08 May 2015 10:49:51 -0400 Subject: Intermittent SSL Handshake Errors In-Reply-To: References: <3841706c7d09ea19e6b3baeb9391b66f.NginxMailingListEnglish@forum.nginx.org> <7757d25fc59ff89f5e7f6d46f9f29261.NginxMailingListEnglish@forum.nginx.org> Message-ID: <567473f84c6632580823ddaa34cb026b.NginxMailingListEnglish@forum.nginx.org> First off, thanks to all who contributed to this thread. I must admit I did not understand much of it; however, as someone plagued by this bug (we have a bunch of cherrypy REST servers talking to iOS and Android clients and have seen a lot of those fallback errors), I'm at a bit of a loss on how to proceed here with regard to the future. Yes, I have downgraded my libssl to deb7u12; however, I wonder whether the openssl team, debian, or anyone capable of fixing this issue for good in future openssl releases is aware of what we found here. How to proceed? Especially in light of a new debian release (not sure whether I can downgrade to deb7u12 on jessie...). Best regards, Michael Lauer. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256373,258752#msg-258752 From nginx-forum at nginx.us Fri May 8 16:01:47 2015 From: nginx-forum at nginx.us (zilog80) Date: Fri, 08 May 2015 12:01:47 -0400 Subject: long wait configtest In-Reply-To: <20150507153612.GQ98215@mdounin.ru> References: <20150507153612.GQ98215@mdounin.ru> Message-ID: <86856fa38d71b28edd982963cd8375ca.NginxMailingListEnglish@forum.nginx.org> Can I bypass the resolver test in configtest? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258721,258753#msg-258753 From vbart at nginx.com Fri May 8 16:06:29 2015 From: vbart at nginx.com (Valentin V. 
Bartenev) Date: Fri, 08 May 2015 19:06:29 +0300 Subject: long wait configtest In-Reply-To: <86856fa38d71b28edd982963cd8375ca.NginxMailingListEnglish@forum.nginx.org> References: <20150507153612.GQ98215@mdounin.ru> <86856fa38d71b28edd982963cd8375ca.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6597777.HjAC5QEC5v@vbart-workstation> On Friday 08 May 2015 12:01:47 zilog80 wrote: > Can I bypass the resolver test in configtest? > No. wbr, Valentin V. Bartenev From bm_witness at yahoo.com Fri May 8 20:09:28 2015 From: bm_witness at yahoo.com (BRM) Date: Fri, 8 May 2015 20:09:28 +0000 (UTC) Subject: Problems with nginx reverse proxy... Message-ID: <1230075692.3647623.1431115768048.JavaMail.yahoo@mail.yahoo.com> I have a python application hosted under gunicorn that is expected to have long-lived requests. It works just fine if we hit the application directly; however, we want to have nginx in front of it and are having issues with 504 errors. I found some settings on-line that seem to be what we want to change, specifically the proxy_*_timeouts and send_timeout, and applied them to an environment. However, no matter what I try it always fails at 60 seconds. Here are the settings I am presently trying:

# Force HTTP/1.1 over proxy and its Keep-Alive functionality
proxy_http_version 1.1;
proxy_set_header Connection "";
# Disable request buffering so requests go immediately to server
proxy_request_buffering off;
# Timeouts
# Connect, Send Normal Timeout (60 seconds) OK
# Proxy Back-end Timeout (10m default) probably needs to be long
# Read/Client Send Timeout (60 second default) must be very long
proxy_timeout 3600;
proxy_connect_timeout 1800;
proxy_read_timeout 1800;
proxy_send_timeout 1800;
send_timeout 1800;

These are all applied in the location section. 
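A minimal sketch of the kind of location block being described, with a placeholder path and upstream address. Note that proxy_timeout is not an ngx_http_proxy_module directive (it belongs to the stream and mail proxy modules), so including it inside an http location should make "nginx -t" fail with an unknown-directive error; it is omitted below.

```nginx
# Sketch only: /app/ and 127.0.0.1:8000 are placeholders.
location /app/ {
    # Keep the upstream connection usable for long-lived requests.
    proxy_http_version 1.1;
    proxy_set_header Connection "";
    proxy_request_buffering off;

    # Each of these defaults to 60s, which matches the observed cutoff.
    proxy_connect_timeout 1800s;
    proxy_read_timeout    1800s;  # max idle time between two reads from the backend
    proxy_send_timeout    1800s;  # max idle time between two writes to the backend
    send_timeout          1800s;  # max idle time between two writes to the client

    proxy_pass http://127.0.0.1:8000;
}
```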
I've tried these settings under both the location section and the server section, but there doesn't seem to be any difference. What am I missing? TIA, Ben -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 8 21:27:24 2015 From: nginx-forum at nginx.us (TPFreak) Date: Fri, 08 May 2015 17:27:24 -0400 Subject: Fwd: Re: comparing two variables In-Reply-To: <4DF6D383.4000403@ohlste.in> References: <4DF6D383.4000403@ohlste.in> Message-ID: Same issue, need to know how! Posted at Nginx Forum: http://forum.nginx.org/read.php?2,206345,258760#msg-258760 From dennisml at conversis.de Sat May 9 17:37:20 2015 From: dennisml at conversis.de (Dennis Jacobfeuerborn) Date: Sat, 09 May 2015 19:37:20 +0200 Subject: static file performance "staircase" pattern Message-ID: <554E45D0.90106@conversis.de> Hi, I'm trying to find out how to effectively deliver pages with lots of images on a page. Attached you see opening a static html page that contains lots of img tags pointing to static images. Please also note that all images are cached in the browser (hence the 304 response) so no actual data needs to be downloaded. All of this is happening on a CentOS 7 system using nginx 1.6. The question I have is why is it that the responses get increasingly longer? There is nothing else happening on that server and I also tried various optimizations like keepalive, multi_accept, epoll, open_file_cache, etc. but nothing seems to get rid of that "staircase" pattern in the image. Does anybody have an idea what the cause is for this behavior and how to improve it? Regards, Dennis -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx-static.png Type: image/png Size: 164212 bytes Desc: not available URL: From paul.j.smith0 at gmail.com Sat May 9 18:03:59 2015 From: paul.j.smith0 at gmail.com (Paul Smith) Date: Sat, 9 May 2015 12:03:59 -0600 Subject: static file performance "staircase" pattern In-Reply-To: <554E45D0.90106@conversis.de> References: <554E45D0.90106@conversis.de> Message-ID: On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn wrote: > Hi, > I'm trying to find out how to effectively deliver pages with lots of > images on a page. Attached you see opening a static html page that > contains lots of img tags pointing to static images. Please also note > that all images are cached in the browser (hence the 304 response) so no > actual data needs to be downloaded. > All of this is happening on a CentOS 7 system using nginx 1.6. > > The question I have is why is it that the responses get increasingly > longer? There is nothing else happening on that server and I also tried > various optimizations like keepalive, multi_accept, epoll, > open_file_cache, etc. but nothing seems to get rid of that "staircase" > pattern in the image. > > Does anybody have an idea what the cause is for this behavior and how to > improve it? > > Regards, > Dennis > I am not an expert but I believe that most browsers only make between 4 to 6 simultaneous connections to a domain. So the first round of requests are sent and the response received and then the second round go out and are received back and so forth. Doing a search for something like "max downloads per domain" may bring you better information. 
Paul From lucas at slcoding.com Sat May 9 18:24:00 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Sat, 09 May 2015 20:24:00 +0200 Subject: static file performance "staircase" pattern In-Reply-To: References: <554E45D0.90106@conversis.de> Message-ID: <554E50C0.3090600@slcoding.com> What you should do, to increase the concurrent amount of requests, is to use domain-sharding, since as Paul mentioned, browsers have between 4 and 8 (actually) simultaneous connections per domain, meaning if you introduce static1,2,3.domain.com, you will increase your concurrency. But at same time you also need to be aware, that this can have a negative effect on your performance if you put too many domains, there's no golden rule on how many you need, it's all a site by site case, and it differs. Also take into account your end-users connection can be limiting things heavily as well if you put too much concurrency (thus negative effect) - if you have a high number of concurrent requests being processed it will slow down the download time of each, meaning the perceived performance that the user see might get worse because it feels like the page is slower. - Lucas > Paul Smith > 9 May 2015 20:03 > On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn > > I am not an expert but I believe that most browsers only make between > 4 to 6 simultaneous connections to a domain. So the first round of > requests are sent and the response received and then the second round > go out and are received back and so forth. Doing a search for > something like "max downloads per domain" may bring you better > information. > > Paul > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Dennis Jacobfeuerborn > 9 May 2015 19:37 > Hi, > I'm trying to find out how to effectively deliver pages with lots of > images on a page. 
Attached you see opening a static html page that > contains lots of img tags pointing to static images. Please also note > that all images are cached in the browser (hence the 304 response) so no > actual data needs to be downloaded. > All of this is happening on a CentOS 7 system using nginx 1.6. > > The question I have is why is it that the responses get increasingly > longer? There is nothing else happening on that server and I also tried > various optimizations like keepalive, multi_accept, epoll, > open_file_cache, etc. but nothing seems to get rid of that "staircase" > pattern in the image. > > Does anybody have an idea what the cause is for this behavior and how to > improve it? > > Regards, > Dennis > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: compose-unknown-contact.jpg Type: image/jpeg Size: 770 bytes Desc: not available URL: From matrutherford at gmail.com Sun May 10 06:46:58 2015 From: matrutherford at gmail.com (Matt Rutherford) Date: Sun, 10 May 2015 07:46:58 +0100 Subject: nginx conf 502 on reverse proxy location Message-ID: my http conf is at http://p.ngx.cc/2018 currently getting me a 502 when I hit my_server.com/oauth/ Any idea what I'm doing wrong? thanks -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Sun May 10 07:01:04 2015 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 10 May 2015 03:01:04 -0400 Subject: nginx conf 502 on reverse proxy location In-Reply-To: References: Message-ID: <03e36475552545c649faa30f6e33b5c5.NginxMailingListEnglish@forum.nginx.org> Try location /oauth { Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258776,258778#msg-258778 From reallfqq-nginx at yahoo.fr Sun May 10 14:17:20 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 10 May 2015 16:17:20 +0200 Subject: nginx conf 502 on reverse proxy location In-Reply-To: <03e36475552545c649faa30f6e33b5c5.NginxMailingListEnglish@forum.nginx.org> References: <03e36475552545c649faa30f6e33b5c5.NginxMailingListEnglish@forum.nginx.org> Message-ID: Your configuration appears very strange at first glance: you are using some proxy_* directives while issuing a uwsgi_pass. Try cleaning it up by removing extra directive on first hand and try to make it workd with the most minimalist configuration possible. Anyhow, check your backend listens on the 127.0.0.1 interface, and on port 8001. You can set up a tcpdump utility making the gateway between front and back ends to check the connection proceeds the way you expect it does. --- *B. R.* On Sun, May 10, 2015 at 9:01 AM, itpp2012 wrote: > Try > location /oauth { > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258776,258778#msg-258778 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sjums07 at gmail.com Mon May 11 06:49:41 2015 From: sjums07 at gmail.com (Nikolaj Schomacker) Date: Mon, 11 May 2015 06:49:41 +0000 Subject: static file performance "staircase" pattern In-Reply-To: <554E50C0.3090600@slcoding.com> References: <554E45D0.90106@conversis.de> <554E50C0.3090600@slcoding.com> Message-ID: And a last thing you should be aware of if it applies to your case is SEO. Using multiple domains for images is perfectly fine in the eyes of Google, but be sure the same images is always served from the same subdomain. Also be sure to have all of the subdomains added to the same webmasters account as your main site. ~ Nikolaj On Sat, May 9, 2015 at 8:24 PM Lucas Rolff wrote: > What you should do, to increase the concurrent amount of requests, is to > use domain-sharding, since as Paul mentioned, browsers have between 4 and 8 > (actually) simultaneous connections per domain, meaning if you introduce > static1,2,3.domain.com, you will increase your concurrency. > > But at same time you also need to be aware, that this can have a negative > effect on your performance if you put too many domains, there's no golden > rule on how many you need, it's all a site by site case, and it differs. > Also take into account your end-users connection can be limiting things > heavily as well if you put too much concurrency (thus negative effect) - if > you have a high number of concurrent requests being processed it will slow > down the download time of each, meaning the perceived performance that the > user see might get worse because it feels like the page is slower. > > - Lucas > > Paul Smith > 9 May 2015 20:03 > > On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn > > I am not an expert but I believe that most browsers only make between > 4 to 6 simultaneous connections to a domain. So the first round of > requests are sent and the response received and then the second round > go out and are received back and so forth. 
Doing a search for > something like "max downloads per domain" may bring you better > information. > > Paul > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > Dennis Jacobfeuerborn > 9 May 2015 19:37 > > Hi, > I'm trying to find out how to effectively deliver pages with lots of > images on a page. Attached you see opening a static html page that > contains lots of img tags pointing to static images. Please also note > that all images are cached in the browser (hence the 304 response) so no > actual data needs to be downloaded. > All of this is happening on a CentOS 7 system using nginx 1.6. > > The question I have is why is it that the responses get increasingly > longer? There is nothing else happening on that server and I also tried > various optimizations like keepalive, multi_accept, epoll, > open_file_cache, etc. but nothing seems to get rid of that "staircase" > pattern in the image. > > Does anybody have an idea what the cause is for this behavior and how to > improve it? > > Regards, > Dennis > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: compose-unknown-contact.jpg Type: image/jpeg Size: 770 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: compose-unknown-contact.jpg Type: image/jpeg Size: 770 bytes Desc: not available URL: From lucas at slcoding.com Mon May 11 07:39:56 2015 From: lucas at slcoding.com (Lucas Rolff) Date: Mon, 11 May 2015 09:39:56 +0200 Subject: static file performance "staircase" pattern In-Reply-To: References: <554E45D0.90106@conversis.de> <554E50C0.3090600@slcoding.com> Message-ID: <55505CCC.1090902@slcoding.com> It's not really required to serve it from the same sub-domain always. The most optimal solution would be to add the canonical link header when serving using domain sharding. But from a caching perspective, keeping the sharding consistent is indeed beneficial (you can use crc32 on the image name e.g. this will always return the same hash, and based on this do the domain sharding), but from a SEO perspective, it doesn't matter if you just do it right with canonical link. > Nikolaj Schomacker > 11 May 2015 08:49 > And a last thing you should be aware of if it applies to your case is > SEO. Using multiple domains for images is perfectly fine in the eyes > of Google, but be sure the same images is always served from the same > subdomain. Also be sure to have all of the subdomains added to the > same webmasters account as your main site. > > ~ Nikolaj > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Lucas Rolff > 9 May 2015 20:24 > What you should do, to increase the concurrent amount of requests, is > to use domain-sharding, since as Paul mentioned, browsers have between > 4 and 8 (actually) simultaneous connections per domain, meaning if you > introduce static1,2,3.domain.com, you will increase your concurrency. > > But at same time you also need to be aware, that this can have a > negative effect on your performance if you put too many domains, > there's no golden rule on how many you need, it's all a site by site > case, and it differs. 
> Also take into account your end-users connection can be limiting > things heavily as well if you put too much concurrency (thus negative > effect) - if you have a high number of concurrent requests being > processed it will slow down the download time of each, meaning the > perceived performance that the user see might get worse because it > feels like the page is slower. > > - Lucas > > Paul Smith > 9 May 2015 20:03 > On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn > > I am not an expert but I believe that most browsers only make between > 4 to 6 simultaneous connections to a domain. So the first round of > requests are sent and the response received and then the second round > go out and are received back and so forth. Doing a search for > something like "max downloads per domain" may bring you better > information. > > Paul > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Dennis Jacobfeuerborn > 9 May 2015 19:37 > Hi, > I'm trying to find out how to effectively deliver pages with lots of > images on a page. Attached you see opening a static html page that > contains lots of img tags pointing to static images. Please also note > that all images are cached in the browser (hence the 304 response) so no > actual data needs to be downloaded. > All of this is happening on a CentOS 7 system using nginx 1.6. > > The question I have is why is it that the responses get increasingly > longer? There is nothing else happening on that server and I also tried > various optimizations like keepalive, multi_accept, epoll, > open_file_cache, etc. but nothing seems to get rid of that "staircase" > pattern in the image. > > Does anybody have an idea what the cause is for this behavior and how to > improve it? 
> > Regards, > Dennis > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: postbox-contact.jpg Type: image/jpeg Size: 1295 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: postbox-contact.jpg Type: image/jpeg Size: 1405 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: compose-unknown-contact.jpg Type: image/jpeg Size: 770 bytes Desc: not available URL: From sjums07 at gmail.com Mon May 11 08:58:17 2015 From: sjums07 at gmail.com (Nikolaj Schomacker) Date: Mon, 11 May 2015 08:58:17 +0000 Subject: static file performance "staircase" pattern In-Reply-To: <55505CCC.1090902@slcoding.com> References: <554E45D0.90106@conversis.de> <554E50C0.3090600@slcoding.com> <55505CCC.1090902@slcoding.com> Message-ID: It's not a requirement in any way and your SEO might turn out just fine using different subdomains. My suggestion is not just made up from my imagination, but advice from a Google employee since this have been a real problem for us. By serving the same image from multiple subdomains, from the same page the image can end up in a kind of "limbo" where google can't decide which image to use. Also, as I understand it, canonical headers are currently only supported for web search and not image search from reading this doc https://support.google.com/webmasters/answer/139066?hl=en . Again, this might not even be relevant for your case, Dennis :) ~ Nikolaj On Mon, May 11, 2015 at 9:40 AM Lucas Rolff wrote: > It's not really required to serve it from the same sub-domain always. > The most optimal solution would be to add the canonical link header when > serving using domain sharding. 
> > But from a caching perspective, keeping the sharding consistent is indeed > beneficial (you can use crc32 on the image name e.g. this will always > return the same hash, and based on this do the domain sharding), but from a > SEO perspective, it doesn't matter if you just do it right with canonical > link. > > Nikolaj Schomacker > 11 May 2015 08:49 > > And a last thing you should be aware of if it applies to your case is SEO. > Using multiple domains for images is perfectly fine in the eyes of Google, > but be sure the same images is always served from the same subdomain. Also > be sure to have all of the subdomains added to the same webmasters account > as your main site. > > ~ Nikolaj > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > Lucas Rolff > 9 May 2015 20:24 > > What you should do, to increase the concurrent amount of requests, is to > use domain-sharding, since as Paul mentioned, browsers have between 4 and 8 > (actually) simultaneous connections per domain, meaning if you introduce > static1,2,3.domain.com, you will increase your concurrency. > > But at same time you also need to be aware, that this can have a negative > effect on your performance if you put too many domains, there's no golden > rule on how many you need, it's all a site by site case, and it differs. > Also take into account your end-users connection can be limiting things > heavily as well if you put too much concurrency (thus negative effect) - if > you have a high number of concurrent requests being processed it will slow > down the download time of each, meaning the perceived performance that the > user see might get worse because it feels like the page is slower. > > - Lucas > > Paul Smith > 9 May 2015 20:03 > On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn > > I am not an expert but I believe that most browsers only make between > 4 to 6 simultaneous connections to a domain. 
So the first round of > requests are sent and the response received and then the second round > go out and are received back and so forth. Doing a search for > something like "max downloads per domain" may bring you better > information. > > Paul > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > Dennis Jacobfeuerborn > 9 May 2015 19:37 > Hi, > I'm trying to find out how to effectively deliver pages with lots of > images on a page. Attached you see opening a static html page that > contains lots of img tags pointing to static images. Please also note > that all images are cached in the browser (hence the 304 response) so no > actual data needs to be downloaded. > All of this is happening on a CentOS 7 system using nginx 1.6. > > The question I have is why is it that the responses get increasingly > longer? There is nothing else happening on that server and I also tried > various optimizations like keepalive, multi_accept, epoll, > open_file_cache, etc. but nothing seems to get rid of that "staircase" > pattern in the image. > > Does anybody have an idea what the cause is for this behavior and how to > improve it? > > Regards, > Dennis > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL:
From reallfqq-nginx at yahoo.fr Mon May 11 10:45:54 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 11 May 2015 12:45:54 +0200 Subject: static file performance "staircase" pattern In-Reply-To: References: <554E45D0.90106@conversis.de> <554E50C0.3090600@slcoding.com> <55505CCC.1090902@slcoding.com> Message-ID: Content should be accessible from one (and one only) location at any single time, so be careful about content overlap between subdomains. Not even talking about SEO, it is just pragmatically logical: no single URI should be served through different URLs if you want your repository to be seen as 'clean' IMHO. I like octopuses, as long as the object of the study purely remains the animal form. From an SEO perspective, the usual main problem with subdomains is score dilution: content served from subdomains does not benefit from the ranking score of other domains (including the parent). That is however valid for pages, not resources (I suppose?). Anyhow, this 'problem' (optimization? constraint? decision?) lies client-side, not server-side, so this thread on the nginx ML is maybe not the best location to debate it. --- *B.
R.* On Mon, May 11, 2015 at 10:58 AM, Nikolaj Schomacker wrote: > It's not a requirement in any way and your SEO might turn out just fine > using different subdomains. My suggestion is not just made up from my > imagination, but advice from a Google employee since this have been a real > problem for us. By serving the same image from multiple subdomains, from > the same page the image can end up in a kind of "limbo" where google can't > decide which image to use. > > Also, as I understand it, canonical headers are currently only supported > for web search and not image search from reading this doc > https://support.google.com/webmasters/answer/139066?hl=en . > > Again, this might not even be relevant for your case, Dennis :) > > ~ Nikolaj > > On Mon, May 11, 2015 at 9:40 AM Lucas Rolff wrote: > >> It's not really required to serve it from the same sub-domain always. >> The most optimal solution would be to add the canonical link header when >> serving using domain sharding. >> >> But from a caching perspective, keeping the sharding consistent is indeed >> beneficial (you can use crc32 on the image name e.g. this will always >> return the same hash, and based on this do the domain sharding), but from a >> SEO perspective, it doesn't matter if you just do it right with canonical >> link. >> >> Nikolaj Schomacker >> 11 May 2015 08:49 >> >> And a last thing you should be aware of if it applies to your case is >> SEO. Using multiple domains for images is perfectly fine in the eyes of >> Google, but be sure the same images is always served from the same >> subdomain. Also be sure to have all of the subdomains added to the same >> webmasters account as your main site. 
>> >> ~ Nikolaj >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> Lucas Rolff >> 9 May 2015 20:24 >> >> What you should do, to increase the concurrent amount of requests, is to >> use domain-sharding, since as Paul mentioned, browsers have between 4 and 8 >> (actually) simultaneous connections per domain, meaning if you introduce >> static1,2,3.domain.com, you will increase your concurrency. >> >> But at same time you also need to be aware, that this can have a negative >> effect on your performance if you put too many domains, there's no golden >> rule on how many you need, it's all a site by site case, and it differs. >> Also take into account your end-users connection can be limiting things >> heavily as well if you put too much concurrency (thus negative effect) - if >> you have a high number of concurrent requests being processed it will slow >> down the download time of each, meaning the perceived performance that the >> user see might get worse because it feels like the page is slower. >> >> - Lucas >> >> Paul Smith >> 9 May 2015 20:03 >> On Sat, May 9, 2015 at 11:37 AM, Dennis Jacobfeuerborn >> >> I am not an expert but I believe that most browsers only make between >> 4 to 6 simultaneous connections to a domain. So the first round of >> requests are sent and the response received and then the second round >> go out and are received back and so forth. Doing a search for >> something like "max downloads per domain" may bring you better >> information. >> >> Paul >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> Dennis Jacobfeuerborn >> 9 May 2015 19:37 >> Hi, >> I'm trying to find out how to effectively deliver pages with lots of >> images on a page. Attached you see opening a static html page that >> contains lots of img tags pointing to static images. 
Please also note >> that all images are cached in the browser (hence the 304 response) so no >> actual data needs to be downloaded. >> All of this is happening on a CentOS 7 system using nginx 1.6. >> >> The question I have is why is it that the responses get increasingly >> longer? There is nothing else happening on that server and I also tried >> various optimizations like keepalive, multi_accept, epoll, >> open_file_cache, etc. but nothing seems to get rid of that "staircase" >> pattern in the image. >> >> Does anybody have an idea what the cause is for this behavior and how to >> improve it? >> >> Regards, >> Dennis >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 11 11:11:34 2015 From: nginx-forum at nginx.us (braindeaf) Date: Mon, 11 May 2015 07:11:34 -0400 Subject: Wildcard SSL and Wildcard hostnames Message-ID: Hey there, I'm struggling to find the correct answer and unsure if there even is one.
We have a domain, say example.co, and we've purchased a wildcard SSL certificate for it. We want to be able to provide what amounts to the following, with minimal configuration. https://example.co https://blah.example.co https://somerandomsubdomain.example.co all pointing at the same server, so something like server { listen 443; server_name example.co *.example.co; ssl on; ssl_protocols .....; ssl_ciphers .....; ssl_prefer_server_ciphers on; ssl_certificate /data/nginx/ssl/example.co.crt; ssl_certificate_key /data/nginx/ssl/example.co.key; } This doesn't appear to work as I would expect it to. Would we need to set up a different server for each subdomain explicitly, or could we get away with one config for example.co and another for *.example.co? I've seen examples of using the same ssl key for different virtual servers with different hostnames but not pointing to the same one. Anyone else have any joy with a similar config? -- Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258792,258792#msg-258792 From reallfqq-nginx at yahoo.fr Mon May 11 11:32:13 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 11 May 2015 13:32:13 +0200 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: References: Message-ID: What did you expect? What did you get? What did you think you configured? --- *B. R.* On Mon, May 11, 2015 at 1:11 PM, braindeaf wrote: > Hey there, I'm struggling to find the correct answer and unsure if there > even is one. > > We have a domain, say example.co, and we've purchased a wildcard SSL > certificate for it. We want to be able to provide what amounts to the > following, with minimal configuration.
> > https://example.co > https://blah.example.co > https://somerandomsubdomain.example.co > > all pointing at the same server, so something like > > server { > listen 443; > server_name example.co *.example.co; > > ssl on; > ssl_protocols .....; > ssl_ciphers .....; > ssl_prefer_server_ciphers on; > ssl_certificate /data/nginx/ssl/example.co.crt; > ssl_certificate_key /data/nginx/ssl/example.co.key; > } > > This doesn't appear to work as I would expect it to. Would we need to set > up > a different server for each subdomain explicitly, or could we get away with > one config for example.co and another for *.example.co? I've seen examples > of using the same ssl key for different virtual servers with different > hostnames but not pointing to the same one. > > Anyone else have any joy with a similar config? > > -- > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258792,258792#msg-258792 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Mon May 11 11:37:27 2015 From: r at roze.lv (Reinis Rozitis) Date: Mon, 11 May 2015 14:37:27 +0300 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: References: Message-ID: <2249588665C546B5862A539607548807@MasterPC> > This doesn't appear to work as I would expect it to. Would we need to set > up a different server for each subdomain explicitly, or could we get away with one config for example.co and another for *.example.co? Doesn't work in what way? (Does nginx or the browser complain/what's the error?) Such a configuration is perfectly fine, unless you already have a server {} block for each subdomain, in which case you need to repeat the ssl config in each one. The other caveat I can think of would be if the wildcard *.example.co certificate doesn't contain a Subject Alternative Name for 'example.co' (the exact domain without prefix).
It depends on the CA who issued the certificate - usually they include the bare domain too, but I have also seen different cases. rr From reallfqq-nginx at yahoo.fr Mon May 11 12:15:09 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 11 May 2015 14:15:09 +0200 Subject: Official packages v1.8.0 do NOT include the GeoIP module Message-ID: Hello, We are facing quite some trouble with the official nginx packages: their nginx -V does not show any sign of the GeoIP module. Confirmed for: - Debian package - CentOS 6 package As I have not read any deprecation message anywhere, and since its presence is confirmed in earlier versions, why is it that way? Mistake? I was unable to report a bug in http://trac.nginx.org, as I used to connect to it through my Google account: 'OpenID 2.0 for Google Accounts has gone away' --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 11 13:51:29 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 11 May 2015 09:51:29 -0400 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: References: Message-ID: braindeaf Wrote: ------------------------------------------------------- > server { > listen 443; > server_name .example.co; Would be a catch all. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258792,258796#msg-258796 From nginx-forum at nginx.us Mon May 11 13:54:31 2015 From: nginx-forum at nginx.us (braindeaf) Date: Mon, 11 May 2015 09:54:31 -0400 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <2249588665C546B5862A539607548807@MasterPC> References: <2249588665C546B5862A539607548807@MasterPC> Message-ID: <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> Sorry to be vague. http://example.co - works fine and as expected. http://blah.example.co - returns curl: (60) SSL certificate problem: Invalid certificate chain This is actually picking up the SSL cert for the default site on the server.
So the server_name is picking up example.co but *.example.co seems to be ignored. Interestingly, the wildcard SSL key is the most basic RapidSSL Wildcard Certificate, so perhaps going down the Subject Alternative Name route might be worthwhile, or worth talking to RapidSSL Support about, because we also need *.staging.example.co to work for our staging environment too, which might kill two birds with one stone. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258792,258797#msg-258797 From reallfqq-nginx at yahoo.fr Mon May 11 13:57:59 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 11 May 2015 15:57:59 +0200 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> References: <2249588665C546B5862A539607548807@MasterPC> <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> Message-ID: itpp2012 provided you with the answer, also to be found in the server_name directive documentation. --- *B. R.* On Mon, May 11, 2015 at 3:54 PM, braindeaf wrote: > Sorry to be vague. > > http://example.co - works fine and as expected. > http://blah.example.co - returns curl: (60) SSL certificate problem: > Invalid > certificate chain > > This is actually picking up the SSL cert for the default site on the > server. > So the server_name is picking up example.co but *.example.co seems to be > ignored. > > Interestingly, the wildcard SSL key is the most basic RapidSSL Wildcard > Certificate, so perhaps going down the Subject Alternative Name route might > be worthwhile, or worth talking to RapidSSL Support about, because we also > need *.staging.example.co to work for our staging environment too, which > might kill two birds with one stone.
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258792,258797#msg-258797 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 11 14:15:48 2015 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 11 May 2015 10:15:48 -0400 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> References: <2249588665C546B5862A539607548807@MasterPC> <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8d9cfc7a7c7fd4ba95f1a7f6ad04e5c8.NginxMailingListEnglish@forum.nginx.org> braindeaf Wrote: ------------------------------------------------------- > http://blah.example.co - returns curl: (60) SSL certificate problem: > Invalid certificate chain Forgot one thing: you also need a wildcard DNS entry. DNS: so it arrives at your front door. Nginx.conf (server_name .example.co;): so you can deal with it. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258792,258799#msg-258799 From nginx-forum at nginx.us Mon May 11 14:26:10 2015 From: nginx-forum at nginx.us (braindeaf) Date: Mon, 11 May 2015 10:26:10 -0400 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <8d9cfc7a7c7fd4ba95f1a7f6ad04e5c8.NginxMailingListEnglish@forum.nginx.org> References: <2249588665C546B5862A539607548807@MasterPC> <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> <8d9cfc7a7c7fd4ba95f1a7f6ad04e5c8.NginxMailingListEnglish@forum.nginx.org> Message-ID: Thanks for the tip. I have replaced server_name example.co *.example.co with server_name .example.co. While that is definitely more concise, it didn't solve the problem. http://example.co - seems fine.
http://test.example.co - curl: (51) SSL peer certificate or SSH remote key was not OK However, this is definitely a different error message so it doesn't appear to be falling back on the default SSL certificate for the server, which is a step forward. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258792,258800#msg-258800 From nginx-forum at nginx.us Mon May 11 14:31:05 2015 From: nginx-forum at nginx.us (bughunter) Date: Mon, 11 May 2015 10:31:05 -0400 Subject: How to enable OCSP stapling when default server is self-signed? In-Reply-To: <749ae1b1b8b54e9977a7fca09f6be8c9.NginxMailingListEnglish@forum.nginx.org> References: <20150408152751.GI88631@mdounin.ru> <749ae1b1b8b54e9977a7fca09f6be8c9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <01dc7ab86b5b22e6d4e877e0278df81d.NginxMailingListEnglish@forum.nginx.org> 173279834462 Wrote: ------------------------------------------------------- > > Note that this isn't really indicate anything: there are two forms > of OCSP requests, POST and GET. And Firefox uses POST, while nginx > uses GET. Given the fact that the responder was completely broken just > a few days ago - it's quite possible that it's still broken for GETs > in some cases. > > To comply with local security policy, we disabled POST globally on all > public-facing servers. > This has the advantage of killing web 2.0 and all of its > vulnerabilities with one simple rule, emphasis on *killing web 2.0*. > Yes, the sites are read-only, and we just love it that way. > > For each vhost, > "ssl_certificate_key" includes the vhost's private key, > "ssl_certificate" includes the vhosts's public key (leaf) AND the > intermediate key of the Issuer, > "ssl_trusted_certificate" includes the certificate chain in full (leaf > + intermediate + root CA), > all in PEM format. 
> > The openssl test works as expected: > > vhost=""; echo Q | openssl s_client -CAfile > /path/to/your/local/trust/store/ca-bundle.pem -tls1 -tlsextdebug > -status -connect $vhost:443 -servername $vhost 2>&1 | less > > There are two problems. > > problem 1 > ------------- > > nginx's "ssl_certificate" (note the singular) is truly a bundle of the > certificate and the intermediate. > In fact, if we remove the intermediate, we break the chain. > > The description for "ssl_certificate" is also misleading. > > "Specifies a file with the certificate in the PEM format for the given > virtual server. If intermediate certificates should be specified in > addition to a primary certificate, they should be specified in the > same file in the following order: the primary certificate comes first, > then the intermediate certificates. A secret key in the PEM format may > be placed in the same file. " > http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_certificate > > > Although the above sentence "If intermediate certificates should be > specified" suggests that one may omit the intermediate certificate, in > reality you can only do this if you are the CA. I do not wish to sound > opinionated here, because I am making an effort to stick to the facts: > if we remove the intermediate, we do break the chain and the openssl > test complains loudly. > > Therefore, if your own facts correspond to the above, then the > solution is to edit nginx's source to limit "ssl_certificate" to the > leaf's public key only, and correct the description accordingly. The > intermediate(s) can be bundled in a separate file. > > It would be easier on the eyes to re-write the keywords as well: > > ssl_certificate_key -----> private_certificate > ssl_certificate 1/2 ------> public_certificate > ssl_certificate 2/2 -------> public_intermediate_certificates > ssl_trusted_certificate -> public_ca_certificate > > In so doing, the configuration would finally be unambiguous. 
> > problem 2 > -------------- > > If it is true that FF uses POST to *read*, by default, then this > explains the original problem with OCSP, and the fact that nginx is > well configured and openssl and other browsers do work as expected. > Google and other search engines show that Firefox has been affected by > this OCSP problem for a long time. Perhaps they could start using GET > like everybody else? Umm...please don't hijack threads. Your issue(s) are not related to the main thread and are even partially off-topic for nginx. Hijacking threads is distracting for those who run threaded clients. My issue regarding OCSP stapling still remains unresolved. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,257833,258801#msg-258801 From r at roze.lv Mon May 11 14:57:04 2015 From: r at roze.lv (Reinis Rozitis) Date: Mon, 11 May 2015 17:57:04 +0300 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> References: <2249588665C546B5862A539607548807@MasterPC> <8934241124073f9a82a03220439de750.NginxMailingListEnglish@forum.nginx.org> Message-ID: <4D32498CEEFA4BD588D6841DD1514C90@MasterPC> > http://example.co - works fine and as expected. > http://blah.example.co - returns curl: (60) SSL certificate problem: > Invalid certificate chain > This is actually picking up the SSL cert for the default site on the > server. > So the server_name is picking up example.co but *.example.co seems to be > ignored. So are there 2 certificates? If so you need a different server block for each - one for the exact domain and one for the wildcard (or use the wildcard for both). 
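For illustration, the single-block variant Reinis mentions could look like this (a sketch with placeholder domain and paths, assuming the wildcard certificate also covers the bare domain):

```nginx
# One server block for the bare domain and all subdomains,
# sharing a single wildcard certificate (paths are placeholders).
server {
    listen 443 ssl;
    server_name .example.co;  # equivalent to: example.co *.example.co

    ssl_certificate     /data/nginx/ssl/example.co.crt;  # leaf + intermediates
    ssl_certificate_key /data/nginx/ssl/example.co.key;
}
```

If the certificate lacks a Subject Alternative Name for the bare domain, example.co would still need its own certificate and server block.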
Besides server_name .example.co and server_name example.co, *.example.co are generally the same (the latter actually being recommended: http://nginx.org/en/docs/http/server_names.html#optimization ). The only difference would be if you have multiple server {} definitions (with the same domains), because nginx has an order of precedence in which it decides which virtual server will be chosen (wildcard names such as .example.co are checked after exact names, and regular expressions last). In general you should check (for example with 'openssl x509 -in /path/example.co.crt -noout -text | grep DNS') and see if your nginx server{} block configuration actually matches the certificates (and keys) you point to. It makes it a bit hard to guess without seeing the whole config. One note when testing with curl - on older systems the root certificates are not always updated, and if the CA has _recently_ changed its intermediate certificates (iirc for example GoDaddy) curl might report a problem. Also be sure that the intermediate certificates are included in the certificate itself ( http://nginx.org/en/docs/http/configuring_https_servers.html#chains ) > we also need *.staging.example.co to work for our staging environment too > which might kill two birds with one stone. A standard wildcard certificate for *.example.co also covers this, you don't need additional certificates. p.s. A good/simple way imo (if the server has public access) to check for all kinds of issues/ssl chains etc is to use https://www.ssllabs.com/ssltest/ (check the "do not show" option if you want hidden results). rr From nginx-forum at nginx.us Mon May 11 14:59:46 2015 From: nginx-forum at nginx.us (jwroblewski) Date: Mon, 11 May 2015 10:59:46 -0400 Subject: Possible limitation of ngx_http_limit_req_module Message-ID: <648e204efe8b0603473592d0a64904ec.NginxMailingListEnglish@forum.nginx.org> Hi, I'm observing an inconsistent behavior of ngx_http_limit_req_module in nginx 1.7.12. The relevant excerpts from my config: http { ...
# A fixed string used as a key, to make all requests fall into the same zone limit_req_zone test_zone zone=test_zone:1m rate=5r/s; ... server { ... location /limit { root /test; limit_req zone=test_zone nodelay; } ... } } I use wrk to hammer the server for 5 secs: $ ./wrk -t 100 -c 100 -d 5 http://127.0.0.1/limit/test Running 5s test @ http://127.0.0.1/limit/test 100 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 2.82ms 2.96ms 15.12ms 88.92% Req/Sec 469.03 190.97 0.89k 62.05% 221531 requests in 5.00s, 81.96MB read Non-2xx or 3xx responses: 221506 Requests/sec: 44344.69 Transfer/sec: 16.41MB So, out of 221531 sent requests, 221506 came back with an error. This gives (221531 - 221506) = 25 successful requests in 5 secs, so 5r/s, just as expected. So far so good. Now, what happens if I set rate=5000r/s: $ ./wrk -t 100 -c 100 -d 5 http://127.0.0.1/limit/test Running 5s test @ http://127.0.0.1/limit/test 100 threads and 100 connections Thread Stats Avg Stdev Max +/- Stdev Latency 3.64ms 5.70ms 36.58ms 87.43% Req/Sec 443.50 191.55 0.89k 65.04% 210117 requests in 4.99s, 77.38MB read Non-2xx or 3xx responses: 207671 Requests/sec: 42070.61 Transfer/sec: 15.49MB This time it is (210117 - 207671) = 2446 successful requests in 5 secs, which means 490r/s. Ten times lower than expected. I gathered some more figures, showing the number of 200 responses for growing values of the "rate" parameter. rate=***r/s in zone cfg -- number of 200 responses 100 -- 87 200 -- 149 500 -- 344 1000 -- 452 10000 -- 468 100000 -- 466 As you can see, the server keeps returning a pretty much constant number of 200 responses once the "rate" parameter has surpassed 1000. I had a glimpse into the module's code, and this part caught my eye: https://github.com/nginx/nginx/blob/nginx-1.7/src/http/modules/ngx_http_limit_req_module.c#L402-L414.
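The effect of those lines can be illustrated with a small simulation (my own simplified model of the leaky-bucket arithmetic for illustration, not the actual nginx source; excess is tracked in milli-requests and state is only updated on accepted requests):

```python
def simulate(arrivals_ms, rate_rps, burst=0):
    """Count accepted requests for arrivals at millisecond timestamps,
    mimicking ngx_http_limit_req's excess computation."""
    rate = rate_rps * 1000        # internal scale: milli-requests/second
    excess, last, accepted = 0, None, 0
    for now in arrivals_ms:
        if last is None:          # first request creates a fresh entry
            last, accepted = now, 1
            continue
        ms = now - last
        # Same-millisecond arrivals: ms == 0, nothing drains, so each
        # extra request adds a full 1000 and is rejected when burst is 0.
        e = excess - rate * ms // 1000 + 1000
        e = max(e, 0)
        if e > burst * 1000:
            continue              # rejected (would be a 503)
        excess, last, accepted = e, now, accepted + 1
    return accepted

# 50 requests per millisecond for 100 ms, zone rate 5000r/s:
arrivals = [t for t in range(100) for _ in range(50)]
print(simulate(arrivals, 5000))   # 100 -- only one request per millisecond
```

Under this model, with millisecond resolution and no burst, anything above roughly 1000r/s collapses to about one accepted request per millisecond, matching the plateau in the table above.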
Basically, if consecutive requests hit the server in the same millisecond, the "ms = (ngx_msec_int_t) (now - lr->last)" part evaluates to 0, which sets "excess" to 1000, which is very likely to be greater than a "burst" value, which results in rejecting the request. This could also mean, that only the very first request hitting the server in given millisecond would be handled, which seems to be in line with the wrk test results, I've presented above. Please let me know if this makes sense to you! Best regards, Jakub Wroblewski Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258803,258803#msg-258803 From nginx-forum at nginx.us Mon May 11 15:23:39 2015 From: nginx-forum at nginx.us (braindeaf) Date: Mon, 11 May 2015 11:23:39 -0400 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <4D32498CEEFA4BD588D6841DD1514C90@MasterPC> References: <4D32498CEEFA4BD588D6841DD1514C90@MasterPC> Message-ID: <1ce4860a22b8802388b622c804d694c3.NginxMailingListEnglish@forum.nginx.org> The SSL Checking service did indeed point out the error. I will admit to my own stupidity on this one. We're using Elastic Load Balancing on *.example.co ELB had only our other SSL Cert configured and not our new one. Darn it. We don't use ELB for example.co because you can't CNAME the root domain so that hit our server directly and of course with the tweaked config worked fine. Now I've added our new SSL cert to ELB, both urls work fine. Thanks very much for all of your assistance working through this, although ultimately it wasn't all Nginx. Cheers. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258792,258804#msg-258804 From nginx-forum at nginx.us Mon May 11 15:38:01 2015 From: nginx-forum at nginx.us (jwroblewski) Date: Mon, 11 May 2015 11:38:01 -0400 Subject: nginx_upstream_check_module doesn't work with nginx > 1.7.6 Message-ID: Hi, I'm not sure if this is the right place to report this issue, but perhaps someone has already run across it and has some insights... 
Basically, the "nginx_upstream_check_module" (versions 0.1.9 and 0.3.0) doesn't seem to be working with nginx 1.7 versions greater than 1.7.6. Upstreams don't get pinged for status, and calling the check_status directive results in the following error: " http upstream check module can not find any check server, make sure you've added the check servers " It looks like the module is not initialized correctly, e.g. it does not receive a list of upstream servers. I also opened a github issue for the module itself -- https://github.com/yaoweibin/nginx_upstream_check_module/issues/58. Best regards, Jakub Wroblewski Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258805,258805#msg-258805 From r at roze.lv Mon May 11 15:59:34 2015 From: r at roze.lv (Reinis Rozitis) Date: Mon, 11 May 2015 18:59:34 +0300 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <1ce4860a22b8802388b622c804d694c3.NginxMailingListEnglish@forum.nginx.org> References: <4D32498CEEFA4BD588D6841DD1514C90@MasterPC> <1ce4860a22b8802388b622c804d694c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: > ELB had only our other SSL Cert configured and not our new one. Darn it. > We > don't use ELB for example.co because you can't CNAME the root domain so > that > hit our server directly and of course with the tweaked config worked fine. It's offtopic, but technically you can - life always finds a way: Cloudflare for example offers a workaround called "CNAME flattening" https://blog.cloudflare.com/introducing-cname-flattening-rfc-compliant-cnames-at-a-domains-root/ with a use case exactly for ELB. rr From reallfqq-nginx at yahoo.fr Tue May 12 08:17:14 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 12 May 2015 10:17:14 +0200 Subject: Official packages v1.8.0 do NOT include the GeoIP module In-Reply-To: References: Message-ID: Has anyone else observed the same? Is there any reason for that? --- *B. R.* On Mon, May 11, 2015 at 2:15 PM, B.R.
wrote: > Hello, > > We are facing quite some trouble with the official nginx packages: > their nginx -V does not show any sign of the GeoIP module. > > Confirmed for: > - Debian package > - CentOS 6 package > > As I have not read any deprecation message anywhere, and since its > presence is confirmed in earlier versions, why is it that way? Mistake? > > ?I was unable to report a bug in http://trac.nginx.org? as I used to > connect to it through my Google account: 'OpenID 2.0 for Google Accounts > has gone away' > --- > *B. R.* > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nhadie at gmail.com Tue May 12 09:11:59 2015 From: nhadie at gmail.com (ron ramos) Date: Tue, 12 May 2015 17:11:59 +0800 Subject: Wildcard SSL and Wildcard hostnames In-Reply-To: <1ce4860a22b8802388b622c804d694c3.NginxMailingListEnglish@forum.nginx.org> References: <4D32498CEEFA4BD588D6841DD1514C90@MasterPC> <1ce4860a22b8802388b622c804d694c3.NginxMailingListEnglish@forum.nginx.org> Message-ID: hi if you are using amazon you can try their DNS service Route53. you can point the root domain to an ELB via ALIAS setting. regards, ron On Mon, May 11, 2015 at 11:23 PM, braindeaf wrote: > The SSL Checking service did indeed point out the error. I will admit to my > own stupidity on this one. We're using Elastic Load Balancing on > > *.example.co > > ELB had only our other SSL Cert configured and not our new one. Darn it. We > don't use ELB for example.co because you can't CNAME the root domain so > that > hit our server directly and of course with the tweaked config worked fine. > > Now I've added our new SSL cert to ELB, both urls work fine. Thanks very > much for all of your assistance working through this, although ultimately > it > wasn't all Nginx. > > Cheers. 
> > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,258792,258804#msg-258804 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue May 12 09:22:53 2015 From: nginx-forum at nginx.us (ranjuj) Date: Tue, 12 May 2015 05:22:53 -0400 Subject: 24: Too many Open connections Message-ID: <1be0bcd4aaa35535a198cc4c829b93a0.NginxMailingListEnglish@forum.nginx.org> Hello, We have 3 nginx web servers behind an nginx proxy and we were seeing the "24: Too many Open connections" error in the nginx log of one server, and found that some of the users were getting 504 timed-out errors. We have observed that the problem was only on one of the web-servers. We restarted php_cgi & nginx and the problem got solved. Can someone help me to understand what caused this issue? As I mentioned, we have 3 web-servers and all three have the same setup and configuration. It couldn't have been caused by heavy traffic, as it was a non-peak hour and the other servers were not affected either. nginx-1.2.0-1.el5 OS - CentOS release 5.8 nginx.conf worker_processes 4; worker_connections 1000000; sysctl.conf fs.file-max = 70000 limits.conf nginx soft nofile 1000000 nginx hard nofile 1000000 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258812,258812#msg-258812 From sb at nginx.com Tue May 12 10:44:56 2015 From: sb at nginx.com (Sergey Budnevitch) Date: Tue, 12 May 2015 13:44:56 +0300 Subject: Official packages v1.8.0 do NOT include the GeoIP module In-Reply-To: References: Message-ID: <89C5706A-38D7-4005-98C7-F8630D72D617@nginx.com> > On 11 May 2015, at 15:15, B.R. wrote: > > Hello, > > We are facing quite some trouble with the official nginx packages: > their nginx -V does not show any sign of the GeoIP module. The package never had the geoip module, because it requires an extra library (libgeoip).
Current policy is to include all modules without extra dependencies, so nginx from the packages depends only on pcre & openssl. > > Confirmed for: > - Debian package > - CentOS 6 package > > As I have not read any deprecation message anywhere, and since its presence is confirmed in earlier versions, why is it that way? Mistake? > > I was unable to report a bug in http://trac.nginx.org as I used to connect to it through my Google account: 'OpenID 2.0 for Google Accounts has gone away' > --- > B. R. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Tue May 12 11:20:51 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 12 May 2015 13:20:51 +0200 Subject: Official packages v1.8.0 do NOT include the GeoIP module In-Reply-To: <89C5706A-38D7-4005-98C7-F8630D72D617@nginx.com> References: <89C5706A-38D7-4005-98C7-F8630D72D617@nginx.com> Message-ID: Thanks Sergey, I naively thought every module documented on the official website was included. There was/is no clear documentation about what is (not) included in the official binary, helping people to decide whether it is feasible to switch from custom builds to official ones. Would it be possible to publish your short and efficient answer somewhere on the download page for official packages? --- *B. R.* On Tue, May 12, 2015 at 12:44 PM, Sergey Budnevitch wrote: > > On 11 May 2015, at 15:15, B.R. wrote: > > Hello, > > We are facing quite some trouble with the official nginx packages: > their nginx -V does not show any sign of the GeoIP module. > > > Package never has geoip module, because it requires extra library > (libgeoip). > Current policy is to include all module without extra dependencies, so > nginx > from packages depends on only on pcre & openssl.
> > Confirmed for: > - Debian package > - CentOS 6 package > > As I have not read any deprecation message anywhere, and since its > presence is confirmed in earlier versions, why is it that way? Mistake? > > I was unable to report a bug in http://trac.nginx.org as I used to > connect to it through my Google account: 'OpenID 2.0 for Google Accounts > has gone away' > --- > *B. R.* > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 12 11:32:45 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 12 May 2015 14:32:45 +0300 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: <648e204efe8b0603473592d0a64904ec.NginxMailingListEnglish@forum.nginx.org> References: <648e204efe8b0603473592d0a64904ec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3809836.V5UsOde9sN@vbart-workstation> On Monday 11 May 2015 10:59:46 jwroblewski wrote: > Hi, > > I'm observing an inconsistent behavior of ngx_http_limit_req_module in nginx > 1.7.12. The relevant excerpts from my config: > > http { > ... > # A fixed string used as a key, to make all requests fall into the same > zone > limit_req_zone test_zone zone=test_zone:1m rate=5r/s; > ... > server { > ... > location /limit { > root /test > limit_req zone=test_zone nodelay; > } > ...
> } > } > > I use wrk to hammer the server for 5 secs: > > $ ./wrk -t 100 -c 100 -d 5 http://127.0.0.1/limit/test > Running 5s test @ http://127.0.0.1/limit/test > 100 threads and 100 connections > Thread Stats Avg Stdev Max +/- Stdev > Latency 2.82ms 2.96ms 15.12ms 88.92% > Req/Sec 469.03 190.97 0.89k 62.05% > 221531 requests in 5.00s, 81.96MB read > Non-2xx or 3xx responses: 221506 > Requests/sec: 44344.69 > Transfer/sec: 16.41MB > > So, out of 221531 sent requests, 221506 came back with an error. This gives > (221531 - 221506) = 25 successful requests in 5 secs, so 5r/s, just as > expected. So far so good. > > Now, what happens if I set rate=5000r/s: > > $ ./wrk -t 100 -c 100 -d 5 http://127.0.0.1/limit/test > Running 5s test @ http://127.0.0.1/limit/test > 100 threads and 100 connections > Thread Stats Avg Stdev Max +/- Stdev > Latency 3.64ms 5.70ms 36.58ms 87.43% > Req/Sec 443.50 191.55 0.89k 65.04% > 210117 requests in 4.99s, 77.38MB read > Non-2xx or 3xx responses: 207671 > Requests/sec: 42070.61 > Transfer/sec: 15.49MB > > This time it is (210117 - 207671) = 2446 successful requests in 5 secs, > which means 490r/s. Ten times lower than expected. > > I gathered some more figures, showing the number of 200 responses for > growing values of the "rate" parameter. > > rate=***r/s in zone cfg -- number of 200 responses > 100 -- 87 > 200 -- 149 > 500 -- 344 > 1000 -- 452 > 10000 -- 468 > 100000 -- 466 > > As you can see, the server keeps returning a pretty much constant number of > 200 responses once the "rate" parameter has surpassed 1000. > > I had a glimpse into the module's code, and this part caught my eye: > https://github.com/nginx/nginx/blob/nginx-1.7/src/http/modules/ngx_http_limit_req_module.c#L402-L414.
> Basically, if consecutive requests hit the server in the same millisecond, > the "ms = (ngx_msec_int_t) (now - lr->last)" part evaluates to 0, which sets > "excess" to 1000, which is very likely to be greater than a "burst" value, > which results in rejecting the request. This could also mean that only the > very first request hitting the server in a given millisecond would be handled, > which seems to be in line with the wrk test results I've presented above. > > Please let me know if this makes sense to you! > [..] Yes, the module is limited by millisecond timer resolution, but using such high rate values without the burst parameter is meaningless. So, just don't do that. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Tue May 12 13:25:05 2015 From: nginx-forum at nginx.us (jwroblewski) Date: Tue, 12 May 2015 09:25:05 -0400 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: <3809836.V5UsOde9sN@vbart-workstation> References: <3809836.V5UsOde9sN@vbart-workstation> Message-ID: <2781df062919612ed9062d8bb02df4da.NginxMailingListEnglish@forum.nginx.org> My use case is that upstreams are supposed to return within ~100ms, therefore using burst is not an option. I wanted to use limit_req to filter out traffic which exceeds my backend's processing capacity, but apparently it is not the right tool to use, if it only operates with millisecond precision... Could you please document this limitation? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258803,258824#msg-258824 From vbart at nginx.com Tue May 12 13:43:26 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Tue, 12 May 2015 16:43:26 +0300 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: <2781df062919612ed9062d8bb02df4da.NginxMailingListEnglish@forum.nginx.org> References: <3809836.V5UsOde9sN@vbart-workstation> <2781df062919612ed9062d8bb02df4da.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1833324.KnY0aJdEBm@vbart-workstation> On Tuesday 12 May 2015 09:25:05 jwroblewski wrote: > My use case is that upstreams are supposed to return within ~100ms, > therefore using burst is not an option. I wanted to use limit_req to filter > out traffic which is exceeds my backend's processing capacity, but > apparently it is not the right tool to use, if it only operates with > millisecond-precision... What's the problem with using burst? Could you explain why it's not an option for your case? > Could you please document this limitation? > Patches are welcome. But you're the only person I can remember who cares about it. For most of the users it would be just superfluous detail. wbr, Valentin V. Bartenev From reallfqq-nginx at yahoo.fr Tue May 12 14:46:29 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 12 May 2015 16:46:29 +0200 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: <1833324.KnY0aJdEBm@vbart-workstation> References: <3809836.V5UsOde9sN@vbart-workstation> <2781df062919612ed9062d8bb02df4da.NginxMailingListEnglish@forum.nginx.org> <1833324.KnY0aJdEBm@vbart-workstation> Message-ID: I do not necessarily have a say on what is discussed here, but: 1. I believe putting known limitations in the docs makes sense. Who defined the docs as sticking to the most common use cases? Technical docs are technical docs. 2. Using burst answers a specific need which has not been expressed here (on the contrary, I think it is mandatory that such behavior does not happen in this use case). 3. Expecting nginx to serve more than 1000 r/s is reasonable for a Web server claiming to scale and having been designed for HA.
Silently ignoring extra requests per millisecond is awful, especially if no-one knows about it. The users are not supposed to guess by themselves that using request limiting will make their availability silently drop... 4. This user is maybe the first, but it is over-optimistic to assume he is alone. It could be interesting to know if you discarded documenting some other known limitations in your product(s), deciding on behalf of the users what they should/need to know or not. --- *B. R.* On Tue, May 12, 2015 at 3:43 PM, Valentin V. Bartenev wrote: > On Tuesday 12 May 2015 09:25:05 jwroblewski wrote: > > My use case is that upstreams are supposed to return within ~100ms, > > therefore using burst is not an option. I wanted to use limit_req to > filter > > out traffic which is exceeds my backend's processing capacity, but > > apparently it is not the right tool to use, if it only operates with > > millisecond-precision... > > What's problem with using burst? Could you explain why it's not an option > for your case? > > > > Could you please document this limitation? > > > Patches are welcome. > > But you're the only person I can remember who cares about it. For most of > the users it will be just superfluous details. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 12 16:28:15 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 12 May 2015 19:28:15 +0300 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: References: <3809836.V5UsOde9sN@vbart-workstation> <1833324.KnY0aJdEBm@vbart-workstation> Message-ID: <1842744.aFq5uHhJco@vbart-workstation> On Tuesday 12 May 2015 16:46:29 B.R. wrote: > I do not necessarily have a say on what is discussed here, but: > > 1.
I believe putting known limitations in the docs makes sense. Who > defined the docs as sticking to the most common use cases? Technical docs > are technical docs. I agree with you. But there are thousands of "limitations" imposed by the algorithm used, or by the nginx internal optimizations, or by how the hardware works. If we tried to document every detail, the documentation would be bloated and not human-readable at all. The most detailed technical documentation is the source code itself. So we try to strike a balance and not turn our documentation into source code. > 2. Using burst answers a specific need which has not been expressed here > (on the contrary I think it is mandatory such a behavior does not happen in > this use case). Using the limit req module, based on the "leaky bucket" algorithm, with the bucket size actually set to zero (i.e. burst isn't set) in most cases is just a misconfiguration. I'm sure in the current case it's a misconfiguration too. > 3. Awaiting nginx to serve more than 1000 r/s is reasonable for a Web > server claiming to scale and having been designed for HA. Silently ignoring > extra requests per millisecond is awful, especially if no-one knows about > it. > The users are not supposed to guess by themselves that using requests > limiting will make their availability silently drop... Nobody said that it cannot serve more than 1000 r/s, but you must configure burst in this case (and actually you should in most cases). A missing burst option is usually just a result of misunderstanding how the leaky bucket works. > 4. This user is maybe the first, but it is over-optimistic to consider > he might be alone. It's probably worth documenting, but first of all let's find out if it's really a limitation, or just a misconfiguration. > > It could be interesting to know if you discarded documenting some other > known limitations in your product(s), deciding on the behalf of the users > what they should/need to know or not.
nginx-devel@ archive is here: http://mailman.nginx.org/pipermail/nginx-devel/ Could you find a patch for documentation that has been discarded? wbr, Valentin V. Bartenev > On Tue, May 12, 2015 at 3:43 PM, Valentin V. Bartenev > wrote: > > > On Tuesday 12 May 2015 09:25:05 jwroblewski wrote: > > > My use case is that upstreams are supposed to return within ~100ms, > > > therefore using burst is not an option. I wanted to use limit_req to > > filter > > > out traffic which is exceeds my backend's processing capacity, but > > > apparently it is not the right tool to use, if it only operates with > > > millisecond-precision... > > > > What's problem with using burst? Could you explain why it's not an option > > for your case? > > > > > > > Could you please document this limitation? > > > > > > > Patches are welcome. > > > > But you're the only person I can remember who cares about it. For most of > > the users it will be just superfluous details. > > > > wbr, Valentin V. Bartenev > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > From nginx-forum at nginx.us Tue May 12 16:33:11 2015 From: nginx-forum at nginx.us (jwroblewski) Date: Tue, 12 May 2015 12:33:11 -0400 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: <1833324.KnY0aJdEBm@vbart-workstation> References: <1833324.KnY0aJdEBm@vbart-workstation> Message-ID: Valentin V. Bartenev Wrote: ------------------------------------------------------- > On Tuesday 12 May 2015 09:25:05 jwroblewski wrote: > > My use case is that upstreams are supposed to return within ~100ms, > > therefore using burst is not an option. I wanted to use limit_req to > filter > > out traffic which is exceeds my backend's processing capacity, but > > apparently it is not the right tool to use, if it only operates with > > millisecond-precision... > > What's problem with using burst? 
Could you explain why it's not an > option > for your case? My nginx receives X r/s (let's assume X equals ~50000), and is supposed to respond within 100ms to every single one of them. Requests are dispatched to upstreams, which can only handle a total of Y r/s in a timely manner (Y being less than X, say 20000). Knowing the capacity of my upstreams, I want nginx to *immediately* drop all excessive requests. This means only the first Y requests which came in during a given second are to be pushed to upstreams; the remaining ones, starting from Y+1, are to be *immediately* 503'ed. The reason why I cannot use burst is that burst introduces queuing, which means by the time the request leaves nginx, it is already late by some milliseconds, while I want the whole solution to be as real-time as possible. Having read the docs, I got the impression that "burst=0 nodelay" would let me achieve the goal outlined above. Burst enables "recovery" of excessive requests, while I want these dropped. Still, I might have gotten the docs wrong... > > > > > Could you please document this limitation? > > > > Patches are welcome. > > But you're the only person I can remember who cares about it. For > most of > the users it will be just superfluous details. I will be happy to submit a doc patch. It is possible that some users are simply not aware of this limitation, and lose requests because of it. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258803,258832#msg-258832 From vbart at nginx.com Tue May 12 17:00:01 2015 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Tue, 12 May 2015 20:00:01 +0300 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: References: <1833324.KnY0aJdEBm@vbart-workstation> Message-ID: <2724224.A02mZAE7tP@vbart-workstation> On Tuesday 12 May 2015 12:33:11 jwroblewski wrote: > Valentin V. Bartenev Wrote: > ------------------------------------------------------- > > On Tuesday 12 May 2015 09:25:05 jwroblewski wrote: > > > My use case is that upstreams are supposed to return within ~100ms, > > > therefore using burst is not an option. I wanted to use limit_req to > > filter > > > out traffic which is exceeds my backend's processing capacity, but > > > apparently it is not the right tool to use, if it only operates with > > > millisecond-precision... > > > > What's problem with using burst? Could you explain why it's not an > > option > > for your case? > > My nginx receives X r/s (lets assume X equals ~50000), and is supposed to > respond within 100ms to every single of them. > Requests are dispatched to upstreams, which can only handle a total of Y r/s > in a timely manner (Y being less than X, say 20000). > Knowing the capacity of my upstreams, I want nginx to *immediately* drop all > excessive requests. This means, only first Y requests which came in during > given second are to be pushed to upstreams, the remaining ones, starting > from Y+1, are to be *immediately* 503'ed. The problem is that there's no "given second" in the leaky bucket algorithm, and *immediately* isn't a technical term, since nothing happens immediately; "immediately" always has some measurable duration. For example, setting rate=10r/s literally means that every request that comes earlier than 100ms after the previous one will be discarded. Even if a client has made only 100 requests, if they arrived within a 100 ms interval then 99 will be discarded. To allow bursts in request rate (bursts are natural and they always exist in normal HTTP traffic), the burst parameter must be configured.
> > The reason why I can not use burst, it that burst introduces queuing, which > means by the time the request leaves nginx, it is already late by some > milliseconds, while I want the whole solution to be as real time as > possible. No, it doesn't introduce any delays or queuing if you have the "nodelay" option set (btw, "nodelay" actually does nothing without the burst set). You should try something like: limit_req zone=test_zone burst=100 nodelay; Even burst=5 will let limit_req with rate=5000r/s behave in your tests as you expected. wbr, Valentin V. Bartenev > > Having read the docs, I got the impressions that with "burst=0 nodelay" will > let me achieve the goal outlined above. Burst enables "recovery" of > excessive requests, while I want these dropped. Still, I might have gotten > the docs wrong... > > > > > > Could you please document this limitation? > > > > > > > Patches are welcome. > > > > But you're the only person I can remember who cares about it. For > > most of > > the users it will be just superfluous details. > > I will be happy to submit a doc patch. It is possible that some users are > simply not aware of this limitation, and loose requests because of it. > > > > > wbr, Valentin V. Bartenev > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258803,258832#msg-258832 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From mdounin at mdounin.ru Tue May 12 17:46:50 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 12 May 2015 20:46:50 +0300 Subject: Possible limitation of ngx_http_limit_req_module In-Reply-To: References: <1833324.KnY0aJdEBm@vbart-workstation> Message-ID: <20150512174650.GA98215@mdounin.ru> Hello!
On Tue, May 12, 2015 at 12:33:11PM -0400, jwroblewski wrote: > Valentin V. Bartenev Wrote: > ------------------------------------------------------- > > On Tuesday 12 May 2015 09:25:05 jwroblewski wrote: > > > My use case is that upstreams are supposed to return within ~100ms, > > > therefore using burst is not an option. I wanted to use limit_req to > > filter > > > out traffic which is exceeds my backend's processing capacity, but > > > apparently it is not the right tool to use, if it only operates with > > > millisecond-precision... > > > > What's problem with using burst? Could you explain why it's not an > > option > > for your case? > > My nginx receives X r/s (lets assume X equals ~50000), and is supposed to > respond within 100ms to every single of them. > Requests are dispatched to upstreams, which can only handle a total of Y r/s > in a timely manner (Y being less than X, say 20000). > Knowing the capacity of my upstreams, I want nginx to *immediately* drop all > excessive requests. This means, only first Y requests which came in during > given second are to be pushed to upstreams, the remaining ones, starting > from Y+1, are to be *immediately* 503'ed. > > The reason why I can not use burst, it that burst introduces queuing, which > means by the time the request leaves nginx, it is already late by some > milliseconds, while I want the whole solution to be as real time as > possible. > > Having read the docs, I got the impressions that with "burst=0 nodelay" will > let me achieve the goal outlined above. Burst enables "recovery" of > excessive requests, while I want these dropped. Still, I might have gotten > the docs wrong... The "nodelay" alone will let you achieve the goal. The "burst" should be set to a non-zero value to allow the algorithm to tolerate peaks - that is, to tolerate cases when several requests are processed at once. 
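[The leaky-bucket accounting being discussed in this thread can be sketched in Python. This is a simplified, illustrative model inserted for reference, not the actual ngx_http_limit_req_module source; the class and parameter names are invented for the example.]

```python
# Simplified model of leaky-bucket rate limiting with millisecond
# timestamps, loosely modelled on the behavior discussed in this
# thread. Not the actual nginx code; names are illustrative.

class LeakyBucket:
    def __init__(self, rate_rps, burst=0):
        self.rate = rate_rps   # configured rate, requests per second
        self.burst = burst     # extra requests tolerated during a peak
        self.excess = 0.0      # accumulated excess, measured in requests
        self.last_ms = None    # arrival time of the last accepted request

    def allow(self, now_ms):
        if self.last_ms is None:           # first request ever seen
            self.last_ms = now_ms
            return True
        elapsed_ms = now_ms - self.last_ms  # 0 within the same millisecond
        # Drain the bucket for the elapsed time, then add this request.
        excess = self.excess - self.rate * elapsed_ms / 1000.0 + 1.0
        excess = max(excess, 0.0)
        if excess > self.burst:
            return False                   # rejected; state is not advanced
        self.excess = excess
        self.last_ms = now_ms
        return True

# With burst=0, a second request arriving in the same millisecond is
# rejected no matter how high the rate is, since no time has elapsed:
b = LeakyBucket(rate_rps=5000, burst=0)
print(b.allow(0), b.allow(0))              # True False

# A small burst absorbs same-millisecond peaks:
b = LeakyBucket(rate_rps=5000, burst=5)
print(sum(b.allow(0) for _ in range(10)))  # 6
```

This also illustrates why the burst parameter, rather than a higher rate, is what absorbs requests that arrive closer together than the timer resolution.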
As timekeeping in nginx uses millisecond resolution, it certainly doesn't make sense to use burst less than the expected traffic in 1ms, 20 requests in your case. In practice, 1ms is rather optimistic - e.g., a disk seek can easily take 10ms alone, and you'll see a 10ms burst if a request triggers a disk seek. As long as you want to respond within 100ms, you should probably tolerate bursts equal to at least about 10ms of traffic, that is, up to 200 requests. Note well that all the words you write about "first Y requests during given second" imply counting requests over a _second_. That is, it's about burst equal to expected traffic in 1 second, 20000 requests in your case. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed May 13 02:17:27 2015 From: nginx-forum at nginx.us (Fabbbio) Date: Tue, 12 May 2015 22:17:27 -0400 Subject: Nginx 1.9 from package & RTMP Message-ID: <41fc3190ea5bb5d42707a59f59cab1eb.NginxMailingListEnglish@forum.nginx.org> I just installed nginx 1.9 on my Ubuntu 15.04 machine using the precompiled package ("apt-get install"); nginx is working. Now how can I add the RTMP module? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258836,258836#msg-258836 From arut at nginx.com Wed May 13 05:14:51 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 May 2015 08:14:51 +0300 Subject: Nginx 1.9 from package & RTMP In-Reply-To: <41fc3190ea5bb5d42707a59f59cab1eb.NginxMailingListEnglish@forum.nginx.org> References: <41fc3190ea5bb5d42707a59f59cab1eb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <769CB763-A7D6-4304-8F9C-8F03872181F4@nginx.com> On 13 May 2015, at 05:17, Fabbbio wrote: > I just install nginx 1.9 on my ubuntu 15.04 machine using precompiled > package "apt-get install", nginx in working and now how can I add RTMP > module? You cannot add a module to a compiled nginx. To add a module you should recompile it from source with the appropriate --add-module option.
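[For reference, a source build with a third-party module generally follows the pattern below. This is an illustrative sketch, not a tested recipe: the nginx version number, paths, and the RTMP module checkout location are placeholders to substitute with the ones you actually use.]

```shell
# Illustrative recipe for building nginx with a third-party module.
# Version number and paths are placeholders -- adjust for your system.
wget http://nginx.org/download/nginx-1.9.0.tar.gz
tar xzf nginx-1.9.0.tar.gz
git clone https://github.com/arut/nginx-rtmp-module.git
cd nginx-1.9.0
./configure --prefix=/usr/local/nginx --add-module=../nginx-rtmp-module
make
sudo make install
```

Note that a binary built this way replaces the packaged one, so the apt-installed nginx should be removed or stopped first to avoid the two installations conflicting.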
From nginx-forum at nginx.us Wed May 13 09:26:40 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:26:40 -0400 Subject: =?UTF-8?B?7JWE7Iuc7JWI7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv?= =?UTF-8?B?77yt44CR77y877y87JWE7Iuc7JWI7Lm07KeA64W4?= Message-ID: Asian and World Casino???????M X D 7 . C O M??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258841,258841#msg-258841 From nginx-forum at nginx.us Wed May 13 09:27:10 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:27:10 -0400 Subject: =?UTF-8?B?7JuU65Oc7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv77yt?= =?UTF-8?B?44CR77y877y87JuU65Oc7Lm07KeA64W4?= Message-ID: <7fa5276a9cacf10ad30a18d1b59b684b.NginxMailingListEnglish@forum.nginx.org> Asian and World Casino??????M X D 7 . 
C O M?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258842,258842#msg-258842 From nginx-forum at nginx.us Wed May 13 09:27:41 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:27:41 -0400 Subject: =?UTF-8?B?7L2U66as7JWE7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv?= =?UTF-8?B?77yt44CR77y877y87L2U66as7JWE7Lm07KeA64W4?= Message-ID: Asian and World Casino???????M X D 7 . 
C O M??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258843,258843#msg-258843 From nginx-forum at nginx.us Wed May 13 09:28:01 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:28:01 -0400 Subject: =?UTF-8?B?66mU6rCA7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv77yt?= =?UTF-8?B?44CR77y877y866mU6rCA7Lm07KeA64W4?= Message-ID: <37e04979a8f439bd6bda8ec9830b85f9.NginxMailingListEnglish@forum.nginx.org> Asian and World Casino??????M X D 7 . 
C O M?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258844,258844#msg-258844 From nginx-forum at nginx.us Wed May 13 09:28:25 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:28:25 -0400 Subject: =?UTF-8?B?7JWE7Iuc7JWI7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv?= =?UTF-8?B?77yt44CR77y877y87JWE7Iuc7JWI7Lm07KeA64W4?= Message-ID: <413983cc1b7e952211323e53569ec0ef.NginxMailingListEnglish@forum.nginx.org> Asian and World Casino???????M X D 7 . 
C O M??????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258845,258845#msg-258845 From nginx-forum at nginx.us Wed May 13 09:28:44 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:28:44 -0400 Subject: =?UTF-8?B?7JuU65Oc7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv77yt?= =?UTF-8?B?44CR77y877y87JuU65Oc7Lm07KeA64W4?= Message-ID: <5032c17a50c082caff6fb324d3359224.NginxMailingListEnglish@forum.nginx.org> Asian and World Casino??????M X D 7 . 
C O M?????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????????? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258846,258846#msg-258846 From nginx-forum at nginx.us Wed May 13 09:29:03 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:29:03 -0400 Subject: =?UTF-8?B?7L2U66as7JWE7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv?= =?UTF-8?B?77yt44CR77y877y87L2U66as7JWE7Lm07KeA64W4?= Message-ID: Asian and World Casino???????M X D 7 . 
C O M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258847,258847#msg-258847 From nginx-forum at nginx.us Wed May 13 09:29:27 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:29:27 -0400 Subject: =?UTF-8?B?66mU6rCA7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv77yt?= =?UTF-8?B?44CR77y877y866mU6rCA7Lm07KeA64W4?= Message-ID: <870b506fad105cf5ba6ef26efcb150f7.NginxMailingListEnglish@forum.nginx.org> Asian and World Casino M X D 7 .
C O M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258848,258848#msg-258848 From nginx-forum at nginx.us Wed May 13 09:29:52 2015 From: nginx-forum at nginx.us (goerge33) Date: Wed, 13 May 2015 05:29:52 -0400 Subject: =?UTF-8?B?7Zes66Gc7Lm07KeA64W477y877y844CQ77yt77y477yk77yX44CC77yj77yv77yt?= =?UTF-8?B?44CR77y877y87Zes66Gc7Lm07KeA64W4?= Message-ID: <07d1e4fb4717a85edf3d11e952395bd1.NginxMailingListEnglish@forum.nginx.org> Asian and World Casino M X D 7 .
C O M Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258849,258849#msg-258849 From nginx-forum at nginx.us Wed May 13 12:08:03 2015 From: nginx-forum at nginx.us (erankor2) Date: Wed, 13 May 2015 08:08:03 -0400 Subject: Serving files from a slow NFS storage In-Reply-To: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> References: <162f222547c3cb76398d18ddc0592a1e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <8b1e10fe6f6cf3a2c8416569b9aea9be.NginxMailingListEnglish@forum.nginx.org> An update on this - I ended up implementing support for asynchronous file open, based on the thread pool feature that was added in nginx 1.7.11.
I copied nginx's ngx_open_file_cache.c (from 1.9.0) and made it asynchronous, source code is here: https://github.com/kaltura/nginx-vod-module/blob/master/ngx_async_open_file_cache.c (you can diff it with ngx_open_file_cache.c to see the changes) If there are any nginx core developers on this thread - I would really love to see this feature make its way to the nginx core, so that I won't have this code duplication with the builtin ngx_open_file_cache. The feature was very thoroughly tested for any race conditions etc., test script is here: https://github.com/kaltura/nginx-vod-module/blob/master/test/test_open_file_cache.py Thanks, Eran Posted at Nginx Forum: http://forum.nginx.org/read.php?2,255847,258854#msg-258854 From arut at nginx.com Wed May 13 12:26:26 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 13 May 2015 15:26:26 +0300 Subject: strange behavior for cache manager In-Reply-To: <94620e11c1099cd822965b32c045e81f.NginxMailingListEnglish@forum.nginx.org> References: <7131f249e636925c63e69bdc7c4f6187.NginxMailingListEnglish@forum.nginx.org> <94620e11c1099cd822965b32c045e81f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, On 10 Apr 2015, at 13:36, stanojr wrote: > Can confirm this bug, we have same problem. But i dont know yet how to > reproduce it. > Nothing strange in logs. error_log set to notice level Here's a patch providing a workaround for the problem and logging more information about cache entry lock problems. http://mailman.nginx.org/pipermail/nginx-ru/2015-May/055937.html Note that most probably this problem occurs when one of nginx workers is killed manually. With the patch applied, the nginx cache manager skips cache entries which seem to be locked by a killed worker and logs an error. We'd like to receive more feedback from people experiencing this problem. For this please apply the patch and post (or just watch) error.log with the "notice" loglevel since nginx start.
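The "notice" level requested above is set in the main configuration context; a minimal fragment (the log path is illustrative):

```
error_log /var/log/nginx/error.log notice;
```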
-- Roman Arutyunyan From nginx-forum at nginx.us Thu May 14 02:26:07 2015 From: nginx-forum at nginx.us (hy05190134) Date: Wed, 13 May 2015 22:26:07 -0400 Subject: about ssl support of nginx Message-ID: <27a8faa4f708459b1789cf57c84de033.NginxMailingListEnglish@forum.nginx.org> we use "--with-http_spdy_module --with-http_ssl_module --with-openssl=$(ROOTDIR)/deps/openssl-$(V_OPENSSL) --with-openssl-opt=darwin64-x86_64-cc" to embed SSL support into nginx on a macOS system, but we find that "./config", which is invoked from nginx/auto/lib/openssl/make, does not work, while "./Configure" can be run successfully. So, is this a problem in nginx or in OpenSSL? In my opinion, the SSL build does not take macOS into account when OpenSSL is embedded into nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258863,258863#msg-258863 From nginx-forum at nginx.us Thu May 14 10:29:34 2015 From: nginx-forum at nginx.us (itpp2012) Date: Thu, 14 May 2015 06:29:34 -0400 Subject: [ANN] Windows nginx 1.9.1.1 Lizard Message-ID: 11:16 14-5-2015 nginx 1.9.1.1 Lizard Based on nginx 1.9.1.1 (8-5-2015, with 'stream' tcp load balancer) with; + pcre-8.37 (upgraded, regression tested) + During re-factoring nginx for Windows we've switched code base which makes it easier for us to import original nginx code without Windows issues by using a new native linux <> windows low level API which natively deals with spinlock, mutex locking, Windows event driven technology and full thread separation nginx 1.9 currently has 1 known issue; ajp cache, which basically has an issue with the 1.7.12 code base caching (without cache ajp works fine) https://github.com/yaoweibin/nginx_ajp_module/issues/37 nb.
prove05 will have crashes / failed tests due to this issue + 1.9 api change fixes across all modules - rtmp, 1.7.12.1 is the last free version with rtmp, we do have a rtmp special offer for the 1.9 branch (which without rtmp you could use to tcp load balance 1.7.12.1 with rtmp) * 1.7.12 will be kept up to date with critical patches and fixes only, no new functions will be added or imported. LTS versions are not affected * Issues with spdy: http://trac.nginx.org/nginx/ticket/714 http://trac.nginx.org/nginx/ticket/626 http://trac.nginx.org/nginx/ticket/346 disable spdy if you have this issue + Source changes back ported + Source changes add-on's back ported + Changes for nginx_basic: Source changes back ported * Scheduled release: yes * Additional specifications: see 'Feature list' * This release is dedicated to my beloved wife Shirley Anne aged 57 who passed away this May, I shall miss her dearly. After a 40 year relentless battle with the effects of diabetes a welsh dragon has lost her fight "Mae hen wlad fy nhadau lle rhuo y Dreigiau" Builds can be found here: http://nginx-win.ecsds.eu/ Follow releases https://twitter.com/nginx4Windows Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258866,258866#msg-258866 From guido at motrixi.com Thu May 14 13:22:04 2015 From: guido at motrixi.com (Guido Accardo) Date: Thu, 14 May 2015 10:22:04 -0300 Subject: Location not found when using / Message-ID: Hi, I'm having a few problems with my routes and I'll appreciate any help that you could provide. 
Here is my nginx configuration:

upstream internal {
    server 10.0.0.13:9001;
    server 10.0.0.13:9002;
    server 10.0.0.13:9003;
    server 10.0.0.13:9004;
    server 10.0.0.15:9001;
    server 10.0.0.15:9002;
    server 10.0.0.15:9003;
    server 10.0.0.15:9004;
    keepalive 1024;
}

server {
    listen 12340;

    location / {
        proxy_pass http://internal;
    }
}

All the processes in the upstream match the route: /process/a/N Everything is running ok, but in a random fashion routes that worked in the past, such as /process/a/1 or /process/a/2, return HTTP 404 and the request never reaches the upstream servers. So I think it is nginx itself answering with the 404. Also, in the logs I see: /usr/local/nginx/html/process/a/1 failed (2: No such file or directory) which makes no sense given I didn't set a root in nginx.conf. Thank you in advance. -- --- Guido Accardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From zxcvbn4038 at gmail.com Thu May 14 15:59:54 2015 From: zxcvbn4038 at gmail.com (CJ Ess) Date: Thu, 14 May 2015 11:59:54 -0400 Subject: Best approach for web farm type setup? Message-ID: What is the best approach for having nginx in a web farm type setup where I want to forward http connections to a proxy upstream if they match one of a very long/highly dynamic list of host names? All of the host names we are interested in will resolve to our address space, so could it be as simple as defining a resolver and having an allow for our CIDR's? Or do I need something more elaborate like a database of allowed hostnames? A related question might be, what's the best approach if I wanted to throw TLS into the mix? Would I need to keep SSL certs for each of my very long/highly dynamic list of hosts resident? Or is there a way to manage that more dynamically? Assume that everyone connecting supports SNI. In both cases I'm just looking for high level/best practices. I can work out the details but want to make sure I'm going the right direction and asking the right questions.
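As a starting point for the first question, a catch-all virtual server can proxy any Host it receives; this is only a sketch, it does not validate hostnames against a list, and the upstream name is a placeholder:

```
server {
    listen 80 default_server;
    server_name _;                    # catch-all: matches any Host header

    location / {
        proxy_set_header Host $host;  # pass the requested hostname upstream
        proxy_pass http://backend_pool;
    }
}
```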
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 14 16:21:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 14 May 2015 19:21:44 +0300 Subject: about ssl support of nginx In-Reply-To: <27a8faa4f708459b1789cf57c84de033.NginxMailingListEnglish@forum.nginx.org> References: <27a8faa4f708459b1789cf57c84de033.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150514162144.GD98215@mdounin.ru> Hello! On Wed, May 13, 2015 at 10:26:07PM -0400, hy05190134 wrote: > we use "--with-http_spdy_module --with-http_ssl_module > --with-openssl=$(ROOTDIR)/deps/openssl-$(V_OPE NSSL) > --with-openssl-opt=darwin64-x86_64-cc" to embed ssl support into nginx in > macos system , but we find that "./config" which is in the file of > nginx/auto/lib/openssl/make doesn't make action, but "./Configure" can be > run successfully. > > So, is it the problem of nginx or openssl, in my option, ssl doesn't take > macos into account for embedded nginx. There are two basic options: - define KERNEL_BITS environment variable to 64 to instruct OpenSSL's ./config to do the right thing; - build OpenSSL yourself with any options you want (or, e.g., install one from MacPorts), and then instruct nginx to use appropriate headers and library files using the "--with-cc-opt" and "--with-ld-opt" configure options. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Fri May 15 06:01:00 2015 From: nginx-forum at nginx.us (hy05190134) Date: Fri, 15 May 2015 02:01:00 -0400 Subject: about ssl support of nginx In-Reply-To: <20150514162144.GD98215@mdounin.ru> References: <20150514162144.GD98215@mdounin.ru> Message-ID: thanks a lot, I will try for your suggestions. 
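The two options Maxim describes can be expressed as build commands; this is a sketch only, with illustrative paths and an assumed OpenSSL layout (the original thread elides them):

```
# Option 1: let OpenSSL's ./config detect the 64-bit darwin target
export KERNEL_BITS=64
./configure --with-http_ssl_module --with-http_spdy_module \
            --with-openssl=/path/to/openssl

# Option 2: link against an OpenSSL built separately (e.g. from MacPorts)
./configure --with-http_ssl_module \
            --with-cc-opt="-I/opt/local/include" \
            --with-ld-opt="-L/opt/local/lib"
```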
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258863,258885#msg-258885 From 1070903212 at qq.com Fri May 15 08:38:36 2015 From: 1070903212 at qq.com (=?gb18030?B?18/B6Nauu+o=?=) Date: Fri, 15 May 2015 16:38:36 +0800 Subject: issues about nginx proxy_cache Message-ID: Hi, dear all. It is a great pleasure to join the nginx mailing list, but I have run into a problem using nginx 1.7.9 as a reverse proxy server. More details follow. My design requirement is this: I want nginx to download the files locally by following the HTTP 302 response itself. Unfortunately, nginx passes the 302 redirect link straight to my browser, and when my browser receives the response it downloads the files from the redirected link. That means the video file download does not go through nginx. For example: my-browser ----------> Server-A(nginx)---------->Server-B(Server local file) Server-C(Server has video-file) |<-------302+C-addr-------| <--------302 C-addr--------| |----------------------request video file------------------------------------------------->| |<-----------------------200 OK video file -----------------------------------------------| My problem is that Server-A doesn't cache the video file. I tried the two cache strategies below, but neither had any effect. How can I fix it?
First I use proxy_store nginx.conf as follows : ----------------------------------------------------------- events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; server { listen 8065; server_name localhost; location / { expires 3d; proxy_set_header Accept-Encoding ''; root /home/mpeg/nginx; proxy_store on; proxy_store_access user:rw group:rw all:rw; proxy_temp_path /home/mpeg/nginx; if ( !-e $request_filename) { proxy_pass http://172.30.25.246:8099; } } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } ------------------------------------------------------------ And then I use proxy_cache,nginx.conf as follows ------------------------------------------------------------------------ events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; client_body_buffer_size 512k; proxy_connect_timeout 10; proxy_read_timeout 180; proxy_send_timeout 5; proxy_buffer_size 16k; proxy_buffers 4 64k; proxy_busy_buffers_size 128k; proxy_temp_file_write_size 128k; proxy_temp_path /home/mpeg/cache/temp; proxy_cache_path /home/mpeg/cache levels=1:2 keys_zone=content:20m inactive=1d max_size=100m; server { listen 8064; server_name localhost; location / { proxy_cache content; proxy_cache_valid 200 302 24h; proxy_cache_valid any 1d; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_cache_key $host$uri$is_args$args; proxy_pass http://192.168.15.159:7090; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } } --------------------------------------------------------------------------- anything will be help , Thanks -------------- next part -------------- An HTML attachment was scrubbed... 
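One pattern sometimes used to make nginx follow the 302 itself (so the final body can be cached) is to intercept the redirect and re-issue the request from a named location. This is an untested sketch against the configuration above; the resolver address is a placeholder and is only needed if the Location header contains a hostname:

```
location / {
    proxy_cache content;
    proxy_pass http://192.168.15.159:7090;
    proxy_intercept_errors on;           # hand 3xx responses to error_page
    error_page 301 302 = @follow_redirect;
}

location @follow_redirect {
    resolver 8.8.8.8;                    # required for variable proxy_pass
    set $target $upstream_http_location; # the Location: header of the 302
    proxy_cache content;
    proxy_cache_key $target;
    proxy_pass $target;
}
```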
URL: From runyu91372468 at sina.cn Fri May 15 09:04:37 2015 From: runyu91372468 at sina.cn (runyu91372468 at sina.cn) Date: Fri, 15 May 2015 17:04:37 +0800 Subject: =?UTF-8?B?5Zue5aSN77yaV2VsY29tZSB0byB0aGUgIm5naW54IiBtYWlsaW5nIGxpc3QgKERp?= =?UTF-8?B?Z2VzdCBtb2RlKQ==?= Message-ID: <20150515090437.1B70C4502A1@webmail.sinamail.sina.com.cn> Dear ALL: When I use nginx1.7.9 as a reverse-proxy-server. It will transmit http-302 message to my browser. -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Fri May 15 10:11:12 2015 From: lists at ruby-forum.com (Serge Negodyuck) Date: Fri, 15 May 2015 12:11:12 +0200 Subject: strange behavior for cache manager In-Reply-To: <7131f249e636925c63e69bdc7c4f6187.NginxMailingListEnglish@forum.nginx.org> References: <7131f249e636925c63e69bdc7c4f6187.NginxMailingListEnglish@forum.nginx.org> Message-ID: I'm interested what linux kernel version and distribution do you use? -- Posted via http://www.ruby-forum.com/. From vbart at nginx.com Fri May 15 13:10:43 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 15 May 2015 16:10:43 +0300 Subject: Location not found when using / In-Reply-To: References: Message-ID: <2225968.SJEW1U7j3S@vbart-workstation> On Thursday 14 May 2015 10:22:04 Guido Accardo wrote: > Hi, > > I'm having a few problems with my routes and I'll appreciate any help > that you could provide. 
> > Here is my nginx configuration: > > upstream internal { > server 10.0.0.13:9001; > server 10.0.0.13:9002; > server 10.0.0.13:9003; > server 10.0.0.13:9004; > server 10.0.0.15:9001; > server 10.0.0.15:9002; > server 10.0.0.15:9003; > server 10.0.0.15:9004; > keepalive 1024; > } > > server { > listen 12340; > > location / { > proxy_pass http://internal; > } > } > > > Al the processes in the upstream matches the route: /process/a/N > > Everything is running ok, but in a random fashion routes that worked in the > past, such as > /process/a/1 or /process/a/2 returns as HTTP/404 and the request never > reaches the upstream servers. So I think is nginx itself answering with the > 404. > > Also, in the logs I see: > > /usr/local/nginx/html/process/a/1 failed (2: No such file or directory) > > which has no sense given I didn't set a root and in nginx.conf. > > Thank you in advance. > The configuration above isn't complete. You probably have another "server" block, which handles requests. wbr, Valentin V. Bartenev From erik.l.nelson at bankofamerica.com Fri May 15 13:22:27 2015 From: erik.l.nelson at bankofamerica.com (Nelson, Erik - 2) Date: Fri, 15 May 2015 13:22:27 +0000 Subject: notification on child process exit In-Reply-To: References: <20150514162144.GD98215@mdounin.ru> Message-ID: <9FB8528595F3BE4E9D4AAB664B7A500D16850D90@smtp_mail.bankofamerica.com> I fork + exec child processes and I would like to get some kind of notification when they exit. However, the nginx signal handler reaps my child processes and logs a message like 2015/05/14 14:12:24 [notice] 28033#0: signal 17 (SIGCHLD) received 2015/05/14 14:12:24 [notice] 28033#0: unknown process 28044 exited with code 1 Nginx knows that this process isn't one of its own children- is there some way I can either register interest in my child process, or get an event sent to me somehow on child exit? FWIW, I'm using nginx-1.7.7 with the ngx_lua extension on an older RedHat server. 
Thanks Erik ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. From igal at lucee.org Fri May 15 19:30:11 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Fri, 15 May 2015 12:30:11 -0700 Subject: SSL Session Cache on Windows In-Reply-To: <55563E0B.4010803@lucee.org> References: <55563E0B.4010803@lucee.org> Message-ID: <55564943.6090904@lucee.org> hi, I know that in the past the directive: ssl_session_cache shared:SSL:1m; did not work on Windows. in nginx 1.9.0 I see "*) Feature: shared memory can now be used on Windows versions with address space layout randomization." does that mean that ssl_session_cache can be used on Windows now? if so, is 1m a good value? thanks, Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 15 19:42:53 2015 From: nginx-forum at nginx.us (andiger) Date: Fri, 15 May 2015 15:42:53 -0400 Subject: X-Accel-Redirect cannot work with a filter? In-Reply-To: References: Message-ID: <6c61a54a3dfce45ef3bb0bbca9fa8f1d.NginxMailingListEnglish@forum.nginx.org> I have a similar setup, is there a way to use image_filter after the backend responded with "X-Accel-Redirect" header ? if so, how can i pass the parameters i.e. to resize the image: width and hight ? 
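One way this is commonly wired up is to have the backend put the desired dimensions in the query string of the X-Accel-Redirect URI and read them via $arg_* in an internal location. This is an untested sketch; the path and parameter names are illustrative:

```
# backend responds with:
#   X-Accel-Redirect: /protected/img/photo.jpg?w=150&h=150

location /protected/img/ {
    internal;                          # reachable only via X-Accel-Redirect
    image_filter resize $arg_w $arg_h;
    image_filter_buffer 10M;
}
```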
cheers andre Posted at Nginx Forum: http://forum.nginx.org/read.php?2,113039,258913#msg-258913 From mdounin at mdounin.ru Sun May 17 03:31:59 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sun, 17 May 2015 06:31:59 +0300 Subject: SSL Session Cache on Windows In-Reply-To: <55564943.6090904@lucee.org> References: <55563E0B.4010803@lucee.org> <55564943.6090904@lucee.org> Message-ID: <20150517033159.GA72766@mdounin.ru> Hello! On Fri, May 15, 2015 at 12:30:11PM -0700, Igal @ Lucee.org wrote: > hi, > > I know that in the past the directive: > > ssl_session_cache shared:SSL:1m; > > did not work on Windows. > > in nginx 1.9.0 I see "*) Feature: shared memory can now be used on > Windows versions with address space layout randomization." > > does that mean that ssl_session_cache can be used on Windows now? if > so, is 1m a good value? Yes, now ssl_session_cache with shared cache can be used on all versions of Windows. It's not really a big deal though, as ssl_session_cache with builtin cache was available all the time, and there is no serious difference as it's not possible to use multiple worker processes on Windows. The 1m is expected to store about 4000 sessions, see http://nginx.org/r/ssl_session_cache. This is enough assuming less sessions are created during ssl_session_timeout (5m by default). -- Maxim Dounin http://nginx.org/ From igal at lucee.org Sun May 17 03:37:32 2015 From: igal at lucee.org (Igal @ Lucee.org) Date: Sat, 16 May 2015 20:37:32 -0700 Subject: SSL Session Cache on Windows In-Reply-To: <20150517033159.GA72766@mdounin.ru> References: <55563E0B.4010803@lucee.org> <55564943.6090904@lucee.org> <20150517033159.GA72766@mdounin.ru> Message-ID: <55580CFC.1020900@lucee.org> Thank you, Maxim, for your reply. > It's not really a big deal though, as > ssl_session_cache with builtin cache was available all the time, > and there is no serious difference as it's not possible to use > multiple worker processes on Windows. 
Based on that statement I will not bother with that setting for now. Thanks again, Igal On 5/16/2015 8:31 PM, Maxim Dounin wrote: > Hello! > > On Fri, May 15, 2015 at 12:30:11PM -0700, Igal @ Lucee.org wrote: > >> hi, >> >> I know that in the past the directive: >> >> ssl_session_cache shared:SSL:1m; >> >> did not work on Windows. >> >> in nginx 1.9.0 I see "*) Feature: shared memory can now be used on >> Windows versions with address space layout randomization." >> >> does that mean that ssl_session_cache can be used on Windows now? if >> so, is 1m a good value? > Yes, now ssl_session_cache with shared cache can be used on all > versions of Windows. It's not really a big deal though, as > ssl_session_cache with builtin cache was available all the time, > and there is no serious difference as it's not possible to use > multiple worker processes on Windows. > > The 1m is expected to store about 4000 sessions, see > http://nginx.org/r/ssl_session_cache. This is enough assuming > less sessions are created during ssl_session_timeout (5m by > default). > From jujj603 at gmail.com Sun May 17 06:36:08 2015 From: jujj603 at gmail.com (J.J J) Date: Sun, 17 May 2015 14:36:08 +0800 Subject: Question about source code: Any need to call ngx_event_pipe_remove_shadow_links in ngx_event_pipe_read_upstream? Message-ID: Hi, all: nginx code version: 1.7.9 ( I have checked v1.9.0, no change about this) The bufs used to invoke ngx_event_pipe_remove_shadow_links in ngx_event_pipe_read_upstream come from p->preread_bufs or p->free_raw_bufs or new allocated buf. Both p->preread_bufs and new allocated buf have no shadow link. p->preread_bufs is inited to be NULL, and there are two places which will add free buffer into it: ngx_event_pipe_write_chain_to_temp_file and ngx_event_pipe_write_to_downstream. In both places, shadow links are cleared by ngx_event_pipe_add_free_buf or by ngx_event_pipe_remove_shadow_links. 
So, there is no need to call ngx_event_pipe_remove_shadow_links in ngx_event_pipe_read_upstream at all, for shadow link will always be NULL. Am I missing something ? Or just lack of code review ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From jujj603 at gmail.com Sun May 17 06:54:18 2015 From: jujj603 at gmail.com (J.J J) Date: Sun, 17 May 2015 14:54:18 +0800 Subject: notification on child process exit In-Reply-To: <9FB8528595F3BE4E9D4AAB664B7A500D16850D90@smtp_mail.bankofamerica.com> References: <20150514162144.GD98215@mdounin.ru> <9FB8528595F3BE4E9D4AAB664B7A500D16850D90@smtp_mail.bankofamerica.com> Message-ID: Signal handler is ngx_signal_handler which in worker process does nothing but log exit code and release possible holded mutexes, only master process reap child by setting ngx_reap to 1 and spawn it in ngx_reap_children if registed in ngx_processes(worker, loader, manager) So no, you can't, but you can implement this logic yourself. On Fri, May 15, 2015 at 9:22 PM, Nelson, Erik - 2 < erik.l.nelson at bankofamerica.com> wrote: > I fork + exec child processes and I would like to get some kind of > notification when they exit. However, the nginx signal handler reaps my > child processes and logs a message like > > 2015/05/14 14:12:24 [notice] 28033#0: signal 17 (SIGCHLD) received > 2015/05/14 14:12:24 [notice] 28033#0: unknown process 28044 exited with > code 1 > > Nginx knows that this process isn't one of its own children- is there some > way I can either register interest in my child process, or get an event > sent to me somehow on child exit? > > FWIW, I'm using nginx-1.7.7 with the ngx_lua extension on an older RedHat > server. 
> > Thanks > > Erik > > ---------------------------------------------------------------------- > This message, and any attachments, is for the intended recipient(s) only, > may contain information that is privileged, confidential and/or proprietary > and subject to important terms and conditions available at > http://www.bankofamerica.com/emaildisclaimer. If you are not the > intended recipient, please delete this message. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ??? ?? -------------- next part -------------- An HTML attachment was scrubbed... URL: From erik.l.nelson at bankofamerica.com Sun May 17 14:35:37 2015 From: erik.l.nelson at bankofamerica.com (Nelson, Erik - 2) Date: Sun, 17 May 2015 14:35:37 +0000 Subject: notification on child process exit In-Reply-To: References: <20150514162144.GD98215@mdounin.ru> <9FB8528595F3BE4E9D4AAB664B7A500D16850D90@smtp_mail.bankofamerica.com> Message-ID: <9FB8528595F3BE4E9D4AAB664B7A500D1685AF26@smtp_mail.bankofamerica.com> J.J J wrote on Sunday, May 17, 2015 2:54 AM >Signal handler is?ngx_signal_handler which in worker process does >nothing but log exit code and release possible holded mutexes, >only master process reap child by setting?ngx_reap to 1 and spawn >it in?ngx_reap_children if registed in?ngx_processes(worker, loader, manager) > >So no, you can't, but you can implement this logic yourself. Understood, thanks. I guess I'll just add a --no-reap-unknown command line argument to suppress reaping of unknown processes. >On Fri, May 15, 2015 at 9:22 PM, Nelson, Erik wrote: >>I fork + exec child processes and I would like to get some >>kind of notification when they exit.? 
However, the nginx >>signal handler reaps my child processes and logs a message like >> >>2015/05/14 14:12:24 [notice] 28033#0: signal 17 (SIGCHLD) received >>2015/05/14 14:12:24 [notice] 28033#0: unknown process 28044 exited with code 1 >> >>Nginx knows that this process isn't one of its own children- >>is there some way I can either register interest in my child process, >>or get an event sent to me somehow on child exit? ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. From nginx-forum at nginx.us Sun May 17 17:07:30 2015 From: nginx-forum at nginx.us (EvilMoe) Date: Sun, 17 May 2015 13:07:30 -0400 Subject: upstream redirect instead proxy_pass Message-ID: Hello, I would like to use Nginx as Load Balancer (traffic). My config is: upstream storages { least_conn; server str1 weight=1 max_fails=1 fail_timeout=10s; server str2 weight=1 max_fails=1 fail_timeout=10s; } server { listen 80; server_name verteilen; location / { proxy_pass http://storages; #return 302 $scheme://storages; } } How can I redirect to the server of upstream? With proxy_pass does it work but I want to move the traffic to several servers. I just need the "storages" variable. Sven Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258940,258940#msg-258940 From nginx-forum at nginx.us Sun May 17 23:32:17 2015 From: nginx-forum at nginx.us (gariac) Date: Sun, 17 May 2015 19:32:17 -0400 Subject: example.com is found, but not www.example.com Message-ID: <32fd9565241af3293c53f229f1483c75.NginxMailingListEnglish@forum.nginx.org> Well this looks so simple in the nginx manual. I have cleared the browser cache. so I am running out of simple idea. 
The domain is inplanesight.org. http://www.inplanesight.org will 404 http://inplanesight.org works fine Here is the server part of the nginx.conf file: --------------------------------------------------------------- server { listen 80; server_name inplanesight.org www.inplanesight.org; #charset koi8-r; #access_log logs/host.access.log main; access_log /var/log/nginx/access.log; root /usr/local/www/nginx; location / { try_files $uri $uri/ =404; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www/nginx-dist; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} location ~ \.php$ { try_files $uri =404; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $request_filename; include fastcgi_params; } # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } If it matters, I have two active server sections in the nginx.conf file. 
This is the start of the second section:
-----------------------------------------------
server {
    listen 80;
    server_name lazygranch.xyz www.lazygranch.xyz;
-----------------------------------

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258943,258943#msg-258943

From r at roze.lv Sun May 17 23:54:24 2015
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 18 May 2015 02:54:24 +0300
Subject: example.com is found, but not www.example.com
In-Reply-To: <32fd9565241af3293c53f229f1483c75.NginxMailingListEnglish@forum.nginx.org>
References: <32fd9565241af3293c53f229f1483c75.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

> Well this looks so simple in the nginx manual. I have cleared the browser
> cache. so I am running out of simple idea. The domain is inplanesight.org.
> http://www.inplanesight.org will 404
> http://inplanesight.org works fine

It's a DNS problem - www.inplanesight.org doesn't resolve:
http://www.dnswatch.info/dns/dnslookup?host=www.inplanesight.org

rr

From jujj603 at gmail.com Mon May 18 01:29:25 2015
From: jujj603 at gmail.com (J.J J)
Date: Mon, 18 May 2015 09:29:25 +0800
Subject: issues about nginx proxy_cache
In-Reply-To: 
References: 
Message-ID:

If you access it correctly, nginx will send the request to the backend and pass the response (which in your case is the 302 redirect link) back to the browser. Check that 302 redirect link: I think it is an address pointing to http://192.168.15.159:7090, not to nginx (port 8064/8065), and that's why the browser bypasses nginx. Also note that proxy_store does nothing further with the stored file; it's quite different from proxy_cache.

On Fri, May 15, 2015 at 4:38 PM, ???? <1070903212 at qq.com> wrote:
> Hi:
> Dear all
> It is a great pleasure to join the nginx mailing list, but I have met a
> problem when using nginx 1.7.9 as a reverse proxy server. More details
> follow:
> My design requirements are these:
> what I want is for nginx to download the files locally by parsing the
> HTTP 302 response code.
> But unfortunately, nginx transmits the 302 redirect link directly to my
> browser. When my browser receives the response, it downloads the files
> from the redirected link.
> That means the video file download does not go through nginx.
>
> For example:
>
> my-browser ----------> Server-A(nginx) ----------> Server-B(server with local file)
>                                                    Server-C(server with the video file)
> |<-------302+C-addr-------|<--------302 C-addr--------|
> |----------------------request video file------------------------------------------------->|
> |<-----------------------200 OK video file-----------------------------------------------|
>
> My problem is that Server-A doesn't cache the video file.
> I tried the two cache strategies below, but neither had any effect. How
> can I fix it?
>
> First I used proxy_store; nginx.conf is as follows:
> -----------------------------------------------------------
> events {
>     worker_connections 1024;
> }
> http {
>     include mime.types;
>     default_type application/octet-stream;
>     server {
>         listen 8065;
>         server_name localhost;
>         location / {
>             expires 3d;
>             proxy_set_header Accept-Encoding '';
>             root /home/mpeg/nginx;
>             proxy_store on;
>             proxy_store_access user:rw group:rw all:rw;
>             proxy_temp_path /home/mpeg/nginx;
>             if ( !-e $request_filename) {
>                 proxy_pass http://172.30.25.246:8099;
>             }
>         }
>         error_page 500 502 503 504 /50x.html;
>         location = /50x.html {
>             root html;
>         }
>     }
> }
> ------------------------------------------------------------
> And then I used proxy_cache; nginx.conf is as follows:
> ------------------------------------------------------------------------
> events {
>     worker_connections 1024;
> }
> http {
>     include mime.types;
>     default_type application/octet-stream;
>     client_body_buffer_size 512k;
>     proxy_connect_timeout 10;
>     proxy_read_timeout 180;
>     proxy_send_timeout 5;
>     proxy_buffer_size 16k;
>     proxy_buffers 4 64k;
>     proxy_busy_buffers_size 128k;
>     proxy_temp_file_write_size 128k;
>     proxy_temp_path /home/mpeg/cache/temp;
>     proxy_cache_path
/home/mpeg/cache levels=1:2 keys_zone=content:20m > inactive=1d max_size=100m; > server { > listen 8064; > server_name localhost; > location / { > proxy_cache content; > proxy_cache_valid 200 302 24h; > proxy_cache_valid any 1d; > proxy_set_header Host $host; > proxy_set_header X-Forwarded-For $remote_addr; > proxy_cache_key $host$uri$is_args$args; > proxy_pass http://192.168.15.159:7090; > } > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root html; > } > } > > } > --------------------------------------------------------------------------- > anything will be help , Thanks > > > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Mon May 18 01:57:52 2015 From: nginx-forum at nginx.us (winnall) Date: Sun, 17 May 2015 21:57:52 -0400 Subject: nginx and php5-fpm have stopped working Message-ID: <80bb1ddbaaff2089db73e2b6c5416217.NginxMailingListEnglish@forum.nginx.org> I am moving a Drupal 7 application on Ubuntu 14.04 from development to production. I use nginx (1.4.6-1ubuntu3.2) and php5-fpm (5.5.9+dfsg-1ubuntu4.9). The production machine is a VPS hosted at 1&1 and was running alright up until about 4 hours ago. Nginx had been giving some errors on startup: 2015/05/17 16:21:39 [info] 27859#0: Using 32768KiB of shared memory for push module in /etc/nginx/nginx.conf:85 2015/05/17 16:21:39 [alert] 27859#0: mmap(MAP_ANON|MAP_SHARED, 1048576) failed (12: Cannot allocate memory) but nginx and php5-fpm were working alright. Later, however, after I had uploaded some corrected theme data (basically CSS generated from SASS), nginx went down. Since then I have been unable to make nginx AND php5-fpm start. I can start one or the other. Unfortunately php5-fpm has not generated any error messages that I have been able to find. 
Rebooting doesn't solve the problem. Nginx, however, now generates the following error:

2015/05/17 23:40:40 [alert] 1559#0: mmap(MAP_ANON|MAP_SHARED, 33554432) failed (12: Cannot allocate memory)

I'm pretty sure I have enough memory. The command free -m gives:

             total       used       free     shared    buffers     cached
Mem:          8192        168       8023        126          0        149
-/+ buffers/cache:          19       8172
Swap:            0          0          0

I have seen other discussions with similar symptoms which seem to suggest that parameters of the VPS need to be adjusted by the provider. Is this such a case? If so, which parameters do I need to ask to have adjusted? I'm out of my depth on this, so I'd be grateful for any assistance anyone could offer.

Steve

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258946,258946#msg-258946

From sjums07 at gmail.com Mon May 18 06:07:31 2015
From: sjums07 at gmail.com (Nikolaj Schomacker)
Date: Mon, 18 May 2015 06:07:31 +0000
Subject: example.com is found, but not www.example.com
In-Reply-To: 
References: <32fd9565241af3293c53f229f1483c75.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

You need to point www.inplanesight.org to the same IP as inplanesight.org.

You can also make an A record in your DNS for *.inplanesight.org, which will act as a "catch all" for any subdomain.

On Mon, May 18, 2015 at 1:54 AM Reinis Rozitis wrote:

> > Well this looks so simple in the nginx manual. I have cleared the browser
> > cache. so I am running out of simple idea. The domain is inplanesight.org.
> > http://www.inplanesight.org will 404
> > http://inplanesight.org works fine
>
> It's a dns problem - www.inplanesight.org doesn't resolve
> http://www.dnswatch.info/dns/dnslookup?host=www.inplanesight.org
>
> rr
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Mon May 18 08:48:40 2015
From: nginx-forum at nginx.us (Arno0x0x)
Date: Mon, 18 May 2015 04:48:40 -0400
Subject: Satistfy any not working as expected
Message-ID:

Hi,

I'm facing an issue using the "satisfy any" directive. What I'm trying to achieve is quite simple:
- have an auth_request directive protecting the entire website (hence set at the server level in the config file)
- have no such authentication for the local network

I've put the following lines in my nginx config file, under the 'server' directive:

----------------------------
server {
    satisfy any;
    allow 192.168.0.0/24;
    deny all;

    auth_request /path/to/authRequestScript.php;
    [...]
}
----------------------------

Although that works well for the local network (i.e. no authentication required anymore), I get a "403 Forbidden" message when I'm connecting from the outside network, where I would expect the usual authentication mechanism to be triggered.

All the examples I found rely on the "location /" directive, but I'd like it to be at the server level.

What am I doing wrong?

Thanks for any help,

Arno

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258955,258955#msg-258955

From reallfqq-nginx at yahoo.fr Mon May 18 09:10:44 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 18 May 2015 11:10:44 +0200
Subject: example.com is found, but not www.example.com
In-Reply-To: 
References: <32fd9565241af3293c53f229f1483c75.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Catch-all records are not advised. Use a CNAME for the www subdomain pointing to the base domain, rather than another A record (easier maintenance... CNAMEs are here for a reason!).

Definitely not an nginx problem though...
---
*B. R.*

On Mon, May 18, 2015 at 8:07 AM, Nikolaj Schomacker wrote:

> You need to point www.inplanesight.org to the same IP as inplanesight.org.
>
> You can also make an A record in your DNS for *.inplanesight.org which
> will act as a "catch all" for any subdomain.
> > On Mon, May 18, 2015 at 1:54 AM Reinis Rozitis wrote: > >> > Well this looks so simple in the nginx manual. I have cleared the >> browser >> > cache. so I am running out of simple idea. The domain is >> inplanesight.org. >> > http://www.inplanesight.org will 404 >> > http://inplanesight.org works fine >> >> It's a dns problem - www.inplanesight.org doesn't resolve >> http://www.dnswatch.info/dns/dnslookup?host=www.inplanesight.org >> >> rr >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Mon May 18 10:38:42 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 18 May 2015 13:38:42 +0300 Subject: issues about nginx proxy_cache In-Reply-To: References: Message-ID: <25124690.79G240CHbO@vbart-workstation> On Friday 15 May 2015 16:38:36 ???? wrote: > Hi: > Dear all > It is very pleasure to join in nginx mail list, but exactly i met a problem When I use nginx1.7.9 as a reverse-proxy-server. more details as follows: > my design requirements are those: > what I want is that nginx download the files to local by parsing response-http-302-code . You should use X-Accel-Redirect instead. See: http://wiki.nginx.org/X-accel wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon May 18 12:31:12 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Mon, 18 May 2015 08:31:12 -0400 Subject: TCP Connection details Message-ID: <90b7d602f5a27dbec2cecad5d67f94e0.NginxMailingListEnglish@forum.nginx.org> I am using the Nginx as a reverse proxy and I want to find out the TCP connection information on both east and west bound connections. 
With the following params on the access log, I am able to get the info about the client TCP connection. Now, I want to find the RTT between Nginx and the backend webserver. Any idea how can I get this ? $tcpinfo_rtt, $tcpinfo_rttvar, $tcpinfo_snd_cwnd, $tcpinfo_rcv_space information about the client TCP connection; available on systems that support the TCP_INFO socket option Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258958,258958#msg-258958 From mdounin at mdounin.ru Mon May 18 12:44:44 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 18 May 2015 15:44:44 +0300 Subject: Satistfy any not working as expected In-Reply-To: References: Message-ID: <20150518124444.GC11860@mdounin.ru> Hello! On Mon, May 18, 2015 at 04:48:40AM -0400, Arno0x0x wrote: > Hi, > > I'm facing an issue using the "satisfy any" directive. What I'm trying to > achieve is quite simple: > - have an auth_request directive protecting the entire website (hence set at > the server level in the config file) > - have no such authentication for the local network > > I've put the following lines in my nginx config file, under the 'server' > directive: > > ---------------------------- > server { > > satisfy any; > allow 192.168.0.0/24; > deny all; > > auth_request /path/to/authRequestScript.php; > [...] > } > ---------------------------- > > Although that works well for the local network (ie: no authentication > required anymore), I get a "403 Forbidden" message when I'm connecting from > the outside network where I would expect the usual authentication mecanism > to be triggered. > > All the exemples I found rely on the "location /" directive, but I'd like it > to be at the server level. > > What am I doing wrong ? There is no real difference between configuring this at location or at server level - as long as requests to "/path/to/authRequestScript.php" are properly handled. 
In your case, "403 Forbidden" suggests they aren't handled properly - this may happen, e.g., because you incorrectly specified the URI (note that the parameter of auth_request is a URI, not a file path), or because the php script isn't properly run, or because the script itself does a wrong thing. The error log may have some details for you; try looking into it.

Note well that if you want "the usual authentication mechanism", then auth_request is probably not for you, and you should use auth_basic instead, see here:

http://nginx.org/en/docs/http/ngx_http_auth_basic_module.html

The auth request module is only needed when you want to code some custom authentication yourself.

--
Maxim Dounin
http://nginx.org/

From znzwmz at gmail.com Mon May 18 12:57:02 2015
From: znzwmz at gmail.com (=?gb2312?B?1cW64w==?=)
Date: Mon, 18 May 2015 20:58:02 +0800
Subject: get some errors when i install openresty
Message-ID: <6C66EDBD-435B-482D-95C1-3F4133EAF026@gmail.com>

some errors:

x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/042-crc32.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/107-timer-errors.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/123-lua-path.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/127-uthread-kill.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/098-uthread-wait.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/026-mysql.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/052-sub-dfa.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/086-init-by.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/070-sha1.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/044-req-body.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/006-escape.t
x ngx_openresty-1.7.10.1/bundle/ngx_lua-0.9.15/t/093-uthread-spawn.t: (Empty error message)
tar: Error exit delayed from previous errors.

Please help me.
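For what it's worth, "tar: Error exit delayed from previous errors" together with an "(Empty error message)" entry usually points to a truncated or corrupted download rather than an OpenResty bug. A minimal sketch of the kind of integrity check that can confirm this before extracting (it builds a throwaway archive so the example is self-contained; the actual tarball from this thread is not assumed to be present):

```shell
#!/bin/sh
# Build a small throwaway archive so the check below is self-contained.
tmpdir=$(mktemp -d)
echo "hello" > "$tmpdir/file.txt"
tar -czf "$tmpdir/ok.tar.gz" -C "$tmpdir" file.txt

# Simulate a truncated download by cutting the archive short.
head -c 20 "$tmpdir/ok.tar.gz" > "$tmpdir/truncated.tar.gz"

# 'tar -tzf' lists an archive without extracting it; a non-zero exit
# status means the archive is damaged and should be re-downloaded.
for f in ok.tar.gz truncated.tar.gz; do
    if tar -tzf "$tmpdir/$f" > /dev/null 2>&1; then
        echo "$f: OK"
    else
        echo "$f: corrupted - re-download it"
    fi
done

rm -rf "$tmpdir"
```

Running the same `tar -tzf` check against the downloaded ngx_openresty tarball would distinguish a bad download from a real build problem.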
From nginx-forum at nginx.us Mon May 18 14:43:55 2015
From: nginx-forum at nginx.us (winnall)
Date: Mon, 18 May 2015 10:43:55 -0400
Subject: nginx and php5-fpm have stopped working
In-Reply-To: <80bb1ddbaaff2089db73e2b6c5416217.NginxMailingListEnglish@forum.nginx.org>
References: <80bb1ddbaaff2089db73e2b6c5416217.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <3f49b5997875ab53c6315308be6afee3.NginxMailingListEnglish@forum.nginx.org>

I've got round this by removing nginx-extras. The error messages no longer appear in the logs and - more importantly - nginx and php5-fpm both now run together again. The website still doesn't work properly, but I think that's a Drupal problem.

Steve

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258946,258966#msg-258966

From mdounin at mdounin.ru Mon May 18 15:03:52 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 18 May 2015 18:03:52 +0300
Subject: upstream redirect instead proxy_pass
In-Reply-To: 
References: 
Message-ID: <20150518150352.GE11860@mdounin.ru>

Hello!

On Sun, May 17, 2015 at 01:07:30PM -0400, EvilMoe wrote:

> Hello,
>
> I would like to use Nginx as Load Balancer (traffic). My config is:
>
> upstream storages {
>     least_conn;
>     server str1 weight=1 max_fails=1 fail_timeout=10s;
>     server str2 weight=1 max_fails=1 fail_timeout=10s;
> }
>
> server {
>     listen 80;
>     server_name verteilen;
>     location / {
>         proxy_pass http://storages;
>         #return 302 $scheme://storages;
>     }
> }
>
> How can I redirect to the server of upstream? With proxy_pass does it work
> but I want to move the traffic to several servers.
> I just need the "storages" variable.

The "upstream" directive defines servers to proxy to - and things like the "least_conn" balancer in your config are not possible at all unless nginx proxies the connections.
If you want nginx to return redirects, you can use other mechanisms available, like split_clients:

http://nginx.org/en/docs/http/ngx_http_split_clients_module.html

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Mon May 18 19:11:08 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 18 May 2015 22:11:08 +0300
Subject: Question about source code: Any need to call ngx_event_pipe_remove_shadow_links in ngx_event_pipe_read_upstream?
In-Reply-To: 
References: 
Message-ID: <20150518191107.GF11860@mdounin.ru>

Hello!

On Sun, May 17, 2015 at 02:36:08PM +0800, J.J J wrote:

> Hi, all:
>
> nginx code version: 1.7.9 (I have checked v1.9.0, no change about this)
>
> The bufs used to invoke ngx_event_pipe_remove_shadow_links in
> ngx_event_pipe_read_upstream come from p->preread_bufs or p->free_raw_bufs
> or a newly allocated buf.
>
> Both p->preread_bufs and newly allocated bufs have no shadow link.
>
> p->preread_bufs is inited to be NULL, and there are two places which will
> add a free buffer into it: ngx_event_pipe_write_chain_to_temp_file and
> ngx_event_pipe_write_to_downstream.
>
> In both places, shadow links are cleared by ngx_event_pipe_add_free_buf or
> by ngx_event_pipe_remove_shadow_links.
>
> So, there is no need to call ngx_event_pipe_remove_shadow_links
> in ngx_event_pipe_read_upstream at all, for the shadow link will always be NULL.
>
> Am I missing something? Or is it just a lack of code review?

Most likely you are right and these calls are not needed. I came to a similar conclusion while looking into this code a while ago. The code to work with shadow links was introduced in ancient times, before nginx 0.1.0, and it needs some cleanup.
--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Mon May 18 22:26:19 2015
From: nginx-forum at nginx.us (gariac)
Date: Mon, 18 May 2015 18:26:19 -0400
Subject: example.com is found, but not www.example.com
In-Reply-To: 
References: 
Message-ID: <94b723451a003b673c0de21cde7a6469.NginxMailingListEnglish@forum.nginx.org>

Thanks all. I will not use the catch-all, but will enter www.example.com in the name server. I spent some time reading up on nginx to be, er, um, less stupid, but should have spent more time on DNS. ;-)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258943,258976#msg-258976

From emailbuilder88 at yahoo.com Tue May 19 03:07:25 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Mon, 18 May 2015 20:07:25 -0700
Subject: Case insensitive exact location match?
Message-ID: <1432004845.99455.YahooMailBasic@web142401.mail.bf1.yahoo.com>

I know how to do case-insensitive regex location matching. But it would be very useful if I could do the same with exact string matching, something like

location =* /test

so it would match "/test" as well as "/TEST".

Or is there some other way to convert the case of the request string without needing the regex engine?
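For reference, the closest stock-nginx equivalent goes through the regex engine that the question is trying to avoid: a case-insensitive regex location anchored at both ends behaves like an exact match. A minimal sketch (the /test URI is taken from the question above; the return is only for illustration):

```nginx
server {
    listen 8080;

    # "~*" selects a case-insensitive regex match; anchoring the pattern
    # with ^ and $ restricts it to exactly this URI, so /test, /TEST and
    # /TeSt all land here, but /test/foo does not.
    location ~* ^/test$ {
        return 200 "matched\n";
    }
}
```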
From nginx-forum at nginx.us Tue May 19 10:11:27 2015
From: nginx-forum at nginx.us (smsmaddy1981)
Date: Tue, 19 May 2015 06:11:27 -0400
Subject: DNS configuration to invoke complete URL
Message-ID: <9c755668b4549afefa942f2e9c8b86c6.NginxMailingListEnglish@forum.nginx.org>

Hi Talents,

Need your support immediately on the below requirement:

I have configured the DNS name "worsktream.com" to proxy pass to the complete URL "http://worsktream.com/workstream/agentLogin". While invoking the DNS name, the following message is thrown:

- error 414 Request-URI Too Large

And after configuring the following to fix the above issue:

syntax: large_client_header_buffers number size
default: large_client_header_buffers 4 4k/8k

the below issue is now reported:

- nginx accept() failed (24 too many open files)

I can see the logs piling up with many hits on the request URL. An indefinite loop is seen. Please suggest how to resolve this.

Best regards,
Maddy

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,258982#msg-258982

From znzwmz at gmail.com Tue May 19 10:08:45 2015
From: znzwmz at gmail.com (=?gb2312?B?1tzOrMP3?=)
Date: Tue, 19 May 2015 18:08:45 +0800
Subject: Get some errors when i install openresty on mac
Message-ID: <36AF3641-4745-4DF2-A95C-3C406F33EB18@gmail.com>

I get some errors when I install openresty on mac:

Please help me. Thank you in advance.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-1.tiff
Type: image/tiff
Size: 306626 bytes
Desc: not available
URL: 

From znzwmz at gmail.com Tue May 19 10:13:29 2015
From: znzwmz at gmail.com (=?gb2312?B?1tzOrMP3?=)
Date: Tue, 19 May 2015 18:13:29 +0800
Subject: Get some errors when i install openresty on mac
Message-ID: <34DBB241-A246-4CA8-8B0C-7647B793D88D@gmail.com>

I get some errors when I install openresty on mac, using the command:

tar xzvf ngx_openresty-1.7.10.1.tar.gz

Please help me.
Thank you in advance.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-1.tiff
Type: image/tiff
Size: 306626 bytes
Desc: not available
URL: 

From sb at nginx.com Tue May 19 11:58:08 2015
From: sb at nginx.com (Sergey Budnevitch)
Date: Tue, 19 May 2015 14:58:08 +0300
Subject: Official packages v1.8.0 do NOT include the GeoIP module
In-Reply-To: 
References: <89C5706A-38D7-4005-98C7-F8630D72D617@nginx.com>
Message-ID: <524E281C-AE55-4F83-B626-00C40A9768EC@nginx.com>

> On 12 May 2015, at 14:20, B.R. wrote:
>
> Thanks Sergey,
>
> I naively thought every module documented on the official website was included.
> There was/is no clear documentation about what is (not) included in the official binary, helping people to decide whether it is feasible to switch from custom builds to official ones.
>
> Would it be possible to publish your short and efficient answer somewhere on the download page for official packages?

I just added the configure arguments list and the module policy to the package download page.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr Tue May 19 13:13:12 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 19 May 2015 15:13:12 +0200
Subject: Official packages v1.8.0 do NOT include the GeoIP module
In-Reply-To: <524E281C-AE55-4F83-B626-00C40A9768EC@nginx.com>
References: <89C5706A-38D7-4005-98C7-F8630D72D617@nginx.com> <524E281C-AE55-4F83-B626-00C40A9768EC@nginx.com>
Message-ID:

Thank you Sergey!
---
*B. R.*

On Tue, May 19, 2015 at 1:58 PM, Sergey Budnevitch wrote:

> On 12 May 2015, at 14:20, B.R. wrote:
>
> Thanks Sergey,
>
> I naively thought every module documented on the official website
> was included.
> There was/is no clear documentation about what is (not) included in the
> official binary, helping people to decide whether it is feasible to switch
> from custom builds to official ones.
>
> Would it be possible to publish your short and efficient answer somewhere
> on the download page for official packages?
>
> I just added the configure arguments list and the module policy to the
> package download page.
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Tue May 19 16:31:35 2015
From: nginx-forum at nginx.us (EvilMoe)
Date: Tue, 19 May 2015 12:31:35 -0400
Subject: upstream redirect instead proxy_pass
In-Reply-To: <20150518150352.GE11860@mdounin.ru>
References: <20150518150352.GE11860@mdounin.ru>
Message-ID:

But split_clients has no health check, right? Would it be possible to check the servers too, like with upstream?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258940,258991#msg-258991

From nginx-forum at nginx.us Tue May 19 18:20:39 2015
From: nginx-forum at nginx.us (Arno0x0x)
Date: Tue, 19 May 2015 14:20:39 -0400
Subject: Satistfy any not working as expected
In-Reply-To: <20150518124444.GC11860@mdounin.ru>
References: <20150518124444.GC11860@mdounin.ru>
Message-ID:

Hi Maxim,

Thanks for your answer. I'm actually using a proper URI in the auth_request parameter and the PHP script works fine (https://github.com/Arno0x/TwoFactorAuth); my example was dumb.

For the record, here's what I did to make it work exactly as I expect: simply remove the "deny all;" statement.

As a result:
- Any local network IP gets straight access
- Any other IP has to go through the auth_request

This makes sense to me, as a "satisfy any" coupled with a "deny all;" would always match "all" and refuse access.
Not sure why all the configuration examples we can find on the web mention the "deny all;" statement, but it fails for me.

By the way, many thanks for all the work done on nginx!

Cheers,
Arno

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258955,258993#msg-258993

From mdounin at mdounin.ru Tue May 19 19:16:45 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 19 May 2015 22:16:45 +0300
Subject: Satistfy any not working as expected
In-Reply-To: 
References: <20150518124444.GC11860@mdounin.ru>
Message-ID: <20150519191645.GJ11860@mdounin.ru>

Hello!

On Tue, May 19, 2015 at 02:20:39PM -0400, Arno0x0x wrote:

> Hi Maxim,
>
> Thanks for your answer. I'm actually using a proper URI in the auth_request
> parameter and the PHP script works fine
> (https://github.com/Arno0x/TwoFactorAuth), my example was dumb.
>
> For the records, here's what I did to make it work exactly as I expect:
> simply remove the "deny all;" statement.
>
> As a result :
> - Any local network IP gets a straight access
> - Any other IP has to go through the auth_request
>
> This makes sense to me as a "satisfy any" coupled with a "deny all;" would
> always match "all" and refuse access.
>
> Not sure why all configuration examples we can find on the web mention the
> "deny all;" statement, but this fails for me.

The "deny all;" statement shouldn't change anything. With "satisfy any;" access is allowed as long as one of the modules allows access, and restrictions imposed by other modules are ignored.

The idea is that you configure several independent access checks and then combine them: either with AND ("satisfy all", all checks have to succeed) or with OR ("satisfy any", any successful check is sufficient).

Simple config for testing:

    server {
        listen 8080;

        satisfy any;
        deny all;
        auth_request /auth;

        location / {
            # index.html expected under root
        }

        location = /auth {
            return 204;
        }
    }

If removing "deny all;" works for you, it means that you are testing something wrong.
In particular, make sure that the config you are testing is actually loaded, that it does contain "satisfy any", and that it's not overwritten somewhere in a location.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Tue May 19 20:06:01 2015
From: nginx-forum at nginx.us (ManuelRighi)
Date: Tue, 19 May 2015 16:06:01 -0400
Subject: swf problem with reverse proxy
Message-ID: <6f1e7ae8f2d5ff757fc4987b4923b1f8.NginxMailingListEnglish@forum.nginx.org>

Hello,

I have an nginx reverse proxy in front of an nginx web server. I have a simple Flash site, with an swf embedded in an html page. I have a problem with this page: the swf starts loading but never finishes. If I point the site directly at the web server, the swf works correctly.

Can you help me? My reverse proxy settings are these:

location / {
    proxy_pass http://Backend;
    proxy_http_version 1.1;
    proxy_cache STATIC;
    proxy_set_header Host $host;
    proxy_set_header Accept-Encoding "";
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    add_header X-Cache-Status $upstream_cache_status;
    proxy_ignore_headers Cache-Control;
    proxy_no_cache $http_pragma $http_authorization $cookie_nocache $arg_nocache;
    proxy_cache_bypass $http_pragma $http_authorization $cookie_nocache $arg_nocache;
    proxy_cache_valid 403 1m;
    proxy_cache_use_stale error timeout invalid_header updating http_500 http_502 http_503 http_504;
    proxy_cache_methods HEAD GET;
    proxy_cache_key $host$request_uri$cookie_user;
}

Tnx
Manuel

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258996,258996#msg-258996

From best1yash at gmail.com Wed May 20 06:46:13 2015
From: best1yash at gmail.com (Yash Shrivastava)
Date: Wed, 20 May 2015 12:16:13 +0530
Subject: Using diameter aaa protocol with nginx
Message-ID:

Hi,

I want to set up an AAA layer over the nginx server (for approximately 2000 users at a time). I looked at the available options and found Diameter and TACACS+ to be the best. I found there is a PAM module/library for implementing TACACS+ with nginx.
Is there a similar module for Diameter, as it is more suitable for my requirements? If not, what is the procedure to integrate TACACS+ with nginx? I am using Ubuntu 14.04.

--
Yash Shrivastava
Second Year Undergraduate Student
Department of Computer Science and Engineering
IIT Kharagpur

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Wed May 20 08:42:20 2015
From: nginx-forum at nginx.us (cyrille)
Date: Wed, 20 May 2015 04:42:20 -0400
Subject: Allow all Access-Control-Allow-Headers
Message-ID: <754aa68c6419f87a20c1f5733edf0fc0.NginxMailingListEnglish@forum.nginx.org>

Hi,

I have an nginx in front of many different web applications. At this time I have a generic configuration for all applications. Now I need to allow all Access-Control-Allow-Headers, but I did not find how to do this.

One of the web applications behind my nginx reverse proxy sets custom headers. For now I created a specific nginx configuration file for this application with the configuration:

if ($request_method = 'OPTIONS') {
    add_header 'Access-Control-Allow-Origin' '*';
    add_header 'Access-Control-Allow-Credentials' 'true';
    add_header 'Access-Control-Allow-Methods' 'GET, POST, OPTIONS';
    add_header 'Access-Control-Allow-Headers' 'traceId,channel,callerId,version,type';
    add_header 'Access-Control-Max-Age' 1728000;
    add_header 'Content-Type' 'text/plain charset=UTF-8';
    add_header 'Content-Length' 0;
    return 204;
}

But I don't like this solution, because I need to have a generic configuration for all web applications behind the nginx. Is it possible to allow all "Access-Control-Allow-Headers" for all requests?

Thanks for your reply.
Regards.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258998,258998#msg-258998

From jujj603 at gmail.com Wed May 20 08:46:58 2015
From: jujj603 at gmail.com (J.J J)
Date: Wed, 20 May 2015 16:46:58 +0800
Subject: Case insensitive exact location match?
In-Reply-To: <1432004845.99455.YahooMailBasic@web142401.mail.bf1.yahoo.com>
References: <1432004845.99455.YahooMailBasic@web142401.mail.bf1.yahoo.com>
Message-ID:

No, exact and prefix matches are case sensitive.

You can write it as two locations:

location = /test
location = /TEST

Personally, I think it's bad practice to differentiate requests by the case of the URI.

On Tue, May 19, 2015 at 11:07 AM, E.B. wrote:

> I know how to do case insensitive regex location matching. But it would be
> very useful if I could do same with exact string matching, something like
>
> location =* /test
>
> So it would matching "/test" as well as "/TEST"
>
> Or some other way to convert case of the request string without
> needing for regex engine?
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Wed May 20 10:03:16 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Wed, 20 May 2015 06:03:16 -0400
Subject: Case insensitive exact location match?
In-Reply-To: 
References: 
Message-ID:

J.J J Wrote:
-------------------------------------------------------
> No, exact and inclusive match are case sensitive.
>
> You can write it in two location :
> location = /test
> location = /TEST
>
> Personally, I think it's a bad practice to differentiate requests by
> the case(ness) of URI.

Hmm, so you also need to add

location = /Test
location = /TEst
location = /TESt
........

It should be a server/location block config item.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258977,259000#msg-259000

From reallfqq-nginx at yahoo.fr Wed May 20 10:35:13 2015
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Wed, 20 May 2015 12:35:13 +0200
Subject: Case insensitive exact location match?
In-Reply-To: 
References: 
Message-ID:

2^4 = all 16 combinations.
Hopefully the 'test' string is only 4 characters long...

Case-insensitiveness is a non-trivial check, and if needed, PCRE is there for that through regex locations. What is wrong with them already?
---
*B. R.*

On Wed, May 20, 2015 at 12:03 PM, itpp2012 wrote:
> J.J J Wrote:
> -------------------------------------------------------
> > No, exact and inclusive matches are case sensitive.
> >
> > You can write it as two locations:
> > location = /test
> > location = /TEST
> >
> > Personally, I think it's a bad practice to differentiate requests by
> > the case(ness) of the URI.
>
> Hmm, so you also need to add
>
> location = /Test
> location = /TEst
> location = /TESt
> ........
>
> It should be a server/location block config item.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,258977,259000#msg-259000
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at nginx.us Wed May 20 11:03:25 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Wed, 20 May 2015 07:03:25 -0400
Subject: Case insensitive exact location match?
In-Reply-To: 
References: 
Message-ID: <220e9c84ab5a7d3595c5d6b5389b6172.NginxMailingListEnglish@forum.nginx.org>

B.R. Wrote:
-------------------------------------------------------
> 2^4 = all 16 combinations. Hopefully the 'test' string is only 4
> characters long...
>
> Case-insensitiveness is a non-trivial check, and if needed, PCRE is
> there for that through regex locations.
> What is wrong with them already?

At the assembly level a lot: +-20 bytes including an if, or a lot more when it has to go through PCRE.
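[Editorial note: for reference, the case permutation problem discussed above can be sidestepped with a single case-insensitive map lookup instead of one regex location per spelling. This is an untested sketch; $is_test and the /test URI are made-up names, and map itself still uses PCRE for the ~* pattern, so it does not avoid the regex engine entirely.]

```nginx
# http{} level: one case-insensitive regex decides the flag.
map $uri $is_test {
    default    0;
    ~*^/test$  1;
}

server {
    listen 80;

    location / {
        # Branch on the precomputed flag instead of listing
        # every case permutation as its own exact location.
        if ($is_test) {
            return 200 "matched /test in any case\n";
        }
        return 404;
    }
}
```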
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258977,259005#msg-259005

From nginx-forum at nginx.us Wed May 20 14:33:40 2015
From: nginx-forum at nginx.us (zilog80)
Date: Wed, 20 May 2015 10:33:40 -0400
Subject: long wait configtest
In-Reply-To: <6597777.HjAC5QEC5v@vbart-workstation>
References: <6597777.HjAC5QEC5v@vbart-workstation>
Message-ID: <72d1db7dc233819103a46c23413f549c.NginxMailingListEnglish@forum.nginx.org>

Thank you Valentin! Today I re-tested the command and waited for 3 minutes before it finished. I'm surprised: I use Google DNS, which is usually very fast, so I don't know what is wrong.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258721,259015#msg-259015

From nginx-forum at nginx.us Wed May 20 17:03:19 2015
From: nginx-forum at nginx.us (donatasm)
Date: Wed, 20 May 2015 13:03:19 -0400
Subject: Simple timeout module
Message-ID: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>

I'm trying to build a simple timeout module using nginx timers. At the beginning of a request I fire up a timer, and after the time interval elapses I want to check whether the request has already completed and, if not, finalize it, for example with NGX_HTTP_REQUEST_TIME_OUT. I have created a filter module.
I'm creating a timer in the header filter:

    static ngx_int_t simple_timeout_filter_headers(ngx_http_request_t* request)
    {
        ngx_event_t* timeout_event;

        timeout_event = ngx_pcalloc(request->pool, sizeof(ngx_event_t));
        if (timeout_event == NULL)
        {
            return NGX_ERROR;
        }

        timeout_event->handler = simple_timeout_handler;
        timeout_event->data = request;
        timeout_event->log = request->connection->log;

        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, request->connection->log, 0,
                       "SIMPLE TIMEOUT TIMER START");

        ngx_add_timer(timeout_event, 3000); /* wait for 3 seconds */

        return next_header_filter(request);
    }

The simple timeout handler looks like this:

    static void simple_timeout_handler(ngx_event_t* timeout_event)
    {
        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, timeout_event->log, 0,
                       "SIMPLE TIMEOUT TIMER END");
    }

And it works if I issue a request and wait for the timer to fire. If I issue several requests while the previous timer is already in progress, I get a SEGFAULT.

The SEGFAULT happens here, while inserting a node into the rbtree:

    ngx_rbtree_insert() at ngx_rbtree.c:32 0x40d3c0
    ngx_event_add_timer() at ngx_event_timer.h:84 0x42fc32
    ngx_http_init_connection() at ngx_http_request.c:363 0x42fc32
    ngx_event_accept() at ngx_event_accept.c:360 0x41c9ec
    ngx_epoll_process_events() at ngx_epoll_module.c:822 0x424a00
    ngx_process_events_and_timers() at ngx_event.c:248 0x41bb2c
    ngx_single_process_cycle() at ngx_process_cycle.c:308 0x423c8b
    main() at nginx.c:416 0x403bc6

So what am I missing in my simple_timeout_filter_headers method?
Nginx version 1.8

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259019,259019#msg-259019

From nginx-forum at nginx.us Wed May 20 17:12:26 2015
From: nginx-forum at nginx.us (Arno0x0x)
Date: Wed, 20 May 2015 13:12:26 -0400
Subject: Satistfy any not working as expected
In-Reply-To: <20150519191645.GJ11860@mdounin.ru>
References: <20150519191645.GJ11860@mdounin.ru>
Message-ID: <9129b5ffcf1a53b601de0a96456a44cd.NginxMailingListEnglish@forum.nginx.org>

Hi Maxim,

Thanks again for your explanations, they make sense. So I've put back the "deny all;" statement, and I get the 403 Forbidden message back. There's indeed some good indication in the error log, showing that my auth_request script does its job, and then the login page returns the 403 status code.

So I added an "allow all;" statement just on the login page, which is the only one that needs to be reachable in any case.

Let me paste a more realistic and complete example of my config (I hid some personal stuff); I hope this one makes sense:

--------------------------------------
server {
    listen 443;
    server_name hidden;

    ssl on;
    ssl_certificate /hidden;
    ssl_certificate_key /hidden;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers 'AES256+EECDH:AES256+EDH';
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    root /var/www/hidden;
    index index.php index.html index.htm;

    satisfy any;
    allow 192.168.0.0/24;
    deny all;
    auth_request /twofactorauth/nginx/auth.php;

    error_page 401 = @error401;

    location @error401 {
        return 302 $scheme://$host/twofactorauth/login/login.php?from=$uri;
    }

    location / {
        try_files $uri $uri/ /index.html;
    }

    location = /twofactorauth/nginx/auth.php {
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi.conf;
        fastcgi_param CONTENT_LENGTH "";
    }

    location = /twofactorauth/login/login.php {
        allow all;
        auth_request off;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        include fastcgi.conf;
    }

    [...]
}
--------------------------------------

See the "allow all;" statement under the login.php location?
This makes everything work as I expect, and I hope it makes sense.

Thanks and kind regards,
Arno

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258955,259020#msg-259020

From mdounin at mdounin.ru Wed May 20 18:36:29 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 May 2015 21:36:29 +0300
Subject: Simple timeout module
In-Reply-To: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>
References: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150520183629.GO11860@mdounin.ru>

Hello!

On Wed, May 20, 2015 at 01:03:19PM -0400, donatasm wrote:

> I'm trying to build a simple timeout module using nginx timers. At the
> beginning of a request I fire up a timer, and after the time interval elapses
> I want to check whether the request has already completed and, if not,
> finalize it, for example with NGX_HTTP_REQUEST_TIME_OUT. I have created a
> filter module. I'm creating a timer in the header filter:
>
> static ngx_int_t simple_timeout_filter_headers(ngx_http_request_t* request)
> {
>     ngx_event_t* timeout_event;
>
>     timeout_event = ngx_pcalloc(request->pool, sizeof(ngx_event_t));
>     if (timeout_event == NULL)
>     {
>         return NGX_ERROR;
>     }
>
>     timeout_event->handler = simple_timeout_handler;
>     timeout_event->data = request;
>     timeout_event->log = request->connection->log;
>
>     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, request->connection->log, 0,
>                    "SIMPLE TIMEOUT TIMER START");
>
>     ngx_add_timer(timeout_event, 3000); /* wait for 3 seconds */
>
>     return next_header_filter(request);
> }
>
> The simple timeout handler looks like this:
>
> static void simple_timeout_handler(ngx_event_t* timeout_event)
> {
>     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, timeout_event->log, 0,
>                    "SIMPLE TIMEOUT TIMER END");
> }
>
> And it works if I issue a request and wait for the timer to fire. If I issue
> several requests while the previous timer is already in progress, I get a
> SEGFAULT.
At least you don't remove the timer if the request completes before the timer is triggered. This is enough to trigger a segmentation fault.

-- 
Maxim Dounin
http://nginx.org/

From vbart at nginx.com Wed May 20 18:37:28 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Wed, 20 May 2015 21:37:28 +0300
Subject: Simple timeout module
In-Reply-To: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>
References: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5824780.LXEUCf0nnq@vbart-workstation>

On Wednesday 20 May 2015 13:03:19 donatasm wrote:
> I'm trying to build a simple timeout module using nginx timers. At the
> beginning of a request I fire up a timer, and after the time interval elapses
> I want to check whether the request has already completed and, if not,
> finalize it, for example with NGX_HTTP_REQUEST_TIME_OUT. I have created a
> filter module. I'm creating a timer in the header filter:
>
> static ngx_int_t simple_timeout_filter_headers(ngx_http_request_t* request)
> {
>     ngx_event_t* timeout_event;
>
>     timeout_event = ngx_pcalloc(request->pool, sizeof(ngx_event_t));
>     if (timeout_event == NULL)
>     {
>         return NGX_ERROR;
>     }
>
>     timeout_event->handler = simple_timeout_handler;
>     timeout_event->data = request;
>     timeout_event->log = request->connection->log;
>
>     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, request->connection->log, 0,
>                    "SIMPLE TIMEOUT TIMER START");
>
>     ngx_add_timer(timeout_event, 3000); /* wait for 3 seconds */
>
>     return next_header_filter(request);
> }
>
> The simple timeout handler looks like this:
>
> static void simple_timeout_handler(ngx_event_t* timeout_event)
> {
>     ngx_log_debug0(NGX_LOG_DEBUG_HTTP, timeout_event->log, 0,
>                    "SIMPLE TIMEOUT TIMER END");
> }
>
> And it works if I issue a request and wait for the timer to fire. If I issue
> several requests while the previous timer is already in progress, I get a
> SEGFAULT.
> The SEGFAULT happens here, while inserting a node into the rbtree:
>
> ngx_rbtree_insert() at ngx_rbtree.c:32 0x40d3c0
> ngx_event_add_timer() at ngx_event_timer.h:84 0x42fc32
> ngx_http_init_connection() at ngx_http_request.c:363 0x42fc32
> ngx_event_accept() at ngx_event_accept.c:360 0x41c9ec
> ngx_epoll_process_events() at ngx_epoll_module.c:822 0x424a00
> ngx_process_events_and_timers() at ngx_event.c:248 0x41bb2c
> ngx_single_process_cycle() at ngx_process_cycle.c:308 0x423c8b
> main() at nginx.c:416 0x403bc6
>
> So what am I missing in my simple_timeout_filter_headers method?
>
> Nginx version 1.8

You have allocated the timer from the request memory pool. After the request is completed, the pool is freed, but your timer is still in the tree. You should clean up your timer.

wbr, Valentin V. Bartenev

From mdounin at mdounin.ru Wed May 20 18:43:54 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 20 May 2015 21:43:54 +0300
Subject: Satistfy any not working as expected
In-Reply-To: <9129b5ffcf1a53b601de0a96456a44cd.NginxMailingListEnglish@forum.nginx.org>
References: <20150519191645.GJ11860@mdounin.ru> <9129b5ffcf1a53b601de0a96456a44cd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150520184354.GP11860@mdounin.ru>

Hello!

On Wed, May 20, 2015 at 01:12:26PM -0400, Arno0x0x wrote:

> Thanks again for your explanations, they make sense. So I've put back the
> "deny all;" statement, and I get the 403 Forbidden message back. There's
> indeed some good indication in the error log, showing that my auth_request
> script does its job, and then the login page returns the 403 status code.
>
> So I added an "allow all;" statement just on the login page, which is the
> only one that needs to be reachable in any case.
>
> Let me paste a more realistic and complete example of my config (I hid some
> personal stuff); I hope this one makes sense:

[...]

> location = /twofactorauth/login/login.php {
>     allow all;
>     auth_request off;

[...]
> See the "allow all;" statement under the login.php location? This makes
> everything work as I expect, and I hope it makes sense.

Yes, this looks correct. Obviously enough you shouldn't restrict access to the login page, and the 403 is perfectly explained by the fact that previously it was restricted due to "deny all;" at the server{} level.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed May 20 19:00:45 2015
From: nginx-forum at nginx.us (Arno0x0x)
Date: Wed, 20 May 2015 15:00:45 -0400
Subject: Satistfy any not working as expected
In-Reply-To: <20150520184354.GP11860@mdounin.ru>
References: <20150520184354.GP11860@mdounin.ru>
Message-ID: <238773ef5687d66ef082b42bd70995c7.NginxMailingListEnglish@forum.nginx.org>

Thank you!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258955,259026#msg-259026

From florin at andrei.myip.org Wed May 20 20:57:22 2015
From: florin at andrei.myip.org (Florin Andrei)
Date: Wed, 20 May 2015 13:57:22 -0700
Subject: is it possible to use multiple sub_filter in one location?
Message-ID: 

Trying to do this:

    location /whatever/ {
        proxy_buffering off;
        proxy_pass http://11.22.33.44:5555;
        sub_filter '"df":"https://df-foo"' '"df":"https://df-bar"';
        sub_filter '"pr":"https://pr-foo"' '"pr":"https://pr-bar"';
        sub_filter_once off;
        sub_filter_types *;
    }

But I'm getting this:

    nginx: [emerg] "sub_filter" directive is duplicate in ...

How do I replace multiple things in one location? Thanks.

-- 
Florin Andrei
http://florin.myip.org/

From emailbuilder88 at yahoo.com Wed May 20 23:48:58 2015
From: emailbuilder88 at yahoo.com (E.B.)
Date: Wed, 20 May 2015 16:48:58 -0700
Subject: Case insensitive exact location match?
In-Reply-To: <220e9c84ab5a7d3595c5d6b5389b6172.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1432165738.43260.YahooMailBasic@web142402.mail.bf1.yahoo.com>

> > 2^4 = all 16 combinations. Hopefully the 'test' string is only 4
> > characters long...
> > Case-insensitiveness is a non-trivial check, and if needed, PCRE is
> > there for that through regex locations.
> > What is wrong with them already?
>
> At the assembly level a lot: +-20 bytes including an if, or a lot more
> when it has to go through PCRE.

Exactly. The PCRE engine adds overhead and is slower than just case folding. Plus, the nginx location directive adds more overhead I'd like to avoid because of its processing order. I know the exact text of what I want to match, so = is the best choice. But I can't control the case of what clients use. The answers seem to indicate nginx has no such feature.

From florin at andrei.myip.org Thu May 21 01:51:47 2015
From: florin at andrei.myip.org (Florin Andrei)
Date: Wed, 20 May 2015 18:51:47 -0700
Subject: is it possible to use multiple sub_filter in one location?
In-Reply-To: 
References: 
Message-ID: 

I've solved it by recompiling nginx to include the nginx_substitutions_filter module:

https://github.com/yaoweibin/ngx_http_substitutions_filter_module

This module allows multiple subs_filter statements per location. Seems to work just fine in my tests.

-- 
Florin Andrei
http://florin.myip.org/

From jeppe at falconsocial.com Thu May 21 08:21:13 2015
From: jeppe at falconsocial.com (Jeppe Toustrup)
Date: Thu, 21 May 2015 10:21:13 +0200
Subject: Stable Ubuntu PPA
Message-ID: 

Hi

I have recently upgraded our fleet of Ubuntu servers to nginx 1.8.0 via the nginx/stable PPA[1]. After the upgrade we are seeing two - probably related - problems. They are:

1. When the log files are rotated by logrotate(8), nginx doesn't start writing to the new log files until it's reloaded, restarted, or has been "/etc/init.d/nginx rotate"'ed.
2. The ulimit setting for the number of open files specified inside /etc/default/nginx no longer takes effect.

The first bug I debugged down to the "invoke-rc.d nginx rotate >/dev/null 2>&1" command being run after the log is rotated.
I tried running the command manually, and the output doesn't look very promising:

    $ sudo invoke-rc.d nginx rotate
    initctl: invalid command: rotate
    Try `initctl --help' for more information.
    invoke-rc.d: initscript nginx, action "rotate" failed.

I worked around that issue by replacing "invoke-rc.d" inside /etc/logrotate.d/nginx with "service", and since then the logs have been rotated correctly.

When debugging the second problem, I found out that the 1.8.0 version of the nginx-common package - besides the init.d script - also includes an Upstart configuration. I don't know why both types of service scripts are included in the package, because it introduces a couple of problems:

- On boot-up it's a race between whether nginx is started by Upstart or by the init.d script - Upstart seems to win most of the time from what I have seen.
- The Upstart configuration doesn't support the "rotate" call, which causes invoke-rc.d to fail in the log rotation configuration, since it prefers to call an Upstart service rather than an init.d script.
- /etc/default/nginx is no longer used by the Upstart configuration, possibly causing big problems for people (like me) who had to increase the limit on the number of file descriptors nginx is allowed to have open.

As a test I started up an EC2 machine based on an image which has the nginx-common 1.6.3-1+trusty0 package installed. The machine only has an init.d script for nginx, and thus nginx is started through that. I then upgraded nginx on the machine to nginx-common 1.8.0-1+trusty1. After the upgrade nginx was still started through the init.d script, and thus any changes in /etc/default/nginx would have been applied. Log rotation will however not work at this stage, since the Upstart script means invoke-rc.d in the log rotation configuration will fail to send a USR1 signal to nginx.
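[Editorial note: the log reopening described above only requires a USR1 signal delivered to the nginx master process, so a logrotate stanza can signal nginx directly and bypass both invoke-rc.d and the Upstart/init.d ambiguity. A hedged sketch - the log and pid paths are assumed from stock packaging defaults and may differ on a given system:]

```
/var/log/nginx/*.log {
    weekly
    rotate 52
    missingok
    compress
    delaycompress
    postrotate
        # Ask the nginx master to reopen its log files (USR1),
        # independent of whether Upstart or init.d started it.
        [ -f /var/run/nginx.pid ] && kill -USR1 "$(cat /var/run/nginx.pid)"
    endscript
}
```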
The frightening thing is that when the machine is rebooted and nginx is next started by Upstart, the changes made to /etc/default/nginx no longer apply, but this can be difficult to spot as nginx otherwise starts up normally.

I think this is a pretty bad problem and something that should be fixed sooner rather than later. I don't really care if we go for Upstart or init.d, as long as there's only one way to do it and there aren't files included with the package that may lead you to think it works in another way.

[1]: https://launchpad.net/~nginx/+archive/ubuntu/stable

-- 
Jeppe Toustrup

From nginx-forum at nginx.us Thu May 21 08:41:29 2015
From: nginx-forum at nginx.us (donatasm)
Date: Thu, 21 May 2015 04:41:29 -0400
Subject: Simple timeout module
In-Reply-To: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>
References: <7b7f5b713d97001dbe0455f3e3f4a2f3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <566de3c63ec4cbf2dbe924454449bfa4.NginxMailingListEnglish@forum.nginx.org>

Thanks, it worked once I added an ngx_del_timer call in a ngx_pool_cleanup_t handler of the request.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259019,259034#msg-259034

From tanfenfen at banggood.cn Thu May 21 12:16:18 2015
From: tanfenfen at banggood.cn (=?utf-8?B?6LCt6Iqs6Iqs?=)
Date: Thu, 21 May 2015 20:16:18 +0800
Subject: a problem while using nginx1.4.6 and nginx1.8.0 for web service
Message-ID: 

Hello,

This is Amily. I am running into a problem while using nginx 1.4.6 and nginx 1.8.0 for a web service.
The following content shows up in /var/log/messages:

    May 21 01:55:51 web3 kernel: nginx[16176] general protection ip:3acd289713 sp:7fff98d6aba8 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 01:56:01 web3 kernel: nginx[20064] general protection ip:3acd289713 sp:7fff98d6ac08 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 01:56:02 web3 kernel: nginx[16174] general protection ip:3acd289713 sp:7fff98d6aba8 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 01:56:07 web3 kernel: nginx[16195] general protection ip:3acd279753 sp:7fff98d6ac20 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 01:56:07 web3 kernel: nginx[5326] general protection ip:3acd289713 sp:7fff98d6ac08 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 02:49:41 web3 kernel: nginx[16192] general protection ip:4129bc sp:7fff98d6ada0 error:0 in nginx[400000+9c000]
    May 21 02:49:42 web3 kernel: nginx[16188]: segfault at 0 ip 0000003acd27b55c sp 00007fff98d69d88 error 4 in libc-2.12.so[3acd200000+18a000]
    May 21 02:49:48 web3 kernel: nginx[712] general protection ip:3acd289713 sp:7fff98d6ac08 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 02:50:19 web3 kernel: nginx[2847] general protection ip:3acd289713 sp:7fff98d6ac08 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 02:50:19 web3 kernel: nginx[2872] general protection ip:3acd279753 sp:7fff98d6ac80 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 02:50:34 web3 kernel: nginx[3494] general protection ip:4129bc sp:7fff98d6adc0 error:0 in nginx[400000+9c000]
    May 21 02:50:34 web3 kernel: nginx[18681] general protection ip:3acd289713 sp:7fff98d6ac08 error:0 in libc-2.12.so[3acd200000+18a000]
    May 21 04:47:25 web3 kernel: nginx[10306] general protection ip:4129bc sp:7fff98d6ada0 error:0 in nginx[400000+9c000]

And the following is the error shown in nginx's error log:

    *** glibc detected *** nginx: worker process: free(): invalid next size (normal): 0x0000000000c9e9d0 ***
    *** glibc detected *** nginx: worker process: munmap_chunk(): invalid pointer: 0x0000000001568f80 ***
    *** glibc detected *** nginx: worker process: corrupted double-linked list: 0x000000000173cf80 ***
    *** glibc detected *** nginx: worker process: double free or corruption (!prev): 0x0000000001a66990 ***
    ======= Backtrace: =========
    /lib64/libc.so.6[0x3acd275e66]
    /lib64/libc.so.6[0x3acd2789b3]
    nginx: worker process(ngx_destroy_pool+0xdb)[0x40dec1]
    nginx: worker process(ngx_http_free_request+0x1aa)[0x436934]
    nginx: worker process[0x4369df]
    nginx: worker process[0x437b92]
    nginx: worker process(ngx_event_process_posted+0x87)[0x422de9]
    nginx: worker process(ngx_process_events_and_timers+0x166)[0x422c0b]
    nginx: worker process[0x4299d0]
    nginx: worker process(ngx_spawn_process+0x485)[0x42815c]
    nginx: worker process(ngx_master_process_cycle+0x566)[0x42a3e9]
    nginx: worker process(main+0xa0b)[0x40d2cd]
    /lib64/libc.so.6(__libc_start_main+0xfd)[0x3acd21ed5d]
    nginx: worker process[0x40ba99]
    ======= Memory map: ========
    00400000-00491000 r-xp 00000000 08:03 1182281 /opt/nginx/sbin/nginx
    00691000-006a0000 rw-p 00091000 08:03 1182281 /opt/nginx/sbin/nginx
    006a0000-006af000 rw-p 00000000 00:00 0
    01373000-014c4000 rw-p 00000000 00:00 0
    014c4000-01ed8000 rw-p 00000000 00:00 0
    3acce00000-3acce20000 r-xp 00000000 08:03 204694 /lib64/ld-2.12.so
    3acd01f000-3acd020000 r--p 0001f000 08:03 204694 /lib64/ld-2.12.so
    3acd020000-3acd021000 rw-p 00020000 08:03 204694 /lib64/ld-2.12.so

The attached file contains the details from the nginx error log while the website is running. Please help me determine whether something is wrong with nginx. I would appreciate your generous help.

Thanks

Best Regards,
Amily
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: nginx_error.log
Type: application/octet-stream
Size: 315784 bytes
Desc: not available
URL: 

From nginx-forum at nginx.us Thu May 21 12:22:43 2015
From: nginx-forum at nginx.us (donatasm)
Date: Thu, 21 May 2015 08:22:43 -0400
Subject: Best place for metrics module
Message-ID: 

I want to write a custom nginx module for measuring the request processing time of every HTTP request received. I need to start counting request processing time as soon as nginx receives the first byte of a request, and finish when it sends the last byte of the response. What would the best place be in the nginx request processing pipeline to plug in my module and these measuring hooks?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259040,259040#msg-259040

From vbart at nginx.com Thu May 21 12:29:42 2015
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 21 May 2015 15:29:42 +0300
Subject: a problem while using nginx1.4.6 and nginx1.8.0 for web service
In-Reply-To: 
References: 
Message-ID: <12244100.XDK5JhTOTH@vbart-workstation>

On Thursday 21 May 2015 20:16:18 ??? wrote:
> Hello,
>
> This is Amily.
> I meet a problem while using nginx1.4.6 and nginx1.8.0 for web service.
> [..]

Could you reproduce the problem without 3rd-party modules or patches?

wbr, Valentin V. Bartenev

From mdounin at mdounin.ru Thu May 21 13:02:06 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 21 May 2015 16:02:06 +0300
Subject: Best place for metrics module
In-Reply-To: 
References: 
Message-ID: <20150521130206.GQ11860@mdounin.ru>

Hello!

On Thu, May 21, 2015 at 08:22:43AM -0400, donatasm wrote:

> I want to write a custom nginx module for measuring the request processing
> time of every HTTP request received. I need to start counting request
> processing time as soon as nginx receives the first byte of a request, and
> finish when it sends the last byte of the response. What would the best
> place be in the nginx request processing pipeline to plug in my module and
> these measuring hooks?
The $request_time variable looks like what you need; see http://nginx.org/r/$request_time.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Thu May 21 14:19:32 2015
From: nginx-forum at nginx.us (donatasm)
Date: Thu, 21 May 2015 10:19:32 -0400
Subject: Cancel ongoing ngx_http_request_t
Message-ID: <95e3cd89415bca23f51d7f4cdf5b7bd4.NginxMailingListEnglish@forum.nginx.org>

So I'm continuing to work on my simple timeout module: http://forum.nginx.org/read.php?2,259019

Now, when the timer elapses, I want to cancel the ongoing HTTP request. Here's my timer handler:

    static void simple_timeout_handler(ngx_event_t* timeout_event)
    {
        ngx_http_request_t* request = (ngx_http_request_t*)timeout_event->data;

        ngx_log_debug0(NGX_LOG_DEBUG_HTTP, timeout_event->log, 0,
                       "SIMPLE TIMEOUT TIMER END");

        ngx_http_finalize_request(request, NGX_HTTP_REQUEST_TIME_OUT);
        // this does not seem to work,
        // as the request stays in progress
    }

This does not work; the request still continues to be in progress. Is it possible to terminate it immediately and return an HTTP timeout response?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259045,259045#msg-259045

From nginx-forum at nginx.us Thu May 21 16:14:36 2015
From: nginx-forum at nginx.us (s_n)
Date: Thu, 21 May 2015 12:14:36 -0400
Subject: ip_hash in active_active nginx setup
Message-ID: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org>

Hi all,

we want to use an F5 load balancer in front of two nginx instances, which balance the load to our app-nodes.

The F5 load balancer distributes incoming requests via a round-robin algorithm to the two nginx instances.

The nginx instances should be configured to use ip_hash to distribute the requests to the app-nodes, ensuring sticky sessions. Specific customer requirements make the use of cookies to ensure sticky sessions impossible.

I understand that all nginx instances use the same function to hash the IP address.
But I am not sure if the distribution of the hash values to the app-nodes happens in a consistent way.

I came across the following discussion on Server Fault which indicates that the same IP address may be forwarded to different app-nodes in the scenario described above:

http://serverfault.com/questions/511763/do-multiple-nginx-servers-load-balance-the-same-ip-address-to-the-same-backend-w

Long story short / tl;dr:

Will different nginx instances always forward requests from the same IP to the same app-node when using ip_hash as the load balancing method?

Thanks,

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259046,259046#msg-259046

From mdounin at mdounin.ru Thu May 21 18:32:43 2015
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 21 May 2015 21:32:43 +0300
Subject: ip_hash in active_active nginx setup
In-Reply-To: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org>
References: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20150521183243.GX11860@mdounin.ru>

Hello!

On Thu, May 21, 2015 at 12:14:36PM -0400, s_n wrote:

> Hi all,
>
> we want to use an F5 load balancer in front of two nginx instances, which
> balance the load to our app-nodes.
>
> The F5 load balancer distributes incoming requests via a round-robin
> algorithm to the two nginx instances.
>
> The nginx instances should be configured to use ip_hash to distribute the
> requests to the app-nodes, ensuring sticky sessions. Specific customer
> requirements make the use of cookies to ensure sticky sessions impossible.
>
> I understand that all nginx instances use the same function to hash the IP
> address. But I am not sure if the distribution of the hash values to the
> app-nodes happens in a consistent way.
> I came across the following discussion on Server Fault which indicates that
> the same IP address may be forwarded to different app-nodes in the scenario
> described above:
>
> http://serverfault.com/questions/511763/do-multiple-nginx-servers-load-balance-the-same-ip-address-to-the-same-backend-w
>
> Long story short / tl;dr:
>
> Will different nginx instances always forward requests from the same IP to
> the same app-node when using ip_hash as the load balancing method?

As long as you use identical lists of servers in the upstream{} blocks on your nginx instances, balancing on the instances should be identical.

Note though, that if an upstream server is considered down due to errors, nginx will re-route requests to other servers; see http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails. And since different instances (and even different worker processes) may see different errors, this re-routing may differ.

That is, the ip_hash balancing method does not guarantee that all requests from a specific IP address will be routed to a given upstream server. Rather, it's a method to minimize (but not eliminate) migration of users between upstream servers. Obviously enough, it's not possible to completely eliminate migration as long as upstream servers may fail.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Thu May 21 19:02:34 2015
From: nginx-forum at nginx.us (itpp2012)
Date: Thu, 21 May 2015 15:02:34 -0400
Subject: ip_hash in active_active nginx setup
In-Reply-To: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org>
References: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

s_n Wrote:
-------------------------------------------------------
> Hi all,
>
> we want to use an F5 load balancer in front of two nginx instances,
> which balance the load to our app-nodes.
Why not cut out the F5 and use stream to balance between the same instance(s) (loop back), with either a single server block and a load-balanced upstream, or split up - whichever works best / gives a more consistent load distribution. nginx is by default a better load balancer than an F5, but that's a personal opinion.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259046,259049#msg-259049

From nginx-forum at nginx.us Thu May 21 22:52:25 2015
From: nginx-forum at nginx.us (smsmaddy1981)
Date: Thu, 21 May 2015 18:52:25 -0400
Subject: DNS configuration to invoke complete URL
In-Reply-To: <9c755668b4549afefa942f2e9c8b86c6.NginxMailingListEnglish@forum.nginx.org>
References: <9c755668b4549afefa942f2e9c8b86c6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Can somebody answer my query, please?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,259050#msg-259050

From vikrant.thakur at gmail.com Thu May 21 23:37:32 2015
From: vikrant.thakur at gmail.com (vikrant singh)
Date: Thu, 21 May 2015 16:37:32 -0700
Subject: DNS configuration to invoke complete URL
In-Reply-To: 
References: <9c755668b4549afefa942f2e9c8b86c6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

The first thing I would do is check the max open file limit on the system where you are getting the error. You can check it with the "ulimit -a" command. If it is very low, then increasing it might fix things. Alternatively, you can try playing with the nginx worker_rlimit_nofile setting: http://wiki.nginx.org/CoreModule#worker_rlimit_nofile

On Thu, May 21, 2015 at 3:52 PM, smsmaddy1981 wrote:
> Can somebody answer my query, please?
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,258982,259050#msg-259050
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Fri May 22 01:45:32 2015
From: nginx-forum at nginx.us (tdubs)
Date: Thu, 21 May 2015 21:45:32 -0400
Subject: htaccess to nginx conversion?
Message-ID: <2bc4c568e0c7b6f20379d3d47a964248.NginxMailingListEnglish@forum.nginx.org>

Hello,

I am working with a panel which requires the rules in the .htaccess file for its pages to function properly. However, I am no longer using Apache and have decided to move to a Centmin Mod LEMP web stack to improve performance and whatnot. Anyway, I used this online converter http://winginx.com/en/htaccess and when I put the code it gave me into my vhost.conf file, I got an error (see below). The error occurs even when I place that code in my domain.com.conf file, vhost.conf file, or nginx.conf file. The URLs my panel uses look like http://domain.com/staff/core.staffDetails, and it's throwing a 404 error simply because the .htaccess file is not being read since I'm using Nginx.

The error I'm receiving:
--
[root at radio ~]# ngxrestart
nginx: [emerg] "location" directive is not allowed here in /usr/local/nginx/conf/conf.d/virtual.conf:50
--
Again, that happens even when I try the other files listed above.

.htaccess code I need converted:
--
Options -Indexes

# Various rewrite rules.
RewriteEngine on
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !=/favicon.ico
RewriteRule ^(.*)(\.)(.*)$ index.php?url=$1.$3 [L,QSA]
RewriteRule ^ajax$ _res/ajax.php [QSA]
#RewriteRule ^(.*)$ index.php?t=$1 [L,QSA]
---
I am using a Linux VPS running CentOS 6.5 if that information is needed. Thank you!
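[For reference: the "location" directive error above typically means the converter's output, which contains location blocks, was pasted outside a server block. A rough, untested nginx sketch of the .htaccess rules quoted above might look like the following, placed inside the site's server {} block. The index.php and _res/ajax.php targets come from the original rules; everything else here is an assumption, not a tested configuration.]

```nginx
autoindex off;                      # Options -Indexes

# RewriteRule ^ajax$ _res/ajax.php [QSA]
location = /ajax {
    rewrite ^ /_res/ajax.php last;  # query string is kept automatically
}

location / {
    # RewriteCond !-f / !-d: only fall through when no real file or dir matches
    try_files $uri $uri/ @panel;
}

location @panel {
    # RewriteCond %{REQUEST_URI} !=/favicon.ico
    if ($uri = /favicon.ico) {
        return 404;
    }
    # RewriteRule ^(.*)(\.)(.*)$ index.php?url=$1.$3 [L,QSA]
    # nginx appends the original query string because the replacement
    # string has no trailing "?"
    rewrite ^/(.*)\.(.*)$ /index.php?url=$1.$2 last;
}
```

[This sketch also assumes an existing location that actually executes index.php via fastcgi_pass; without that, the rewritten request will 404 for a different reason.]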
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259054,259054#msg-259054

From tanfenfen at banggood.cn Fri May 22 01:45:52 2015
From: tanfenfen at banggood.cn (tanfenfen at banggood.cn)
Date: Fri, 22 May 2015 09:45:52 +0800
Subject: a problem while using nginx1.4.6 and nginx1.8.0 for web service
References: , <12244100.XDK5JhTOTH@vbart-workstation>
Message-ID: <2015052209455218417221@banggood.cn>

On Thursday 21 May 2015 20:16:18 ??? wrote:
> Hello,
>
> This is Amily.
> I meet a problem while using nginx1.4.6 and nginx1.8.0 for web service.
> [..]

Could you reproduce the problem without 3rd-party modules or patches?

>We do not use 3rd-party modules or patches. We use configure arguments of nginx like this:
configure arguments: --prefix=/opt/nginx --http-client-body-temp-path=/opt/nginx/client/ --http-proxy-temp-path=/opt/nginx/proxy/ --http-fastcgi-temp-path=/opt/nginx/fcgi/ --with-file-aio --with-http_realip_module --with-http_geoip_module --with-http_sub_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --without-http_uwsgi_module --without-http_scgi_module --with-http_perl_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-pcre --with-http_ssl_module

And the configure arguments of php5.3.28 like this:
Configure Command => './configure' '--prefix=/opt/php' '--enable-fpm' '--with-config-file-path=/opt/php/etc/' '--with-libxml-dir' '--with-openssl=/usr' '--with-zlib' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-freetype-dir' '--enable-gd-native-ttf' '--enable-gd-jis-conv' '--with-mhash' '--enable-mbstring' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pdo-mysql=mysqlnd' '--enable-sockets' '--with-curl' '--enable-sysvmsg' '--enable-sysvsem' '--enable-sysvshm' '--with-pear' '--enable-intl' '--enable-soap' '--enable-wddx'
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From vbart at nginx.com Fri May 22 12:46:13 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 22 May 2015 15:46:13 +0300 Subject: a problem while using nginx1.4.6 and nginx1.8.0 for web service In-Reply-To: <2015052209455218417221@banggood.cn> References: <12244100.XDK5JhTOTH@vbart-workstation> <2015052209455218417221@banggood.cn> Message-ID: <1689907.7CJgsetDpL@vbart-workstation> On Friday 22 May 2015 09:45:52 tanfenfen at banggood.cn wrote: > > On Thursday 21 May 2015 20:16:18 ??? wrote: > > Hello, > > > > > > This is Amily. > > I meet a problem while using nginx1.4.6 and nginx1.8.0 for web service. > > > [..] > ? > Could you reproduce the problem without 3rd-party modules or patches? > ? > >We do not use?3rd-party modules or patches.We use ?configure?arguments of nginx?like this:configure arguments: --prefix=/opt/nginx --http-client-body-temp-path=/opt/nginx/client/ --http-proxy-temp-path=/opt/nginx/proxy/ --http-fastcgi-temp-path=/opt/nginx/fcgi/ --with-file-aio --with-http_realip_module --with-http_geoip_module --with-http_sub_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --without-http_uwsgi_module --without-http_scgi_module --with-http_perl_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-pcre --with-http_ssl_module > And the configure arguments of php5.3.28 like this:Configure Command => './configure' '--prefix=/opt/php' '--enable-fpm' '--with-config-file-path=/opt/php/etc/' '--with-libxml-dir' '--with-openssl=/usr' '--with-zlib' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-freetype-dir' '--enable-gd-native-ttf' '--enable-gd-jis-conv' '--with-mhash' '--enable-mbstring' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pdo-mysql=mysqlnd' '--enable-sockets' '--with-curl' '--enable-sysvmsg' '--enable-sysvsem' '--enable-sysvshm' '--with-pear' '--enable-intl' '--enable-soap' '--enable-wddx' > Ok, then please provide a debug log and 
your configuration to reproduce the issue. See instructions here: http://nginx.org/en/docs/debugging_log.html wbr, Valentin V. Bartenev From vbart at nginx.com Fri May 22 13:24:28 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 22 May 2015 16:24:28 +0300 Subject: DNS configuration to invoke complete URL In-Reply-To: References: <9c755668b4549afefa942f2e9c8b86c6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <25579669.Tjf99cA4CD@vbart-workstation> On Thursday 21 May 2015 18:52:25 smsmaddy1981 wrote: > Can somebody answer my query pls.? > You're probably right about the proxying loop, but there's no way to answer your question until you provide some information. It's always a good idea to start any question with providing output of "nginx -V", your configuration, and an explanation of what you're doing, what you actually see, what you expect to see. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Fri May 22 13:25:41 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 22 May 2015 16:25:41 +0300 Subject: a problem while using nginx1.4.6 and nginx1.8.0 for web service In-Reply-To: <1689907.7CJgsetDpL@vbart-workstation> References: <12244100.XDK5JhTOTH@vbart-workstation> <2015052209455218417221@banggood.cn> <1689907.7CJgsetDpL@vbart-workstation> Message-ID: <20150522132541.GZ11860@mdounin.ru> Hello! On Fri, May 22, 2015 at 03:46:13PM +0300, Valentin V. Bartenev wrote: > On Friday 22 May 2015 09:45:52 tanfenfen at banggood.cn wrote: > > > > On Thursday 21 May 2015 20:16:18 ??? wrote: > > > Hello, > > > > > > > > > This is Amily. > > > I meet a problem while using nginx1.4.6 and nginx1.8.0 for web service. > > > > > [..] > > ? > > Could you reproduce the problem without 3rd-party modules or patches? > > ? 
> > >We do not use 3rd-party modules or patches. We use configure arguments of nginx like this:
> > configure arguments: --prefix=/opt/nginx --http-client-body-temp-path=/opt/nginx/client/ --http-proxy-temp-path=/opt/nginx/proxy/ --http-fastcgi-temp-path=/opt/nginx/fcgi/ --with-file-aio --with-http_realip_module --with-http_geoip_module --with-http_sub_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_stub_status_module --without-http_uwsgi_module --without-http_scgi_module --with-http_perl_module --without-mail_pop3_module --without-mail_imap_module --without-mail_smtp_module --with-pcre --with-http_ssl_module
> > And the configure arguments of php5.3.28 like this:
> > Configure Command => './configure' '--prefix=/opt/php' '--enable-fpm' '--with-config-file-path=/opt/php/etc/' '--with-libxml-dir' '--with-openssl=/usr' '--with-zlib' '--with-gd' '--with-jpeg-dir' '--with-png-dir' '--with-freetype-dir' '--enable-gd-native-ttf' '--enable-gd-jis-conv' '--with-mhash' '--enable-mbstring' '--with-mysql=mysqlnd' '--with-mysqli=mysqlnd' '--with-pdo-mysql=mysqlnd' '--enable-sockets' '--with-curl' '--enable-sysvmsg' '--enable-sysvsem' '--enable-sysvshm' '--with-pear' '--enable-intl' '--enable-soap' '--enable-wddx'
> >
>
> Ok, then please provide a debug log and your configuration to reproduce the issue.
>
> See instructions here: http://nginx.org/en/docs/debugging_log.html

I would also recommend the following (in no particular order):

- Take a look at http://wiki.nginx.org/Debugging for some basic hints. In particular, the full config will likely be needed to analyze the problem.

- I see that the perl and libgeoip libraries are loaded into nginx. Both can produce arbitrary problems - in particular, libgeoip is known to do weird things with corrupted databases, and perl modules can do anything.
It's a good idea to make sure the problem can be reproduced without perl and geoip (either without these modules compiled in, or at least without them being used in the configuration).

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Fri May 22 13:34:44 2015
From: nginx-forum at nginx.us (donatasm)
Date: Fri, 22 May 2015 09:34:44 -0400
Subject: Cancel ongoing ngx_http_request_t
In-Reply-To: <95e3cd89415bca23f51d7f4cdf5b7bd4.NginxMailingListEnglish@forum.nginx.org>
References: <95e3cd89415bca23f51d7f4cdf5b7bd4.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <9bb766d5c550cf78578f684136c4607a.NginxMailingListEnglish@forum.nginx.org>

OK, so I managed to get this working; I needed to add ngx_http_send_header and ngx_http_finalize_request:

static void simple_timeout_handler(ngx_event_t* timeout_event)
{
    ngx_int_t result;
    ngx_http_request_t* request;

    ngx_log_debug0(NGX_LOG_DEBUG_HTTP, timeout_event->log, 0, "SIMPLE TIMEOUT TIMER END");

    request = (ngx_http_request_t*)timeout_event->data;
    result = ngx_http_send_header(request);
    ngx_http_finalize_request(request, result);
}

Everything works fine, but during the load test, after some time, a SEGFAULT occurs:

Thread #1 [nginx] 3352 [core: 7] (Suspended : Signal : SIGSEGV:Segmentation fault)
ngx_destroy_pool() at ngx_palloc.c:51 0x405187
ngx_http_free_request() at ngx_http_request.c:3,499 0x42f04b
ngx_http_set_keepalive() at ngx_http_request.c:2,901 0x4300d1
ngx_http_finalize_connection() at ngx_http_request.c:2,538 0x4300d1
ngx_http_finalize_request() at ngx_http_request.c:2,434 0x430e41
simple_timeout_handler() at simple_timeout_module.c:135 0x44a8ef
ngx_event_expire_timers() at ngx_event_timer.c:94 0x41bf5a
ngx_process_events_and_timers() at ngx_event.c:262 0x41bb86
ngx_single_process_cycle() at ngx_process_cycle.c:308 0x423c8b
main() at nginx.c:416 0x403bc6

It seems that the pool argument passed to ngx_destroy_pool is not initialized; any ideas why this could happen?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259045,259061#msg-259061 From emailgrant at gmail.com Sat May 23 15:19:55 2015 From: emailgrant at gmail.com (Grant) Date: Sat, 23 May 2015 08:19:55 -0700 Subject: ssl_dhparam compatibility issues? Message-ID: I'm using Mozilla's "Old backward compatibility" ssl_ciphers so I feel good about my compatibility there, but does the following open me up to potential compatibility problems: # openssl dhparam -out dhparams.pem 2048 nginx.conf: ssl_dhparam {path to dhparams.pem} https://wiki.mozilla.org/Security/Server_Side_TLS - Grant From julien at linuxwall.info Sat May 23 15:25:26 2015 From: julien at linuxwall.info (Julien Vehent) Date: Sat, 23 May 2015 11:25:26 -0400 Subject: ssl_dhparam compatibility issues? In-Reply-To: References: Message-ID: <9b81c28d5984991d5b4805233f5d31ca@webmail.linuxwall.info> On 2015-05-23 11:19, Grant wrote: > I'm using Mozilla's "Old backward compatibility" ssl_ciphers so I > feel > good about my compatibility there, but does the following open me up > to potential compatibility problems: > > # openssl dhparam -out dhparams.pem 2048 DHE params larger than 1024 bits are not compatible with java 6/7 clients. If you need compatibility with those clients, use a DHE of 1024 bits, or disable DHE entirely. - Julien From emailgrant at gmail.com Sat May 23 15:39:38 2015 From: emailgrant at gmail.com (Grant) Date: Sat, 23 May 2015 08:39:38 -0700 Subject: ssl_dhparam compatibility issues? In-Reply-To: <9b81c28d5984991d5b4805233f5d31ca@webmail.linuxwall.info> References: <9b81c28d5984991d5b4805233f5d31ca@webmail.linuxwall.info> Message-ID: >> I'm using Mozilla's "Old backward compatibility" ssl_ciphers so I feel >> good about my compatibility there, but does the following open me up >> to potential compatibility problems: >> >> # openssl dhparam -out dhparams.pem 2048 > > > DHE params larger than 1024 bits are not compatible with java 6/7 clients. 
> If you need compatibility with those clients, use a DHE of 1024 bits, or
> disable DHE entirely.

My server is open to the internet so I'd like to maintain compatibility with as many clients as possible, but I don't serve any java apps. Given that, will DHE params larger than 1024 bits affect my compatibility?

If so, I believe a DHE of 1024 bits opens me to the LogJam attack, so if I disable DHE entirely will that affect my compatibility?

- Grant

From nginx-forum at nginx.us Sat May 23 17:56:32 2015
From: nginx-forum at nginx.us (pierob83)
Date: Sat, 23 May 2015 13:56:32 -0400
Subject: encoded url
Message-ID:

Hello everyone,
I have nginx 1.2.5 and I have an issue with encoded URLs. Is there a way to make nginx accept a URL like the following:

http://www.mywebsite.com/image.php%3Fid%3D12345

as equivalent to the following?

http://www.mywebsite.com/image.php?id=12345

In my current configuration, the first URL is a 404 not found, and the second one works.

Many thanks

PB

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259076,259076#msg-259076

From nginx-forum at nginx.us Sat May 23 19:17:55 2015
From: nginx-forum at nginx.us (escavern)
Date: Sat, 23 May 2015 15:17:55 -0400
Subject: can somebody help me to rewrite this?
Message-ID: <8dde479d3aa05939b20b352a7babab1d.NginxMailingListEnglish@forum.nginx.org>

Hi guys, I really need some help here. I plan to move my forum from the root domain to a subfolder named "/forum".

I need to rewrite from:
www.mywebsite.com/showthread.php?t=123456

To

www.mywebsite.com/forum/showthread.php?t=123456

I hope you guys can help me figure out the rewrite rules. Thank you!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259077,259077#msg-259077

From rpaprocki at fearnothingproductions.net Sat May 23 19:53:54 2015
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Sat, 23 May 2015 12:53:54 -0700
Subject: ssl_dhparam compatibility issues?
In-Reply-To: References: <9b81c28d5984991d5b4805233f5d31ca@webmail.linuxwall.info>
Message-ID: <9D4BD289-CC29-45C3-A711-26F86E47973F@fearnothingproductions.net>

You're entirely misunderstanding Logjam. The actual Logjam attack refers to a flaw in the TLS protocol that would allow MITM attackers to downgrade a connection to an export cipher. This is only possible if your server supports export-grade ciphers, which it should not if you're following Mozilla's guide.

Using a 1024-bit DH param does not "open you" to any attack. According to the authors of the FREAK/Logjam disclosure, use of a common 1024-bit DH param potentially allows for threats from nation-state adversaries. If you've pissed off the NSA, forget about legacy compatibility with Java nonsense and use a custom 2048-bit (or higher) param. If you're paranoid about supporting grandma's Java app, stick with the default.

On May 23, 2015, at 8:39, Grant wrote:

>>> I'm using Mozilla's "Old backward compatibility" ssl_ciphers so I feel
>>> good about my compatibility there, but does the following open me up
>>> to potential compatibility problems:
>>>
>>> # openssl dhparam -out dhparams.pem 2048
>>
>>
>> DHE params larger than 1024 bits are not compatible with java 6/7 clients.
>> If you need compatibility with those clients, use a DHE of 1024 bits, or
>> disable DHE entirely.
>
>
> My server is open to the internet so I'd like to maintain
> compatibility with as many clients as possible, but I don't serve any
> java apps. Given that, will DHE params larger than 1024 bits affect
> my compatibility?
>
> If so, I believe a DHE of 1024 bits opens me to the LogJam attack, so
> if I disable DHE entirely will that affect my compatibility?
>
> - Grant
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From jujj603 at gmail.com Sun May 24 00:08:55 2015
From: jujj603 at gmail.com (J.J J)
Date: Sun, 24 May 2015 08:08:55 +0800
Subject: encoded url
In-Reply-To: References: Message-ID:

For http://www.mywebsite.com/image.php%3Fid%3D12345, the URI is "/image.php?id=12345" after decoding, so nginx may try to find a file named "image.php?id=12345" based on your config file. That's why you got a 404.

For http://www.mywebsite.com/image.php?id=12345, the URI is "/image.php" and the argument is "id=12345".

If you want a URI with arguments, the '?' should not be encoded; compare the generic URI syntax:

URI = scheme ":" hier-part [ "?" query ] [ "#" fragment ]

On Sun, May 24, 2015 at 1:56 AM, pierob83 wrote:
> Hello everyone,
> I have nginx 1.2.5 and I have an issue with encoded URLs.
> Is there a way to make nginx accept a URL like the following:
>
>
> http://www.mywebsite.com/image.php%3Fid%3D12345
>
> as equivalent to the following?
>
> http://www.mywebsite.com/image.php?id=12345
>
>
>
> In my current configuration, the first URL is a 404 not found, and the second
> one works.
>
> Many thanks
>
> PB
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,259076,259076#msg-259076
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jujj603 at gmail.com Sun May 24 00:12:32 2015
From: jujj603 at gmail.com (J.J J)
Date: Sun, 24 May 2015 08:12:32 +0800
Subject: can somebody help me to rewrite this?
In-Reply-To: <8dde479d3aa05939b20b352a7babab1d.NginxMailingListEnglish@forum.nginx.org>
References: <8dde479d3aa05939b20b352a7babab1d.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Check this out: http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite

On Sun, May 24, 2015 at 3:17 AM, escavern wrote:

> Hi guys, I really need some help here. I plan to move my forum from the
> root
> domain to a subfolder named "/forum"
>
> I need to rewrite from:
> www.mywebsite.com/showthread.php?t=123456
>
> To
>
> www.mywebsite.com/forum/showthread.php?t=123456
>
> I hope you guys can help me figure out the rewrite rules.
> Thank you!
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,259077,259077#msg-259077
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jujj603 at gmail.com Sun May 24 00:46:16 2015
From: jujj603 at gmail.com (J.J J)
Date: Sun, 24 May 2015 08:46:16 +0800
Subject: Case insensitive exact location match?
In-Reply-To: References: Message-ID:

I'm answering "Does nginx support case-insensitive exact location match?" It seems you are asking "Will or can nginx support case-insensitive exact location match?"

You can check the links below for the discussion about URI case insensitivity. To quote one answer:

>>> In reality it depends on the web server. IIS is not case sensitive. Apache is.<<<

For an exact location match, nginx is case sensitive, and it could surely be made insensitive. Whether it will be is another question.

https://stackoverflow.com/questions/15641694/are-uris-case-insensitive
https://stackoverflow.com/questions/7996919/should-url-be-case-sensitive

On Wed, May 20, 2015 at 6:03 PM, itpp2012 wrote:
> J.J J Wrote:
> -------------------------------------------------------
> > No, exact and inclusive match are case sensitive.
> >
> > You can write it in two locations:
> > location = /test
> > location = /TEST
> >
> > Personally, I think it's a bad practice to differentiate requests by
> > the
> > case(ness) of URI.
>
> Hmm so you also need to add
>
> location = /Test
> location = /TEst
> location = /TESt
> ........
>
> It should be a server/location block config item.
>
> Posted at Nginx Forum:
> http://forum.nginx.org/read.php?2,258977,259000#msg-259000
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Sun May 24 07:35:45 2015
From: nginx-forum at nginx.us (PavelPolyakov)
Date: Sun, 24 May 2015 03:35:45 -0400
Subject: calling unique url not more then 1 time per 5 seconds
Message-ID: <782417af8c696845d8bec0e141f23ead.NginxMailingListEnglish@forum.nginx.org>

Hi,

Assuming I have a URL like /payout/[hash], where hash is something unique, I want nginx to check that this URL is called not more than 1 time per 5 seconds. The 1st time it should be processed by proxy_pass; all the other times it should be answered with a 403.

Could someone tell me which approach I should use? Is it possible to do that using nginx?

Any thoughts are appreciated.

Regards,

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259082,259082#msg-259082

From nginx-forum at nginx.us Sun May 24 09:50:05 2015
From: nginx-forum at nginx.us (escavern)
Date: Sun, 24 May 2015 05:50:05 -0400
Subject: can somebody help me to rewrite this?
In-Reply-To: References: Message-ID:

J.J J Wrote:
-------------------------------------------------------
> Check this out:
> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#rewrite
>
> On Sun, May 24, 2015 at 3:17 AM, escavern
> wrote:
>
> > Hi guys, I really need some help here.
I plan to move my forum from
> the
> > root
> > domain to a subfolder named "/forum"
> >
> > I need to rewrite from:
> > www.mywebsite.com/showthread.php?t=123456
> >
> > To
> >
> > www.mywebsite.com/forum/showthread.php?t=123456
> >
> > I hope you guys can help me to find out the rewrite rules.
> > Thank you!
> >
> > Posted at Nginx Forum:
> > http://forum.nginx.org/read.php?2,259077,259077#msg-259077
> >
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

I need a direct URL rewrite, not another reference link. Thank you

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259077,259083#msg-259083

From lists at ruby-forum.com Sun May 24 11:25:21 2015
From: lists at ruby-forum.com (Pavel Vasev)
Date: Sun, 24 May 2015 13:25:21 +0200
Subject: Override Content-Type header with proxied requests
In-Reply-To: <317ea1165d729f0c923db325a6b503ee.NginxMailingListEnglish@forum.nginx.org>
References: <1d0575d72d18a4a90a6da9ea4ed2555e.NginxMailingListEnglish@forum.nginx.org> <4e321c644f1c79227141cb7d613e4531.NginxMailingListEnglish@forum.nginx.org> <262ee7220b55bfb635c430e46926a002.NginxMailingListEnglish@forum.nginx.org> <0c5791aefa1b4924d565436bc2c6000b.NginxMailingListEnglish@forum.nginx.org> <317ea1165d729f0c923db325a6b503ee.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <050ce9be15399a7a3eb14e5a4d1358f3@ruby-forum.com>

Dear Manish!

Could you please share the file with the mime types that you use to include in a map?

manish-ezest wrote in post #1155024:
> Hello Wandenberg,
>
> Thanks for your help. Finally it is working. I included all the mime
> types
> in a file and included inside map directive and used it in the location
> / {}
> directive with proxy hide parameter like you suggested.
> > --Manish > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,239473,252532#msg-252532 -- Posted via http://www.ruby-forum.com/. From semenukha at gmail.com Sun May 24 15:36:47 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Sun, 24 May 2015 11:36:47 -0400 Subject: calling unique url not more then 1 time per 5 seconds In-Reply-To: <782417af8c696845d8bec0e141f23ead.NginxMailingListEnglish@forum.nginx.org> References: <782417af8c696845d8bec0e141f23ead.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6989840.ZuphvMtMUH@hydra> On Sunday, May 24, 2015 03:35:45 AM PavelPolyakov wrote: > Hi, > > Assuming I have an url like /payout/[hash] , where hash is something unique, > and I want to make, that on nginx level it's checked that this url is called > not more then 1 time per 5 seconds. 1st time it should be processed by > proxy_pass, all the other times it should be replied 403. > > Could someone tell me which approach I should use? Is that possible to do > that using nginx? > > Any thoughts are appreciated. > > Regards, Yes, it's possible: http://nginx.org/r/limit_req_zone >If a rate of less than one request per second is desired, it is specified in request per minute (r/m). For example, half-request per second is 30r/m. -- Sincerely yours, Styopa Semenukha. From semenukha at gmail.com Sun May 24 15:40:53 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Sun, 24 May 2015 11:40:53 -0400 Subject: can somebody help me to rewrite this? In-Reply-To: References: Message-ID: <16658892.mKVmDr4e0K@hydra> On Sunday, May 24, 2015 05:50:05 AM escavern wrote: > I need direct url rewrite, not another refference link. Thank you And the mailing list needs requests for advice, not demands to do your job for you. We'll be happy to help if you have questions regarding the documentation, that has been kindly offered above. Thanks. -- Sincerely yours, Styopa Semenukha. 
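[For the forum move discussed in this thread (root-level forum scripts relocated under /forum), one untested sketch of the redirect looks like this. The script name showthread.php is taken from the original post, and a permanent 301 redirect is assumed to be the desired behavior:]

```nginx
# Redirect old root-level forum URLs to the new /forum/ prefix.
# $is_args$args re-attaches the query string, e.g. ?t=123456.
location = /showthread.php {
    return 301 /forum/showthread.php$is_args$args;
}
```

[The same pattern can be repeated for the other forum entry scripts, or generalized with a regex location if there are many of them.]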
From semenukha at gmail.com Sun May 24 15:46:09 2015 From: semenukha at gmail.com (Styopa Semenukha) Date: Sun, 24 May 2015 11:46:09 -0400 Subject: calling unique url not more then 1 time per 5 seconds In-Reply-To: <6989840.ZuphvMtMUH@hydra> References: <782417af8c696845d8bec0e141f23ead.NginxMailingListEnglish@forum.nginx.org> <6989840.ZuphvMtMUH@hydra> Message-ID: <1794968.fqirszKRzW@hydra> On Sunday, May 24, 2015 11:36:47 AM Styopa Semenukha wrote: > On Sunday, May 24, 2015 03:35:45 AM PavelPolyakov wrote: > > Hi, > > > > Assuming I have an url like /payout/[hash] , where hash is something unique, > > and I want to make, that on nginx level it's checked that this url is called > > not more then 1 time per 5 seconds. 1st time it should be processed by > > proxy_pass, all the other times it should be replied 403. > > > > Could someone tell me which approach I should use? Is that possible to do > > that using nginx? > > > > Any thoughts are appreciated. > > > > Regards, > > Yes, it's possible: > http://nginx.org/r/limit_req_zone > >If a rate of less than one request per second is desired, it is specified in request per minute (r/m). For example, half-request per second is 30r/m. > This is for the case you need to impose a limit of 12r/m per the entire "/payout/" location. But if you mean 12r/m per _individual_ hash, that might be tricky. -- Sincerely yours, Styopa Semenukha. From juliand at aspedia.net Mon May 25 05:44:26 2015 From: juliand at aspedia.net (Julian De Marchi) Date: Mon, 25 May 2015 15:44:26 +1000 Subject: Nginx Location Block Message-ID: <5562B6BA.8020502@aspedia.net> heya- I'm having some interesting dramas with Nginx location block. I put it down to a misconfiguration in my conf files, but I can't locate what it possible could be. Briefly, my setup is using an Nginx frontend server to do SSL offloading then pass requests to my backend Nginx servers which then process the request via fastCGI. 
My issue is when I try to access URIs like /cms/index.php?blah, the frontend Nginx gives a 404. Access with /cms/ and Nginx passes the request to the backend.

Here is my frontend location block:

location / {
    limit_req zone=root burst=300;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header HTTPS on;
    proxy_set_header X-Forwarded-HTTPS on;
    proxy_set_header X-Forwarded-Protocol $scheme;
    proxy_pass http://webpool;
}

Here is my log entry for accessing the URI:

"25/May/2015:14:33:01 +1000" "example.com" -275 428 "GET /cms/index.php HTTP/1.1" "404" "-" "10.107.0.8" "Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Firefox/31.0 Iceweasel/31.6.0"

What I don't understand is this: why does Nginx pass the URI /cms/ to my backend fine, but when /cms/index.php is added to the end it does not pass to the backend?

I've read and re-read the below and I'm drawing blanks, as my understanding is, it should work.
- https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms
- http://nginx.org/en/docs/http/ngx_http_core_module.html#location

Many thanks in advance for helping me understand my problem.

--julian

From al-nginx at none.at Mon May 25 07:04:48 2015
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 25 May 2015 09:04:48 +0200
Subject: Nginx Location Block
In-Reply-To: <5562B6BA.8020502@aspedia.net>
References: <5562B6BA.8020502@aspedia.net>
Message-ID:

Hi Julian.

Am 25-05-2015 07:44, schrieb Julian De Marchi:
> heya-
>
> I'm having some interesting dramas with Nginx location block. I put it
> down to a misconfiguration in my conf files, but I can't locate what it
> possible could be.
>
> Briefly, my setup is using an Nginx frontend server to do SSL
> offloading
> then pass requests to my backend Nginx servers which then process the
> request via fastCGI.

You must tell nginx to use fcgi.
http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html

but your config snippet shows:

http://nginx.org/en/docs/http/ngx_http_proxy_module.html

> My issue is when I try to access URIs like /cms/index.php?blah, the
> frontend Nginx gives 404. Access with /cms/ and Nginx passes the
> request
> to the backend.
>
> Here is my frontend location block:
>
> location / {
> limit_req zone=root burst=300;
> proxy_set_header Host $host;
> proxy_set_header X-Real-IP $remote_addr;
> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
> proxy_set_header HTTPS on;
> proxy_set_header X-Forwarded-HTTPS on;
> proxy_set_header X-Forwarded-Protocol $scheme;
> proxy_pass http://webpool;
> }
> [snipp]
> I've read and re-read the below and I'm drawing blanks, as my
> understanding is, it should work.
> -
> https://www.digitalocean.com/community/tutorials/understanding-nginx-server-and-location-block-selection-algorithms

Maybe you could also read:

https://www.digitalocean.com/community/tutorials/understanding-and-implementing-fastcgi-proxying-in-nginx

> - http://nginx.org/en/docs/http/ngx_http_core_module.html#location

http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html

and for deeper understanding try http://lmgtfy.com/?q=how+fast+cgi+work

> Many thanks in advance for helping me understand my problem.

HTH

> --julian

Aleks

> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From juliand at aspedia.net Mon May 25 08:12:09 2015
From: juliand at aspedia.net (Julian De Marchi)
Date: Mon, 25 May 2015 18:12:09 +1000
Subject: Nginx Location Block
In-Reply-To: References: <5562B6BA.8020502@aspedia.net>
Message-ID: <5562D959.2000504@aspedia.net>

On 25/05/15 17:04, Aleksandar Lazic wrote:
> Hi Julian.

Heya Aleksandar,

Thanks for your kind reply.

> Am 25-05-2015 07:44, schrieb Julian De Marchi:
>> heya-
>>
>> I'm having some interesting dramas with Nginx location block.
I put it >> down to a misconfiguration in my conf files, but I can't locate what it >> possible could be. >> >> Briefly, my setup is using an Nginx frontend server to do SSL offloading >> then pass requests to my backend Nginx servers which then process the >> request via fastCGI. > > You must tell nginx to use fcgi. > > http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html > > but you config snippet shows. > I did not communicate clearly about my snippet. I apologize for this. The snippet is from my frontend Nginx server where the problem is occurring. I know this 100% as the 404 error is generated by the frontend Nginx server not my backend nginx server. --julian From vbart at nginx.com Mon May 25 11:56:23 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 25 May 2015 14:56:23 +0300 Subject: calling unique url not more then 1 time per 5 seconds In-Reply-To: <1794968.fqirszKRzW@hydra> References: <782417af8c696845d8bec0e141f23ead.NginxMailingListEnglish@forum.nginx.org> <6989840.ZuphvMtMUH@hydra> <1794968.fqirszKRzW@hydra> Message-ID: <6313839.DjXzvpe8e9@vbart-workstation> On Sunday 24 May 2015 11:46:09 Styopa Semenukha wrote: > On Sunday, May 24, 2015 11:36:47 AM Styopa Semenukha wrote: > > On Sunday, May 24, 2015 03:35:45 AM PavelPolyakov wrote: > > > Hi, > > > > > > Assuming I have an url like /payout/[hash] , where hash is something unique, > > > and I want to make, that on nginx level it's checked that this url is called > > > not more then 1 time per 5 seconds. 1st time it should be processed by > > > proxy_pass, all the other times it should be replied 403. > > > > > > Could someone tell me which approach I should use? Is that possible to do > > > that using nginx? > > > > > > Any thoughts are appreciated. > > > > > > Regards, > > > > Yes, it's possible: > > http://nginx.org/r/limit_req_zone > > >If a rate of less than one request per second is desired, it is specified in request per minute (r/m). 
For example, half-request per second is 30r/m. > > > > This is for the case you need to impose a limit of 12r/m per the entire "/payout/" location. > > But if you mean 12r/m per _individual_ hash, that might be tricky. > It's not tricky. It can be achieved by configuring $uri as the key. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Mon May 25 16:59:45 2015 From: nginx-forum at nginx.us (escavern) Date: Mon, 25 May 2015 12:59:45 -0400 Subject: can somebody help me to rewrite this? In-Reply-To: <16658892.mKVmDr4e0K@hydra> References: <16658892.mKVmDr4e0K@hydra> Message-ID: <98100302640b309115f2eb917c8d5136.NginxMailingListEnglish@forum.nginx.org> Styopa Semenukha Wrote: ------------------------------------------------------- > On Sunday, May 24, 2015 05:50:05 AM escavern wrote: > > I need direct url rewrite, not another refference link. Thank you > > And the mailing list needs requests for advice, not demands to do your > job for you. > > We'll be happy to help if you have questions regarding the > documentation, that has been kindly offered above. Thanks. > -- > Sincerely yours, > Styopa Semenukha. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx This is not the first time im asking the question in here, and i did get direct answer with url rewrites regex. And thanks you are really helpfull and skillfull with your answer. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259077,259110#msg-259110 From al-nginx at none.at Mon May 25 17:43:27 2015 From: al-nginx at none.at (Aleksandar Lazic) Date: Mon, 25 May 2015 19:43:27 +0200 Subject: can somebody help me to rewrite this? In-Reply-To: <98100302640b309115f2eb917c8d5136.NginxMailingListEnglish@forum.nginx.org> References: <16658892.mKVmDr4e0K@hydra> <98100302640b309115f2eb917c8d5136.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi escavern. 
On 25-05-2015 18:59, escavern wrote: > Styopa Semenukha Wrote: > ------------------------------------------------------- >> On Sunday, May 24, 2015 05:50:05 AM escavern wrote: >> > I need direct url rewrite, not another refference link. Thank you >> >> And the mailing list needs requests for advice, not demands to do your >> job for you. >> >> We'll be happy to help if you have questions regarding the >> documentation, that has been kindly offered above. Thanks. > > This is not the first time im asking the question in here, and i did > get > direct answer with url rewrites regex. Have you also read the suggested links in the past answers? Here an overview of the past answers. https://marc.info/?l=nginx&w=2&r=1&s=escavern&q=b > And thanks you are really helpfull > and skillfull with your answer. To be honest you should think to use N+ with there very good and fast support http://nginx.com/products/ or try to learn and understand nginx to be able to solve such easy tasks by your self. Cheers Aleks PS: I'm not part of NGINX, Inc From nginx-forum at nginx.us Tue May 26 10:36:35 2015 From: nginx-forum at nginx.us (sampy) Date: Tue, 26 May 2015 06:36:35 -0400 Subject: Nginx and Websphere Message-ID: <3170a63c2bab358f56d62c1719f320f3.NginxMailingListEnglish@forum.nginx.org> Hi all! I'm new in NGINX but I have try everything with very bad results. My scenario is: NGINX balancing over 2 servers using ip_hash. This servers runs Websphere and in every Websphere runs several apps. 
Servers separately runs: wasint-1.domain.com:9080/WebApp wasint-2 domain.com:9080/WebApp My configuration: upstream webint { ip_hash; server wasint-1.domain.com:9080; server wasint-2.domain.com:9080; } server { listen 80; server_name web.domain.com; location / { proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://webint/WebApp; } } All seems to be ok, but when I type in the web browser http://web.domain.com doesn't work. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259127#msg-259127 From mdounin at mdounin.ru Tue May 26 14:17:15 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 May 2015 17:17:15 +0300 Subject: nginx-1.9.1 Message-ID: <20150526141714.GP11860@mdounin.ru> Changes with nginx 1.9.1 26 May 2015 *) Change: now SSLv3 protocol is disabled by default. *) Change: some long deprecated directives are not supported anymore. *) Feature: the "reuseport" parameter of the "listen" directive. Thanks to Sepherosa Ziehau and Yingqi Lu. *) Feature: the $upstream_connect_time variable. *) Bugfix: in the "hash" directive on big-endian platforms. *) Bugfix: nginx might fail to start on some old Linux variants; the bug had appeared in 1.7.11. *) Bugfix: in IP address parsing. Thanks to Sergey Polovko.
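For anyone wanting to try the two new 1.9.1 features, a rough sketch (the log format name, paths, and backend address below are made up for illustration):

```nginx
# Illustrative only: "reuseport" creates a listening socket per
# worker; $upstream_connect_time records the time spent establishing
# a connection to the upstream server.
http {
    log_format timing '$remote_addr "$request" '
                      'connect=$upstream_connect_time request=$request_time';
    server {
        listen 80 reuseport;
        access_log /var/log/nginx/timing.log timing;
        location / {
            proxy_pass http://127.0.0.1:8080;   # placeholder backend
        }
    }
}
```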
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue May 26 14:20:27 2015 From: nginx-forum at nginx.us (smsmaddy1981) Date: Tue, 26 May 2015 10:20:27 -0400 Subject: DNS configuration to invoke complete URL In-Reply-To: <25579669.Tjf99cA4CD@vbart-workstation> References: <25579669.Tjf99cA4CD@vbart-workstation> Message-ID: Thanks for the response #Vikrant and #Valentin Below are the issue details: What I have Configured: server { listen 80; server_name workspace.corp.no; location / { proxy_pass http://workspace.corp.no/workspace/agentLogin; } } Expected to see: accessing URL: http://workspace.corp.no should result in http://workspace.corp.no/workspace/agentLogin What I see: Indefinite requests piled up in the logs as shown below access.log /workspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace/agentLoginworkspace error.log 28457#0: *9100 socket() failed (24: Too many open files) while connecting to upstream, client: Suggest, how do I achieve invoking the DNS should call the required absolute URL? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,258982,259136#msg-259136 From reallfqq-nginx at yahoo.fr Tue May 26 15:53:00 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Tue, 26 May 2015 17:53:00 +0200 Subject: nginx on Debian: dynamic network interfaces Message-ID: Hello, Facing issues on Debian with official nginx packages, I filed a bug on the Debian tracker which ended up as being closed since Debian refers to the faulty service(s), nginx being one. Bug #785253 I could not file a bug on http://trac.nginx.org/nginx/ since OAuth 2.0 authentication through Google went away (already addressed in a separate thread). Could someone here confirm nginx (and not the OS) is to be addressed? Thanks, --- *B. R.* From mdounin at mdounin.ru Tue May 26 17:11:37 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 26 May 2015 20:11:37 +0300 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: References: Message-ID: <20150526171137.GV11860@mdounin.ru> Hello! On Tue, May 26, 2015 at 05:53:00PM +0200, B.R. wrote: > Hello, > > Facing issues on Debian with official nginx packages, I filed a bug on > the Debian tracker which ended up as being closed since Debian refers to > the faulty service(s), nginx being one. > > Bug #785253 > > I could not file a bug on http://trac.nginx.org/nginx/ since OAuth 2.0 > authentication through Google went away (already addressed in a separate > thread). > > Could someone here confirm nginx (and not the OS) is to be addressed? Looking through the bug in question I don't see anything to fix in nginx. As far as I understand the problem, it is as follows: 1. you are trying to start nginx without access to DNS; 2. your nginx configuration contains DNS names and therefore requires access to DNS for nginx to start, as nginx resolves DNS names on startup; You should either configure your OS to ensure that you start nginx after DNS is available or change your nginx configuration to avoid DNS names.
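The configuration-side option can be sketched like this (the upstream name and address are placeholders, not taken from B.R.'s setup):

```nginx
# Sketch of the second option: no hostnames for nginx to resolve at
# startup, so a missing DNS server cannot block startup.
upstream backend {
    server 192.0.2.10:8080;   # IP address instead of a DNS name
}
server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```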
-- Maxim Dounin http://nginx.org/ From skaurus at gmail.com Tue May 26 18:12:18 2015 From: skaurus at gmail.com (=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?=) Date: Tue, 26 May 2015 21:12:18 +0300 Subject: Measuring request processing time Message-ID: Hi! Is there a way to measure time Nginx takes to process request? Not $request_time, but rather something like "time between request was fully read and request is ready to pass it to the backend". I need it to evaluate performance of the geoip2 module: https://github.com/leev/ngx_http_geoip2_module I need this because I've measured speed of official MaxMind Perl modules for legacy and new versions of their databases and found that lib for new version is hundreds of times slower than legacy. (yes, I've used XS version) Now, I will be using new format anyway - because MaxMind provide only free legacy databases, and free databases have way too bad accuracy. But I would like to assess the consequences. Best regards, Dmitriy Shalashov From nginx-forum at nginx.us Tue May 26 22:47:19 2015 From: nginx-forum at nginx.us (sentinel21) Date: Tue, 26 May 2015 18:47:19 -0400 Subject: host names in 1.9 Message-ID: <2d136636bf555eb434a90e65e3e858ad.NginxMailingListEnglish@forum.nginx.org> I'm having some difficulty using the stream directive in 1.9. I may be using it wrong, so please correct me if this is incorrect or not possible. stream { server { listen 1520; server_name customhost.example.com proxy_pass db; } upstream db { server 10.100.0.104:1523; } } I have multiple host names (server_name s) that I want to hit 1520 and be proxied to different upstream dbs. I have a second stream context with a different server_name and different upstream db. I can't get this to work. I think I'm missing something. How would I accomplish this? ultimate goal: reverse proxy a database connection based on host name.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259144,259144#msg-259144 From francis at daoine.org Tue May 26 23:29:05 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 00:29:05 +0100 Subject: host names in 1.9 In-Reply-To: <2d136636bf555eb434a90e65e3e858ad.NginxMailingListEnglish@forum.nginx.org> References: <2d136636bf555eb434a90e65e3e858ad.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150526232905.GD2957@daoine.org> On Tue, May 26, 2015 at 06:47:19PM -0400, sentinel21 wrote: Hi there, > I'm having some difficulty using the stream directive in 1.9. I may be > using it wrong, so please correct me if this is incorrect or not possible. A connection comes to an ip:port. A http connection includes a specific host name in a well-known place in the request. An arbitrary tcp connection does not. > stream { > server { > listen 1520; Untested, but: listen on ip1:1520 here, and ip2:1520 in the next server block; then have the different host names resolve to different ip addresses. > I have multiple host names (server_name s) that I want to hit 1520 and be > proxied to different upstream dbs. I have a second stream context with a > different server_name and different upstream db. I can't get this to work. > I think I'm missing something. How would I accomplish this? ultimate goal: > reverse proxy a database connection based on host name. I can't see any other way to achieve what you want, for a generic tcp forwarder. hostname is irrelevant - all nginx sees is the ip:port connected to. 
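Francis's suggestion can be sketched roughly as below; all addresses, ports, and hostnames here are placeholders, and this is untested:

```nginx
# Untested sketch: one stream server{} per local IP address; each
# database hostname resolves (in DNS) to a different one of these
# addresses, since plain TCP carries no hostname for nginx to inspect.
stream {
    server {
        listen 192.0.2.1:1520;        # db1.example.com points here
        proxy_pass 10.100.0.104:1523;
    }
    server {
        listen 192.0.2.2:1520;        # db2.example.com points here
        proxy_pass 10.100.0.105:1523;
    }
}
```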
f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 26 23:35:31 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 00:35:31 +0100 Subject: DNS configuration to invoke complete URL In-Reply-To: References: <25579669.Tjf99cA4CD@vbart-workstation> Message-ID: <20150526233531.GE2957@daoine.org> On Tue, May 26, 2015 at 10:20:27AM -0400, smsmaddy1981 wrote: Hi there, > server { > listen 80; > server_name workspace.corp.no; > > location / { > proxy_pass http://workspace.corp.no/workspace/agentLogin; > } > } > > Expected to see: > accessing URL: http://workspace.corp.no should result in > http://workspace.corp.no/workspace/agentLogin What, specifically, do you mean by that? You want to see a http redirect to the new url? Or you want to see some other content? Where does that other content come from? > What I see: > Indefinite requests piled up in the logs as shown below That infinite loop matches what you have configured (assuming that the nginx machine resolves the name workspace.corp.no to an address that this nginx listens on). > Suggest, how do I achieve invoking the DNS should call the required absolute > URL? I'm afraid that I do not know what you mean by that. After you explain the behaviour you want, it may become clear how to configure nginx to achieve that behaviour. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 26 23:39:33 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 00:39:33 +0100 Subject: Nginx and Websphere In-Reply-To: <3170a63c2bab358f56d62c1719f320f3.NginxMailingListEnglish@forum.nginx.org> References: <3170a63c2bab358f56d62c1719f320f3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150526233933.GF2957@daoine.org> On Tue, May 26, 2015 at 06:36:35AM -0400, sampy wrote: Hi there, > NGINX balancing over 2 servers using ip_hash. This servers runs Websphere > and in every Websphere runs several apps. 
> > Servers separately runs: > > wasint-1.domain.com:9080/WebApp > wasint-2 domain.com:9080/WebApp > > My configuration: > > upstream webint { > ip_hash; > server wasint-1.domain.com:9080; > server wasint-2.domain.com:9080; > } > > server { > > listen 80; > server_name web.domain.com; > > > location / { > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_pass http://webint/WebApp; > } > > } > > All seems to be ok, but when I type in the web browser http://web.domain.com > doesn't work. What does "doesn't work" mean? What response do you get? What response did you want to get instead? What response do you get if you do curl -H Host:web.domain.com -i http://wasint-1.domain.com:9080/WebApp from the nginx server? Does that match one of the previous two answers? f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 26 23:45:58 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 00:45:58 +0100 Subject: Nginx Location Block In-Reply-To: <5562B6BA.8020502@aspedia.net> References: <5562B6BA.8020502@aspedia.net> Message-ID: <20150526234558.GG2957@daoine.org> On Mon, May 25, 2015 at 03:44:26PM +1000, Julian De Marchi wrote: Hi there, > Briefly, my setup is using an Nginx frontend server to do SSL offloading > then pass requests to my backend Nginx servers which then process the > request via fastCGI. > > My issue is when I try to access URIs like /cms/index.php?blah, the > frontend Nginx gives 404. Access with /cms/ and Nginx passes the request > to the backend. > > Here is my frontend location block: Is this the only location{} block in the only server{} block in your frontend nginx.conf file? Are there any "include" directives that might add others? Are there any rewrite-module directives outside this location{}?
> What I don't understand is this, why does Ngnix pass the URI /cms/ to my > backend fine, but add /cms/index.php to the end and it does not pass to > the backend? How do you know that it does not pass it to the backend? Do you watch a tcpdump trace, or do you watch the backend access log, or something like that? f -- Francis Daly francis at daoine.org From juliand at aspedia.net Tue May 26 23:58:42 2015 From: juliand at aspedia.net (Julian De Marchi) Date: Wed, 27 May 2015 09:58:42 +1000 Subject: Nginx Location Block In-Reply-To: <20150526234558.GG2957@daoine.org> References: <5562B6BA.8020502@aspedia.net> <20150526234558.GG2957@daoine.org> Message-ID: <556508B2.2090301@aspedia.net> On 27/05/15 09:45, Francis Daly wrote: > On Mon, May 25, 2015 at 03:44:26PM +1000, Julian De Marchi wrote: > > Hi there, > >> Briefly, my setup is using an Nginx frontend server to do SSL offloading >> then pass requests to my backend Nginx servers which then process the >> request via fastCGI. >> >> My issue is when I try to access URIs like /cms/index.php?blah, the >> frontend Nginx gives 404. Access with /cms/ and Nginx passes the request >> to the backend. >> >> Here is my frontend location block: > > Is this the only location{} block in the only server{} block in your > frontend nginx.conf file? Are there any "include" directives that > might add others? Are there any rewrite-module directives outside this > location{}? For this virtual host, yes it is. My rewriting logic is performed in the backend. There are no include directives. >> What I don't understand is this, why does Ngnix pass the URI /cms/ to my >> backend fine, but add /cms/index.php to the end and it does not pass to >> the backend? > > How do you know that it does not pass it to the backend? > > Do you watch a tcpdump trace, or do you watch the backend access log, > or something like that? I'm watching the backend access log for the virtual host and never see it come in. 
When a request reaches the backend I also insert headers so I can tell which backend server handled the request. --julian From francis at daoine.org Wed May 27 00:15:08 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 01:15:08 +0100 Subject: Nginx Location Block In-Reply-To: <556508B2.2090301@aspedia.net> References: <5562B6BA.8020502@aspedia.net> <20150526234558.GG2957@daoine.org> <556508B2.2090301@aspedia.net> Message-ID: <20150527001508.GH2957@daoine.org> On Wed, May 27, 2015 at 09:58:42AM +1000, Julian De Marchi wrote: > On 27/05/15 09:45, Francis Daly wrote: > > On Mon, May 25, 2015 at 03:44:26PM +1000, Julian De Marchi wrote: Hi there, > >> My issue is when I try to access URIs like /cms/index.php?blah, the > >> frontend Nginx gives 404. Access with /cms/ and Nginx passes the request > >> to the backend. > > Is this the only location{} block in the only server{} block in your > > frontend nginx.conf file? Are there any "include" directives that > > might add others? Are there any rewrite-module directives outside this > > location{}? > > For this virtual host, yes it is. My rewriting logic is performed in the > backend. There are no include directives. I am unable to reproduce what you report. The configuration you have shown, plus some assumed configuration that you have not shown, should not do what you report it does do. Can you try using a separate access_log in this frontend server{} block, and confirming that the request is handled by this server{} and not by any other one? > > How do you know that it does not pass it to the backend? > I'm watching the backend access log for the virtual host and never see it come in. When a request reaches the backend I also insert headers so I can tell which backend server handled the request. Does the backend configuration include anything that would log to a different access log, or not log at all?
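The separate-access-log check can be sketched like this; the log path is made up, and the upstream name "webpool" and server name are taken from Julian's earlier snippet (SSL directives omitted for brevity):

```nginx
# Debugging sketch: give the frontend server{} its own access log,
# so it is unambiguous which server{} handled /cms/index.php.
server {
    listen 80;
    server_name example.com;
    access_log /var/log/nginx/frontend-debug.access.log;
    location / {
        proxy_pass http://webpool;  # upstream name from the snippet above
    }
}
```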
Or can you (temporarily, or on a test system) replace the backend with one that just does "return 200 here", and confirm that you do not get that response when you make the second request? Is there any form of caching going on? Are you testing using "curl" or a bigger browser? f -- Francis Daly francis at daoine.org From juliand at aspedia.net Wed May 27 00:42:49 2015 From: juliand at aspedia.net (Julian De Marchi) Date: Wed, 27 May 2015 10:42:49 +1000 Subject: Nginx Location Block In-Reply-To: <20150527001508.GH2957@daoine.org> References: <5562B6BA.8020502@aspedia.net> <20150526234558.GG2957@daoine.org> <556508B2.2090301@aspedia.net> <20150527001508.GH2957@daoine.org> Message-ID: <55651309.6060004@aspedia.net> On 27/05/15 10:15, Francis Daly wrote: > Does the backend configuration include anything that would log to a > different access log, or not log at all? Apparently so... When I access via the frontend my error log for the backend looks like below. Which is quite normal. [haweb-05.vpn] 10:31:58 [err] [local7] [nginx] 2015/05/27 10:31:58 [error] 5060#0: *135068 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 10.106.1.19, server: example.com, request: "GET /cms/index.php HTTP/1.0", upstream: "fastcgi://unix:/var/run/php53-fpm.sock:", host: "example.com" However when I modify my hosts file to talk directly to one of my backend servers I get the 404. I'll figure out the logs later, but now I have the answers I'm after and the culprit of my issue. location @rewrite { # Some modules enforce no slash (/) at the end of the URL # Else this rewrite block wouldnt be needed (GlobalRedirect) rewrite ^/(.*)$ /index.php?q=$1; } Thanks for your help. --julian From reallfqq-nginx at yahoo.fr Wed May 27 07:51:40 2015 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Wed, 27 May 2015 09:51:40 +0200 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: <20150526171137.GV11860@mdounin.ru> References: <20150526171137.GV11860@mdounin.ru> Message-ID: The keyword here is 'dynamic'. I even modified the service configuration to depend on 'named', but the thing is that being dynamic, the network and name resolution dependency might be fulfilled while the actual service is not ready yet. Those dependencies might (dis)appear following this 'hot-plug' behavior and services should adapt to this. 'auto' (synchronous) differs from 'hot-plug' (asynchronous). Is there really nothing you can do on your side about it? It is a little tiresome to see both parts throwing the ball back. Thanks, --- *B. R.* On Tue, May 26, 2015 at 7:11 PM, Maxim Dounin wrote: > Hello! > > On Tue, May 26, 2015 at 05:53:00PM +0200, B.R. wrote: > > > Hello, > > > > Facing issues on Debian with official nginx packages, I filled a bug up > on > > the Debian tracker which ended up as being closed since Debian refers to > > the faulty service(s), nginx being one. > > > > Bug #785253 > > > > I could not fill a bug on http://trac.nginx.org/nginx/ since OAuth 2.0 > > authentication through Google went away (already adressed in a separated > > thread). > > > > Could someone here confirms nginx (and not the OS) is to be adressed? > > Looking though the bug in question I don't see anything to fix in > nginx. As far as I understand the problem, it is as follows: > > 1. you are trying to start nginx without access to DNS; > 2. your nginx configuration contains DNS names and therefore > requires access to DNS for nginx to start, as nginx resolves > DNS names on startup; > > You should either configure your OS to ensure that you start nginx > after DNS is available or change your nginx configuration to avoid > DNS names.
> > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From thresh at nginx.com Wed May 27 10:53:55 2015 From: thresh at nginx.com (Konstantin Pavlov) Date: Wed, 27 May 2015 13:53:55 +0300 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: References: <20150526171137.GV11860@mdounin.ru> Message-ID: <5565A243.1070600@nginx.com> On 27/05/2015 10:51, B.R. wrote: > The keyword here is 'dynamic'. > > I even modified the service configuration to depend on 'named', but the > thing is that being dynamic, the network and name resolution dependency > might be fulfilled while the actual service is not ready yet. Those > dependencies might (dis)appear following this 'hot-plug' behavior and > services should adapt to this. > 'auto' (synchronous) differs from 'hot-plug' (asynchronous). > > Is there really nothing you can do on your side about it? > It is a little tiresome to see both parts throwing the ball back. JFYI, systemd works around that issue by introducing a pseudo-service that essentially sleeps for some amount of time until network is (hopefully) up, see http://www.freedesktop.org/wiki/Software/systemd/NetworkTarget/ . -- Konstantin Pavlov From nginx-forum at nginx.us Wed May 27 11:27:25 2015 From: nginx-forum at nginx.us (sampy) Date: Wed, 27 May 2015 07:27:25 -0400 Subject: Nginx and Websphere In-Reply-To: <20150526233933.GF2957@daoine.org> References: <20150526233933.GF2957@daoine.org> Message-ID: <3debb78d282f6de9208f8828c5d09d62.NginxMailingListEnglish@forum.nginx.org> Hi Francis, What does "doesn't work" mean? web browser don't show the web and appears the error "webint don't exists in DNS", but query arrives to the balanced server What response do you get?
"webint don't exists in DNS" but as I know "webint" is only a variable or a name for the upstream What response did you want to get instead? Obviously the web What response do you get if you do curl -H Host:web.domain.com -i http://wasint-1.domain.com:9080/WebApp from the nginx server? HTTP/1.1 302 Found X-Powered-By: Servlet/3.0 Location: http://web.domain.com:9080/WebApp/ Content-Language: es-ES Content-Length: 0 Does that match one of the previous two answers? Yes, but don't. Let me explain. It appears that the proxy_pass works, because adds the port and /WebApp, but don't show the web that is served in the host. Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259162#msg-259162 From richard at kearsley.me Wed May 27 12:22:56 2015 From: richard at kearsley.me (Richard Kearsley) Date: Wed, 27 May 2015 13:22:56 +0100 Subject: unknown directive "thread_pool" Message-ID: <5565B720.2060507@kearsley.me> Hi First time trying aio threads on linux, and I am getting this error
> It is a little tiresome to see both parts throwing the ball back. If nginx sees host names in particular places in the config file, it currently chooses to resolve them at start time. I suspect it will be slow for you to get a patch that changes that behaviour accepted into nginx. You can configure your system resolver to avoid needing DNS, by putting the host names into /etc/hosts. You can configure your nginx to avoid resolving the host names, by one of: * not using host names, instead using IP addresses * using host names, and including an upstream{} which includes the IP address * including a variable in the directive that includes the host name, so nginx will not try to resolve it at start time (which is pretty much the previous answer you got, but with a few specific ways that you can implement it). Good luck with it, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed May 27 13:30:53 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 May 2015 16:30:53 +0300 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: References: <20150526171137.GV11860@mdounin.ru> Message-ID: <20150527133052.GC80666@mdounin.ru> Hello! On Wed, May 27, 2015 at 09:51:40AM +0200, B.R. wrote: > The keyword here is 'dynamic'. > > I even modified the service configuration to depend on 'named', but the > thing is that being dynamic, the network and name resolution dependency > might be fulfilled while the actual service is not ready yet. Those > dependencies might (dis)appear following this 'hot-plug' behavior and > services should adapt to this. > 'auto' (synchronous) differs from 'hot-plug' (asynchronous). > > Is there really nothing you can do on your side about it? > It is a little tiresome to see both parts throwing the ball back. As I already tried to explain, that's not about "both parts", but rather about configuration you wrote for both parts. It's up to you to change configuration to something consistent. 
You have to either change your OS configuration (to ensure that appropriate names are resolvable on nginx start), or change your nginx configuration (to ensure it won't try to resolve names). -- Maxim Dounin http://nginx.org/ From francis at daoine.org Wed May 27 13:33:04 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 14:33:04 +0100 Subject: Nginx and Websphere In-Reply-To: <3debb78d282f6de9208f8828c5d09d62.NginxMailingListEnglish@forum.nginx.org> References: <20150526233933.GF2957@daoine.org> <3debb78d282f6de9208f8828c5d09d62.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150527133304.GK2957@daoine.org> On Wed, May 27, 2015 at 07:27:25AM -0400, sampy wrote: Hi there, > What does "doesn't work" mean? > > web browser don't show the web and appears the error "webint don't exists > in DNS", but query arrives to the balanced server You see "webint", but you include "proxy_set_header Host $host;" in your config? That is unexpected to me. Could you show the response to curl -i http://web.domain.com/ from your client machine? Perhaps there is more than one redirection happening. > What response did you want to get instead? > > Oviously the web What http response? A 301 redirect to a different url; or a 200 with some specific content? (Assume that I don't know anything about WebSphere.) > What response do you get if you do > > curl -H Host:web.domain.com -i http://wasint-1.domain.com:9080/WebApp > > from the nginx server? > > HTTP/1.1 302 Found > > X-Powered-By: Servlet/3.0 > Location: http://web.domain.com:9080/WebApp/ > Content-Language: es-ES > Content-Length: 0 Ok - after you show the response from the new curl command above -- the one that you make of your nginx server directly -- it might become clear whether that Location: header is being modified, or what it should be modified to be. http://nginx.org/r/proxy_redirect may be useful. (Or just removing the "proxy_set_header Host $host;".) 
> Does that match one of the previous two answers? > > Yes, but don't. Let me explain. It appears that the proxy_pass works, > because adds the port and /WebApp, but don't show the web that is served in > the host. One request gets one response. Take the requests one at a time, and the path to the desired end state may become clear. (For example: I suspect that you may want to change the proxy_pass to be just to "http://webint", and you may want an extra "location=/{return 301 /WebApp/;}", but the response from the first few requests will probably show whether or not that is useful.) Cheers, f -- Francis Daly francis at daoine.org From ahutchings at nginx.com Wed May 27 13:34:20 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 27 May 2015 14:34:20 +0100 Subject: unknown directive "thread_pool" In-Reply-To: <5565B720.2060507@kearsley.me> References: <5565B720.2060507@kearsley.me> Message-ID: Hi Richard, Do you have an Ubuntu package for Nginx installed also (usually installed in /usr/sbin)? Depending on how you are starting it the wrong executable may be being used. 
Kind Regards Andrew > On 27 May 2015, at 13:22, Richard Kearsley wrote: > > Hi > First time trying aio threads on linux, and I am getting this error > > [emerg] 19909#0: unknown directive "thread_pool" in /usr/local/nginx/conf/nginx.conf:7 > > Line7 reads: > thread_pool testpool threads=64 max_queue=65536; > > Everything indicates it was built --with-threads, so I'm not sure where to go from here > > nginx -V: > /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.9.1 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) configure arguments: --with-debug --with-file-aio --with-threads > > from configure output: > Configuration summary > + using threads > > Any help appreciated > Thanks > Richard > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From nginx-forum at nginx.us Wed May 27 13:53:26 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 27 May 2015 09:53:26 -0400 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: References: Message-ID: B.R. Wrote: ------------------------------------------------------- > The keyword here is 'dynamic'. > > I even modified the service configuration to depend on 'named', but > the > thing is that being dynamic, the network and name resolution > dependency Why not simplify things, set nginx start to manual, create a service that runs a script, inside that script attempt to resolve some dns name, when it fails wait a while and loop back, when it succeeds start nginx and exit script. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259137,259175#msg-259175 From skaurus at gmail.com Wed May 27 14:08:52 2015 From: skaurus at gmail.com (=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?=) Date: Wed, 27 May 2015 17:08:52 +0300 Subject: Fwd: Measuring request processing time In-Reply-To: References: Message-ID: Hi! 
Is there a way to measure the time Nginx takes to process a request? Not $request_time, but rather something like "time between request was fully read and request is ready to pass it to the backend". I need it to evaluate performance of the geoip2 module: https://github.com/leev/ngx_http_geoip2_module I need this because I've measured the speed of the official MaxMind Perl modules for the legacy and new versions of their databases and found that the lib for the new version is hundreds of times slower than the legacy one. (yes, I used the XS version) Now, I will be using the new format anyway - because MaxMind provides the legacy format only in its free databases, and the free databases have far too poor accuracy. But I would like to assess the consequences. Maybe $request_time - $upstream_response_time will fit? Best regards, Dmitriy Shalashov From mdounin at mdounin.ru Wed May 27 14:53:47 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 May 2015 17:53:47 +0300 Subject: Fwd: Measuring request processing time In-Reply-To: References: Message-ID: <20150527145346.GD80666@mdounin.ru> Hello! On Wed, May 27, 2015 at 05:08:52PM +0300, Dmitriy Shalashov wrote: > Hi! > > Is there a way to measure the time Nginx takes to process a request? Not > $request_time, but rather something like "time between request was fully > read and request is ready to pass it to the backend". > I need it to evaluate performance of the geoip2 module: > https://github.com/leev/ngx_http_geoip2_module > > I need this because I've measured the speed of the official MaxMind Perl modules > for the legacy and new versions of their databases and found that the lib for the new > version is hundreds of times slower than the legacy one. (yes, I used the XS version) > Now, I will be using the new format anyway - because MaxMind provides the legacy > format only in its free databases, and the free databases have far too poor accuracy. > But I would like to assess the consequences. > > Maybe $request_time - $upstream_response_time will fit?
I don't think that resolution of nginx time-related variables will be enough to measure geoip lookup times. If you want to evaluate performance, I would rather suggest to write some simple configs like: location = /geoip1 { return 200 $geoip_country_code; } location = /geoip2 { return 200 $geoip2_data_country_code; } location = /static { return 200 XX; } and to try benchmarking them with something like wrk. -- Maxim Dounin http://nginx.org/ From richard at kearsley.me Wed May 27 14:57:23 2015 From: richard at kearsley.me (Richard Kearsley) Date: Wed, 27 May 2015 15:57:23 +0100 Subject: unknown directive "thread_pool" In-Reply-To: References: <5565B720.2060507@kearsley.me> Message-ID: <5565DB53.8080204@kearsley.me> Thanks Andrew I figured this out but it was not a duplicate binary It was because I was issuing -HUP to reload nginx rather than proper start/stop :-[ Cheers Richard On 27/05/15 14:34, Andrew Hutchings wrote: > Hi Richard, > > Do you have an Ubuntu package for Nginx installed also (usually installed in /usr/sbin)? Depending on how you are starting it the wrong executable may be being used. 
> > Kind Regards > Andrew > >> On 27 May 2015, at 13:22, Richard Kearsley wrote: >> >> Hi >> First time trying aio threads on linux, and I am getting this error >> >> [emerg] 19909#0: unknown directive "thread_pool" in /usr/local/nginx/conf/nginx.conf:7 >> >> Line7 reads: >> thread_pool testpool threads=64 max_queue=65536; >> >> Everything indicates it was built --with-threads, so I'm not sure where to go from here >> >> nginx -V: >> /usr/local/nginx/sbin/nginx -V >> nginx version: nginx/1.9.1 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) configure arguments: --with-debug --with-file-aio --with-threads >> >> from configure output: >> Configuration summary >> + using threads >> >> Any help appreciated >> Thanks >> Richard >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> From ecronik at gmail.com Wed May 27 15:05:04 2015 From: ecronik at gmail.com (Tim Beyer) Date: Wed, 27 May 2015 17:05:04 +0200 Subject: NGINX rtmp module multiple push and grab from another app? Message-ID: Hey, guys! I have problems with the rtmp module. There is an input application, that pushes "xyz" to the "2nd" application, that exec_pushes with ffmpeg to a "3rd" application. From "3rd" it gets pushed to 2 rtmp servers and to "rtmp://localhost/4th" - The problem is, that I can't grab "rtmp://IP/4th/xyz" for some reason, but it's OK when I try from "3rd". Any ideas? :/ Best regards, Tim -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Wed May 27 16:01:38 2015 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Wed, 27 May 2015 18:01:38 +0200 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: References: Message-ID: Thanks to all for your input. @Konstantin 'workaround' has a very specific meaning there is a bug and a way to avoid it. I want to address the bug where it lies, the latter being unclear still. 
systemd probably has benefits, and some drawbacks which are important to me: I am sticking to sysvinit. @Francis Thanks for your input. I know that not using names means name resolution won't be needed. However, cutting out features merely works around bugs, which means there is one. DNS was invented because of the many difficulties involved when you use IP addresses: using names in a configuration, especially for an external resolver over which I do not have control, is the way to go. I do not want to make breakable configurations simply because my OS and one service do not understand each other. I made all that fuss and I am spending my time to identify a bug and address it. FYI, using the old 'auto' (synchronous) mode on the network service is also a workaround, which seems more reliable to me. I do not want to stick with this since the OS default is to use 'hotplug'. That is a workaround... I want to chase the bug. (Am I repeating myself? ;o) ) @Maxim I understand from your words that using hotplug interfaces is currently not supported by nginx (as I observed). Now there are (at least) 2 ways of seeing it: 1) Considering the asynchronous nature of the new default network configuration of Debian, the services should adapt to handle cases where the network is 'not really up' or some features are missing (no IP address bound, name resolution not working) --> Debian's stance, making the nginx service faulty 2) The network should be up when (and only when) it is advertised so, making service dependencies on system facilities reliable and safe --> the Debian OS is faulty Removing features because 'it does not work' is in no way a solution in my eyes. There might be huge divergence in how to approach things, but if Debian's claim that services should properly support this behavior is right, and you disagree with the way Debian has taken or do not want to change your way, you shall then declare yourself as not supporting the Debian distro.
@itpp2012 I love your way of 'simplifying' things. I probably differ on the definition, since to me 'simpler' converges towards 'standard' and/or 'default'. Making scripts is one of the multiple workarounds, but that is definitely not the solution. Will you make everyone using nginx on Debian using that trick, as soon as they need DNS on a default 'hotplug' interface with sysvinit? --- *B. R.* On Wed, May 27, 2015 at 3:53 PM, itpp2012 wrote: > B.R. Wrote: > ------------------------------------------------------- > > The keyword here is 'dynamic'. > > > > I even modified the service configuration to depend on 'named', but > > the > > thing is that being dynamic, the network and name resolution > > dependency > > Why not simplify things, set nginx start to manual, create a service that > runs a script, inside that script attempt to resolve some dns name, when it > fails wait a while and loop back, when it succeeds start nginx and exit > script. > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259137,259175#msg-259175 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From skaurus at gmail.com Wed May 27 16:28:56 2015 From: skaurus at gmail.com (=?UTF-8?B?0JTQvNC40YLRgNC40Lkg0KjQsNC70LDRiNC+0LI=?=) Date: Wed, 27 May 2015 19:28:56 +0300 Subject: Fwd: Measuring request processing time In-Reply-To: <20150527145346.GD80666@mdounin.ru> References: <20150527145346.GD80666@mdounin.ru> Message-ID: Thanks! That is a good idea. So Nginx will do some work to fill these variables only in their corresponding locations? Best regards, Dmitriy Shalashov 2015-05-27 17:53 GMT+03:00 Maxim Dounin : > Hello! > > On Wed, May 27, 2015 at 05:08:52PM +0300, ??????? ??????? wrote: > > > Hi! > > > > Is there a way to measure time Nginx takes to process request? 
Not > > $request_time, but rather something like "time between request was fully > > read and request is ready to pass it to the backend". > > I need it to evaluate perfomance of the geoip2 module: > > https://github.com/leev/ngx_http_geoip2_module > > > > I need this because I've measured speed of official MaxMind Perl modules > > for legacy and new versions of their databases and found that lib for new > > version is hundreds times slower than legacy. (yes, I've used XS version) > > Now, I will be using new format anyway - because MaxMind provide only > free > > legacy databases, and free databases have way too bad accuracy. > > But I would like to assess the consequences. > > > > Maybe $request_time - $upstream_response_time will fit? > > I don't think that resolution of nginx time-related variables will > be enough to measure geoip lookup times. If you want to evaluate > performance, I would rather suggest to write some simple configs > like: > > location = /geoip1 { > return 200 $geoip_country_code; > } > > location = /geoip2 { > return 200 $geoip2_data_country_code; > } > > location = /static { > return 200 XX; > } > > and to try benchmarking them with something like wrk. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Wed May 27 16:40:26 2015 From: nginx-forum at nginx.us (itpp2012) Date: Wed, 27 May 2015 12:40:26 -0400 Subject: nginx on Debian: dynamic network interfaces In-Reply-To: References: Message-ID: B.R. Wrote: ------------------------------------------------------- > Will you make everyone using nginx on Debian using > that > trick, as soon as they need DNS on a default 'hotplug' interface with > sysvinit? No I'd make everyone use IP addresses with the EBLB I've introduced a while ago with sources. 
For a fast acting webservice DNS is outdated, outperformed and is only useful for clients. If it was up to me I'd rip dns out completely from nginx. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259137,259190#msg-259190 From mdounin at mdounin.ru Wed May 27 18:09:58 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 27 May 2015 21:09:58 +0300 Subject: Fwd: Measuring request processing time In-Reply-To: References: <20150527145346.GD80666@mdounin.ru> Message-ID: <20150527180958.GI80666@mdounin.ru> Hello! On Wed, May 27, 2015 at 07:28:56PM +0300, ??????? ??????? wrote: > Thanks! That is a good idea. > > So Nginx will do some work to fill these variables only in their > corresponding locations? Yes, both standard geoip module and 3rd party geoip2 module you've linked only do database lookups if you try to use corresponding variables. -- Maxim Dounin http://nginx.org/ From ahutchings at nginx.com Wed May 27 19:26:27 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 27 May 2015 20:26:27 +0100 Subject: unknown directive "thread_pool" In-Reply-To: <5565DB53.8080204@kearsley.me> References: <5565B720.2060507@kearsley.me> <5565DB53.8080204@kearsley.me> Message-ID: <6C4D3983-F9E9-47AF-B9A0-9576C76BAB1F@nginx.com> Ah! That was my next guess. I?m glad you sorted it. If you have this problem again there is a way to do a rolling upgrade without a start/stop, see this link for more info: http://nginx.org/en/docs/control.html#upgrade Kind Regards Andrew > On 27 May 2015, at 15:57, Richard Kearsley wrote: > > Thanks Andrew > I figured this out but it was not a duplicate binary > It was because I was issuing -HUP to reload nginx rather than proper start/stop :-[ > > Cheers > Richard > > On 27/05/15 14:34, Andrew Hutchings wrote: >> Hi Richard, >> >> Do you have an Ubuntu package for Nginx installed also (usually installed in /usr/sbin)? Depending on how you are starting it the wrong executable may be being used. 
>> >> Kind Regards >> Andrew >> >>> On 27 May 2015, at 13:22, Richard Kearsley wrote: >>> >>> Hi >>> First time trying aio threads on linux, and I am getting this error >>> >>> [emerg] 19909#0: unknown directive "thread_pool" in /usr/local/nginx/conf/nginx.conf:7 >>> >>> Line7 reads: >>> thread_pool testpool threads=64 max_queue=65536; >>> >>> Everything indicates it was built --with-threads, so I'm not sure where to go from here >>> >>> nginx -V: >>> /usr/local/nginx/sbin/nginx -V >>> nginx version: nginx/1.9.1 built by gcc 4.8.2 (Ubuntu 4.8.2-19ubuntu1) configure arguments: --with-debug --with-file-aio --with-threads >>> >>> from configure output: >>> Configuration summary >>> + using threads >>> >>> Any help appreciated >>> Thanks >>> Richard >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From francis at daoine.org Wed May 27 21:55:21 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 22:55:21 +0100 Subject: Static files bad loading time In-Reply-To: <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> References: <20150430171511.GF29618@daoine.org> <35f04298fa40033220d845dbe1604457.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150527215521.GM2957@daoine.org> On Thu, May 07, 2015 at 01:56:02PM -0400, grigory wrote: Hi there, > > Can you tell from nginx logs whether the slowness is due to > > slow-read-from-disk, or slow-write-to-client, or something else? > > Could you please tell me how to check this out? > My nginx logs do not contain this sort of information. I'm actually not sure how to go about that. Possibly there will be details in the debug log? 
But you do not want to run the debug log on a busy system that only sporadically shows the problem. Possibly a lighter way of trying to identify a pattern is to include $request_time in your normal access_log. Then when you see the slowness, you can identify the request in the logs, and see if there is any pattern that way -- does it always and only happen when there are more than 100 other concurrent requests; or at the start of a minute when something else on the system is busy starting; or something that is common to these requests and not to others. (Or maybe the common feature is not something that nginx can see.) > > If you make the request from the machine itself, so network issues should > > be minor, does it still show sometimes being slow? > > When I make request from machine itself, the image loads pretty fast. If that remains true when you make the request a hundred times, that suggests that the problem is outside of nginx's control, in the system networking or (more likely) the network outside the nginx server. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed May 27 21:55:23 2015 From: nginx-forum at nginx.us (birimblongas) Date: Wed, 27 May 2015 17:55:23 -0400 Subject: sock error Message-ID: <33d149c0aa3a8ed914bea4a6bf5e9e43.NginxMailingListEnglish@forum.nginx.org> Hi, I have a Rails app using unicorn + nginx. Last month my app started to get really slow and give me 502 errors. The Unicorn log doesn't show anything, and neither does my Rails app log. But looking at the nginx error log, I get numerous: [error] 1837#0: *263157 connect() to unix:/tmp/.app.sock failed (11: Resource temporarily unavailable) while connecting to upstream, What could it be? What causes this? If I change from the socket to a TCP connection, it doesn't give this error, but it times out. Thanks for any help. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259199,259199#msg-259199 From francis at daoine.org Wed May 27 22:03:21 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 23:03:21 +0100 Subject: can somebody help me to rewrite this? In-Reply-To: <8dde479d3aa05939b20b352a7babab1d.NginxMailingListEnglish@forum.nginx.org> References: <8dde479d3aa05939b20b352a7babab1d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150527220321.GN2957@daoine.org> On Sat, May 23, 2015 at 03:17:55PM -0400, escavern wrote: Hi there, > I need to rewrite from: > www.mywebsite.com/showthread.php?t=123456 > > To > > www.mywebsite.com/forum/showthread.php?t=123456 If the request is "/showthread.php", then redirect the client to the same url but with "/forum" prepended -- it looks like that is what you want? location = /showthread.php { return 301 /forum$request_uri; } Then configure your nginx-php interaction according to the application documentation for it to be in a "subdirectory". Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 27 22:15:43 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 23:15:43 +0100 Subject: encoded url In-Reply-To: References: Message-ID: <20150527221543.GO2957@daoine.org> On Sat, May 23, 2015 at 01:56:32PM -0400, pierob83 wrote: Hi there, > I've nginx 1.2.5 and I've an issue about encoded URLs. I've not tested this on 1.2.5; but the concept should remain the same. > Is there a way to make nginx accepts URL like the following: > > http://www.mywebsite.com/image.php%3Fid%3D12345 > > as equivalent of the following? > > http://www.mywebsite.com/image.php?id=12345 nginx won't directly accept the two as equivalent, because they are not, in important ways. There are two approaches you can take. 
* redirect the client to the correct url, and let them request that * proxy_pass the correct url back into the same nginx instance I do not know the full correct nginx syntax; but if you are happy that there will not be other encoded characters in the request that must remain encoded for it to be a valid url, you could try matching on the encoded ? in the request, and using the unencoded version in the next step. That is: location ~ \? { return 301 $uri; } or location ~ \? { proxy_pass http://[your bit here]$uri; proxy_set_header Host $host; } where [your bit here] is an ip:port that corresponds to this server's "listen" directive, and the proxy_set_header bit is to ensure that this server's server_name matches (and the right thing is used in any response headers). Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 27 22:21:22 2015 From: francis at daoine.org (Francis Daly) Date: Wed, 27 May 2015 23:21:22 +0100 Subject: sock error In-Reply-To: <33d149c0aa3a8ed914bea4a6bf5e9e43.NginxMailingListEnglish@forum.nginx.org> References: <33d149c0aa3a8ed914bea4a6bf5e9e43.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150527222122.GP2957@daoine.org> On Wed, May 27, 2015 at 05:55:23PM -0400, birimblongas wrote: Hi there, > But looking at nginx error log, i get numerous: > > [error] 1837#0: *263157 connect() to unix:/tmp/.app.sock failed (11: > Resource temporarily unavailable) while connecting to upstream, > > What could it be? What causes this? nginx is configured to talk to something else - the rails/unicorn server - using proxy_pass or something similar. That "something else" is not listening in the place where nginx is configured to look for it. See where your unicorn server is listening. Point your nginx at that place. 
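[Editorial note] The advice above -- point nginx at wherever Unicorn is actually listening -- can be sketched as a matching pair of settings. The socket path is the one from the error message; everything else (server name, upstream name, Unicorn options) is illustrative:

```nginx
# Assumed unicorn.rb setting: listen "/tmp/.app.sock", :backlog => 64

upstream app {
    # must match the path Unicorn actually listens on
    server unix:/tmp/.app.sock fail_timeout=0;
}

server {
    listen 80;
    server_name example.com;

    location / {
        proxy_set_header Host $host;
        proxy_pass http://app;
    }
}
```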
f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Wed May 27 23:14:02 2015 From: nginx-forum at nginx.us (birimblongas) Date: Wed, 27 May 2015 19:14:02 -0400 Subject: sock error In-Reply-To: <20150527222122.GP2957@daoine.org> References: <20150527222122.GP2957@daoine.org> Message-ID: Thanks, but both are pointing to the same place. I don't get the error all the time, just some times (more than what would be acceptable, but not all the time) Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259199,259204#msg-259204 From nobody at nospam.hostisimo.com Thu May 28 04:49:27 2015 From: nobody at nospam.hostisimo.com (Mike Wright) Date: Wed, 27 May 2015 21:49:27 -0700 Subject: noob needs help with alias locations and php Message-ID: <55669E57.6000405@nospam.hostisimo.com> Hi all, Been going in circles for the third day now. The server handles files and php scripts as long as they are at or beneath the document root. Here are the two problems I haven't been able to figure out: Directory index of ... is forbidden Can't get to docs not beneath the document_root. Example of places I want to go: non-root targets: as: /home/mike/Movies/index.html /movies/index.html 200 /home/mike/Movies/index.php /movies/index.php "File not found." /home/mike/Movies/1/index.html /movies/1/index.html 404 /home/mike/Movies/1/index.php /movies/1/index.php "File not found." File not found : "Primary script unknown"; 404 : the log shows the correct GET path; A lot of my experiments end up with 301 : endless redirects; http { index index.html index.php; ... 
server { listen 80; server_name localhost lo; root /var/www/sites/localhost/www; location / { try_files $uri $uri/ /index.html /index.php; } location /movies { alias /home/mike/Movies/; } location ~ \.php$ { root /var/www/sites/localhost/www; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } If I can get past these hurdles I'll probably (hopefully) be able to figure most of the rest of this stuff out. Tips, pointers appreciated. Helper? Thanks, Mike Wright From nginx-forum at nginx.us Thu May 28 06:33:38 2015 From: nginx-forum at nginx.us (addictofnginx) Date: Thu, 28 May 2015 02:33:38 -0400 Subject: "[emerg]: bind() to 0.0.0.0:80" and relation between 'logs' directory and 'pid' direction Message-ID: <7e85512f5ceecd72024772bc53fbf5b2.NginxMailingListEnglish@forum.nginx.org> Hello, I had compiled my Nginx using '.configure', 'make' and 'make install' commands. nginx.pid file occurs on 'logs' directory as default. If I change set 'pid' directive (e.g. pid sbin/nginx.pid) on my nginx.conf file then I cannot restart Nginx and I get following message at second restart service. [emerg]: bind() to 0.0.0.0:80 failed (98: Address already in use) And if I change the direction as 'pid logs/nginx.pid' everything is perfect! Would it be possible to change Nginx default settings? Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259208,259208#msg-259208 From nginx-forum at nginx.us Thu May 28 06:35:10 2015 From: nginx-forum at nginx.us (sampy) Date: Thu, 28 May 2015 02:35:10 -0400 Subject: Nginx and Websphere In-Reply-To: <20150527133304.GK2957@daoine.org> References: <20150527133304.GK2957@daoine.org> Message-ID: Hi, Francis Daly Wrote: ------------------------------------------------------- > On Wed, May 27, 2015 at 07:27:25AM -0400, sampy wrote: > > Hi there, > > > What does "doesn't work" mean? 
> > > > web browser don't show the web and appears the error "webint don't > exists > > in DNS", but query arrives to the balanced server > > You see "webint", but you include "proxy_set_header Host $host;" in > your config? > > That is unexpected to me. > > Could you show the response to > > curl -i http://web.domain.com/ > I just remove "proxy_set_header Host $host" And the result for curl: HTTP/1.1 302 Found Server: nginx/1.8.0 Date: Thu, 28 May 2015 06:27:41 GMT Content-Type: text/html Content-Length: 0 Connection: keep-alive X-Powered-By: Servlet/3.0 Location: http://webint:9080/WebApp/main.jsf Content-Language: es-ES Set-Cookie: JSESSIONID=0000EOXejCoK14218Xm2cCtzxdd:-1; Path=/; HttpOnly Expires: Thu, 01 Dec 1994 16:00:00 GMT Cache-Control: no-cache="set-cookie, set-cookie2" > from your client machine? Perhaps there is more than one redirection > happening. > > > What response did you want to get instead? > > > > Oviously the web > > What http response? A 301 redirect to a different url; or a 200 with > some specific content? > > (Assume that I don't know anything about WebSphere.) > > > What response do you get if you do > > > > curl -H Host:web.domain.com -i > http://wasint-1.domain.com:9080/WebApp > > > > from the nginx server? > > > > HTTP/1.1 302 Found > > > > X-Powered-By: Servlet/3.0 > > Location: http://web.domain.com:9080/WebApp/ > > Content-Language: es-ES > > Content-Length: 0 > > Ok - after you show the response from the new curl command above -- > the one that you make of your nginx server directly -- it might become > clear whether that Location: header is being modified, or what it > should > be modified to be. > > http://nginx.org/r/proxy_redirect may be useful. (Or just removing the > "proxy_set_header Host $host;".) > > > Does that match one of the previous two answers? > > > > Yes, but don't. Let me explain. 
It appears that the proxy_pass > works, > > because adds the port and /WebApp, but don't show the web that is > served in > > the host. > > One request gets one response. > > Take the requests one at a time, and the path to the desired end state > may become clear. > > (For example: I suspect that you may want to change the proxy_pass to > be > just to "http://webint", and you may want an extra "location=/{return > 301 > /WebApp/;}", but the response from the first few requests will > probably > show whether or not that is useful.) > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259209#msg-259209 From nginx-forum at nginx.us Thu May 28 07:05:49 2015 From: nginx-forum at nginx.us (addictofnginx) Date: Thu, 28 May 2015 03:05:49 -0400 Subject: "[emerg]: bind() to 0.0.0.0:80" and relation between 'logs' directory and 'pid' direction In-Reply-To: <7e85512f5ceecd72024772bc53fbf5b2.NginxMailingListEnglish@forum.nginx.org> References: <7e85512f5ceecd72024772bc53fbf5b2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <014ea160bc4f1b52fe373812fce1483a.NginxMailingListEnglish@forum.nginx.org> I've found the reason. My init script that is found on /etc/init.d/nginx. 
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259208,259211#msg-259211 From francis at daoine.org Thu May 28 07:47:49 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 28 May 2015 08:47:49 +0100 Subject: Nginx and Websphere In-Reply-To: References: <20150527133304.GK2957@daoine.org> Message-ID: <20150528074749.GQ2957@daoine.org> On Thu, May 28, 2015 at 02:35:10AM -0400, sampy wrote: > Francis Daly Wrote: Hi there, > > Could you show the response to > > > > curl -i http://web.domain.com/ > > > I just remove "proxy_set_header Host $host" > > And the result for curl: > > HTTP/1.1 302 Found > Location: http://webint:9080/WebApp/main.jsf Use "proxy_redirect" to tell nginx to convert the start of that back to what you want it to be. Possibly proxy_redirect http://webint:9080/ /; or possibly proxy_redirect http://webint:9080/WebApp/ /; depending on what your full plan is. Then repeat the "curl" and see what is different. And see if it works from your normal browser. f -- Francis Daly francis at daoine.org From francis at daoine.org Thu May 28 08:02:53 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 28 May 2015 09:02:53 +0100 Subject: noob needs help with alias locations and php In-Reply-To: <55669E57.6000405@nospam.hostisimo.com> References: <55669E57.6000405@nospam.hostisimo.com> Message-ID: <20150528080253.GR2957@daoine.org> On Wed, May 27, 2015 at 09:49:27PM -0700, Mike Wright wrote: Hi there, In general in nginx, one request is handled in one location. Only the configuration in, or inherited into, that location, matters. It you use rewrite-module directives, things are a bit more complicated. There is lots at http://nginx.org/en/docs/; it may be hard to find the one piece of information you want, but it is probably there somewhere. > The server handles files and php scripts as long as they are at or > beneath the document root. Here are the two problems I haven't been > able to figure out: > Directory index of ... 
is forbidden What one request do you make that leads to that problem? What is the one location that handles that request? > Can't get to docs not beneath the document_root. Same questions. > Example of places I want to go: > > non-root targets: as: > /home/mike/Movies/index.html /movies/index.html 200 > /home/mike/Movies/index.php /movies/index.php "File not found." > /home/mike/Movies/1/index.html /movies/1/index.html 404 > /home/mike/Movies/1/index.php /movies/1/index.php "File not found." > > File not found : "Primary script unknown"; > 404 : the log shows the correct GET path; > A lot of my experiments end up with 301 : endless redirects; > > http { > index index.html index.php; > ... > > server { > listen 80; > server_name localhost lo; > root /var/www/sites/localhost/www; > > location / { > try_files $uri $uri/ /index.html /index.php; > } > The request "/movies/index.html" will be handled in this location: > location /movies { > alias /home/mike/Movies/; > } (Usually, it is good if the number of /s at the end of "alias" and and the end of "location" are the same.) The request "/movies/index.php" will be handled in this location: > location ~ \.php$ { > root /var/www/sites/localhost/www; > include fastcgi_params; > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > } > } Enable more logging; or "tcpdump", to see what nginx sends to your fastcgi server. "/movies/index.php" is unlikely to be correctly handled here. 
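A sketch (untested, using the paths from this thread) of one common arrangement: nest the PHP handler inside the aliased location, and build SCRIPT_FILENAME from $request_filename, which already reflects the alias mapping — unlike $document_root$fastcgi_script_name, which would point under the server root:

```nginx
# Illustrative only: paths taken from the messages above.
location /movies/ {
    alias /home/mike/Movies/;

    location ~ \.php$ {
        include fastcgi_params;
        # $request_filename resolves through the alias, so
        # /movies/index.php maps to /home/mike/Movies/index.php.
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```

With the outer `location ~ \.php$` from the original config, requests like /movies/index.php never see the alias, which is consistent with the "Primary script unknown" / "File not found." results reported above.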
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Thu May 28 08:15:08 2015 From: nginx-forum at nginx.us (sampy) Date: Thu, 28 May 2015 04:15:08 -0400 Subject: Nginx and Websphere In-Reply-To: <20150528074749.GQ2957@daoine.org> References: <20150528074749.GQ2957@daoine.org> Message-ID: <5d8e37ce9daf6a82f52ca935630506ab.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Thu, May 28, 2015 at 02:35:10AM -0400, sampy wrote: > > Francis Daly Wrote: > > Hi there, > > > > Could you show the response to > > > > > > curl -i http://web.domain.com/ > > > > > I just remove "proxy_set_header Host $host" > > > > And the result for curl: > > > > HTTP/1.1 302 Found > > > Location: http://webint:9080/WebApp/main.jsf > > Use "proxy_redirect" to tell nginx to convert the start of that back > to > what you want it to be. > > Possibly > > proxy_redirect http://webint:9080/ /; > > or possibly > > proxy_redirect http://webint:9080/WebApp/ /; > > depending on what your full plan is. > > Then repeat the "curl" and see what is different. > > And see if it works from your normal browser. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx I changed both possibilities and the "curl" shows the default web of nginx. 
regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259214#msg-259214 From nginx-forum at nginx.us Thu May 28 09:49:12 2015 From: nginx-forum at nginx.us (s_n) Date: Thu, 28 May 2015 05:49:12 -0400 Subject: ip_hash in active_active nginx setup In-Reply-To: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org> References: <9d5f388672a68589789de890c5ec8d0a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, based on your information we implemented the described setup and did not encounter any stickiness problems so far. Cutting out F5 is sadly a decision that is not under my control. I agree that it would be a simpler setup with just nginx loadbalancing. Thanks all. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259046,259217#msg-259217 From francis at daoine.org Thu May 28 11:32:28 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 28 May 2015 12:32:28 +0100 Subject: Nginx and Websphere In-Reply-To: <5d8e37ce9daf6a82f52ca935630506ab.NginxMailingListEnglish@forum.nginx.org> References: <20150528074749.GQ2957@daoine.org> <5d8e37ce9daf6a82f52ca935630506ab.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150528113228.GS2957@daoine.org> On Thu, May 28, 2015 at 04:15:08AM -0400, sampy wrote: Hi there, > I changed both possibilities and the "curl" shows the default web of nginx. I'm not sure what state things are in now. Can you copy-paste your current config, plus your curl requests and responses? curl -i http://web.domain.com/ curl -i http://web.domain.com/WebApp/ Thanks, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Thu May 28 12:21:53 2015 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 28 May 2015 15:21:53 +0300 Subject: sock error In-Reply-To: <33d149c0aa3a8ed914bea4a6bf5e9e43.NginxMailingListEnglish@forum.nginx.org> References: <33d149c0aa3a8ed914bea4a6bf5e9e43.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150528122152.GB26357@mdounin.ru> Hello! 
On Wed, May 27, 2015 at 05:55:23PM -0400, birimblongas wrote: > Hi, i have a rails app using unicorn + nginx. > Last month my app started to get really slow and giving me error 502. > Unicorn log doesn't show nothing, either my rails app log. > But looking at nginx error log, i get numerous: > > [error] 1837#0: *263157 connect() to unix:/tmp/.app.sock failed (11: > Resource temporarily unavailable) while connecting to upstream, > > > What could it be? What causes this? > > If I change from socket to tcp connection, my tcp connection doesn't give > this error, but gets time out. Both the error and timeouts indicate that your backend is overloaded and can't cope with load, so it's listening socket queue overflows and it starts rejecting connections. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Thu May 28 15:50:30 2015 From: nginx-forum at nginx.us (sampy) Date: Thu, 28 May 2015 11:50:30 -0400 Subject: Nginx and Websphere In-Reply-To: <20150528113228.GS2957@daoine.org> References: <20150528113228.GS2957@daoine.org> Message-ID: <6ce1bd8b7bc011301b0ad3346a914250.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Thu, May 28, 2015 at 04:15:08AM -0400, sampy wrote: > > Hi there, > > > I changed both possibilities and the "curl" shows the default web of > nginx. > > I'm not sure what state things are in now. > > Can you copy-paste your current config, plus your curl requests and upstream webint { ip_hash; server wasint-1.carreras.sa; server wasint-2.carreras.sa:9080; } server { listen 80; server_name web.domain.com; location / { #proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; #proxy_redirect http://webint/WebApp /; proxy_redirect http://webint /; } } > responses? 
> > curl -i http://web.domain.com/ result: default nginx web > curl -i http://web.domain.com/WebApp/ result: HTTP/1.1 404 Not Found Server: nginx/1.8.0 Date: Thu, 28 May 2015 15:50:30 GMT Content-Type: text/html Content-Length: 168 Connection: keep-alive 404 Not Found

404 Not Found


nginx/1.8.0
Thanks > > Thanks, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259231#msg-259231 From kworthington at gmail.com Thu May 28 16:45:22 2015 From: kworthington at gmail.com (Kevin Worthington) Date: Thu, 28 May 2015 12:45:22 -0400 Subject: nginx-1.9.1 In-Reply-To: <20150526141714.GP11860@mdounin.ru> References: <20150526141714.GP11860@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.9.1 for Windows http://goo.gl/tWYsX0 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, May 26, 2015 at 10:17 AM, Maxim Dounin wrote: > Changes with nginx 1.9.1 26 May > 2015 > > *) Change: now SSLv3 protocol is disabled by default. > > *) Change: some long deprecated directives are not supported anymore. > > *) Feature: the "reuseport" parameter of the "listen" directive. > Thanks to Sepherosa Ziehau and Yingqi Lu. > > *) Feature: the $upstream_connect_time variable. > > *) Bugfix: in the "hash" directive on big-endian platforms. > > *) Bugfix: nginx might fail to start on some old Linux variants; the > bug > had appeared in 1.7.11. > > *) Bugfix: in IP address parsing. > Thanks to Sergey Polovko. 
> > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu May 28 18:00:00 2015 From: francis at daoine.org (Francis Daly) Date: Thu, 28 May 2015 19:00:00 +0100 Subject: Nginx and Websphere In-Reply-To: <6ce1bd8b7bc011301b0ad3346a914250.NginxMailingListEnglish@forum.nginx.org> References: <20150528113228.GS2957@daoine.org> <6ce1bd8b7bc011301b0ad3346a914250.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150528180000.GT2957@daoine.org> On Thu, May 28, 2015 at 11:50:30AM -0400, sampy wrote: > Francis Daly Wrote: Hi there, > upstream webint { > ip_hash; > server wasint-1.carreras.sa; Just to confirm: no ":9080" on that one? It probably means that a second proxy_redirect will be needed; added below. > server wasint-2.carreras.sa:9080; > } > > server { > > listen 80; > server_name web.domain.com; > > > location / { > #proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > #proxy_redirect http://webint/WebApp /; > proxy_redirect http://webint /; There is no "proxy_pass" here, so the rest of the configuration is effectively unused and the request is served from the filesystem -- which explains the responses you get. Perhaps try instead location / { proxy_pass http://webint; proxy_redirect http://webint/ /; proxy_redirect http://webint:9080/ /; } and add the other bits if they are needed. > > curl -i http://web.domain.com/ > result: default nginx web That should become "whatever the webapp returns for a / request". If it is not a redirect to /WebApp/, it can be fixed later. > > curl -i http://web.domain.com/WebApp/ > result: That should become a sensible response -- hopefully a redirect to http://web.domain.com/WebApp/main.jsf. 
If you do get that, they you can try curl -i http://web.domain.com/WebApp/main.jsf to see what happens next; or just try it in your normal browser. f -- Francis Daly francis at daoine.org From pchychi at gmail.com Thu May 28 18:34:24 2015 From: pchychi at gmail.com (Payam Chychi) Date: Thu, 28 May 2015 11:34:24 -0700 Subject: Nginx and Websphere In-Reply-To: <6ce1bd8b7bc011301b0ad3346a914250.NginxMailingListEnglish@forum.nginx.org> References: <20150528113228.GS2957@daoine.org> <6ce1bd8b7bc011301b0ad3346a914250.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2D7C515F63E648BDB405291014B6EF0F@localhost> Why redirect instead of a proxy_pass? -- Payam Chychi Network Engineer / Security Specialist On Thursday, May 28, 2015 at 8:50 AM, sampy wrote: > Francis Daly Wrote: > ------------------------------------------------------- > > On Thu, May 28, 2015 at 04:15:08AM -0400, sampy wrote: > > > > Hi there, > > > > > I changed both possibilities and the "curl" shows the default web of > > nginx. > > > > I'm not sure what state things are in now. > > > > Can you copy-paste your current config, plus your curl requests and > > upstream webint { > ip_hash; > server wasint-1.carreras.sa; > server wasint-2.carreras.sa:9080; > } > > server { > > listen 80; > server_name web.domain.com; > > > location / { > #proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > #proxy_redirect http://webint/WebApp /; > proxy_redirect http://webint /; > } > > } > > > > responses? > > > > curl -i http://web.domain.com/ > result: default nginx web > > curl -i http://web.domain.com/WebApp/ > > result: > HTTP/1.1 404 Not Found > Server: nginx/1.8.0 > Date: Thu, 28 May 2015 15:50:30 GMT > Content-Type: text/html > Content-Length: 168 > Connection: keep-alive > > > 404 Not Found > >

404 Not Found

>
nginx/1.8.0
> > > > Thanks > > > > Thanks, > > > > f > > -- > > Francis Daly francis at daoine.org > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259231#msg-259231 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Thu May 28 18:35:11 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Thu, 28 May 2015 14:35:11 -0400 Subject: Nginx Session Draining Message-ID: <3b5b2337b18478963a3d60dcfbf3ad5c.NginxMailingListEnglish@forum.nginx.org> Hi, I read about the Nginx Session Draining feature. Looks like it is also available in the non commercial version. http://nginx.com/products/session-persistence/#session draining But, it does not tell me how to configure this in the upstream block. Any help? Thanks... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259238,259238#msg-259238 From vbart at nginx.com Thu May 28 20:49:38 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 28 May 2015 23:49:38 +0300 Subject: Nginx Session Draining In-Reply-To: <3b5b2337b18478963a3d60dcfbf3ad5c.NginxMailingListEnglish@forum.nginx.org> References: <3b5b2337b18478963a3d60dcfbf3ad5c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <11652010.vuWeuVYTl4@vbart-workstation> On Thursday 28 May 2015 14:35:11 nginxsantos wrote: > Hi, > > I read about the Nginx Session Draining feature. Looks like it is also > available in the non commercial version. > > http://nginx.com/products/session-persistence/#session draining No, it's only available in NGINX Plus. > > But, it does not tell me how to configure this in the upstream block. > > Any help? > > Thanks... 
> See here for the reference: http://nginx.org/en/docs/http/ngx_http_upstream_conf_module.html#drain wbr, Valentin V. Bartenev From oyljerry at gmail.com Fri May 29 02:48:42 2015 From: oyljerry at gmail.com (Jerry OELoo) Date: Fri, 29 May 2015 10:48:42 +0800 Subject: [Nginx] How to support file upload in Nginx 1.8 Message-ID: Hi I am using Nginx 1.8 version, and Is it default support file upload, I found there is clientbodyinfileonly in Nginx, so is it official method to support file upload>? Thanks~ -- Rejoice,I Desire! From nginx-forum at nginx.us Fri May 29 06:01:00 2015 From: nginx-forum at nginx.us (sampy) Date: Fri, 29 May 2015 02:01:00 -0400 Subject: Nginx and Websphere In-Reply-To: <20150528180000.GT2957@daoine.org> References: <20150528180000.GT2957@daoine.org> Message-ID: <779edc6890bb6fc61d8d091635e09504.NginxMailingListEnglish@forum.nginx.org> Hi there! Now it's near to work! With this configuration: upstream webint { #ip_hash; server wasint-1.domain.com:9080; server wasint-2.domain.com:9080; } server { listen 80; server_name web.domain.com; location /WebApp { #proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://webint; proxy_redirect http://webint/ /; proxy_redirect http://webint:9080/ /; } } executing "curl -i http://web.domain.com/WebApp/main.js" Shows the web!! And y a normal browser typing all the url "http://web.domain.com/WebApp/main.jsf" shows the correct web!! awesome! Now I want to modify the base of this configuration to type only http://web.domain.com and redirect to the web I want. Thanks for all!!! I can feel that we are near to resolve that :-) Francis Daly Wrote: ------------------------------------------------------- > On Thu, May 28, 2015 at 11:50:30AM -0400, sampy wrote: > > Francis Daly Wrote: > > Hi there, > > > upstream webint { > > ip_hash; > > server wasint-1.carreras.sa; > > Just to confirm: no ":9080" on that one? 
It probably means that a > second > proxy_redirect will be needed; added below. > > > server wasint-2.carreras.sa:9080; > > } > > > > server { > > > > listen 80; > > server_name web.domain.com; > > > > > > location / { > > #proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For > > $proxy_add_x_forwarded_for; > > #proxy_redirect http://webint/WebApp /; > > proxy_redirect http://webint /; > > There is no "proxy_pass" here, so the rest of the configuration is > effectively unused and the request is served from the filesystem -- > which explains the responses you get. > > Perhaps try instead > > location / { > proxy_pass http://webint; > proxy_redirect http://webint/ /; > proxy_redirect http://webint:9080/ /; > } > > and add the other bits if they are needed. > > > > curl -i http://web.domain.com/ > > result: default nginx web > > That should become "whatever the webapp returns for a / request". If > it > is not a redirect to /WebApp/, it can be fixed later. > > > > curl -i http://web.domain.com/WebApp/ > > result: > > That should become a sensible response -- hopefully a redirect to > http://web.domain.com/WebApp/main.jsf. > > If you do get that, they you can try > > curl -i http://web.domain.com/WebApp/main.jsf > > to see what happens next; or just try it in your normal browser. 
> > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259253#msg-259253 From rvrv7575 at yahoo.com Fri May 29 07:14:51 2015 From: rvrv7575 at yahoo.com (Rv Rv) Date: Fri, 29 May 2015 07:14:51 +0000 (UTC) Subject: Supporting special characters in URI for location block Message-ID: <186604231.629866.1432883691904.JavaMail.yahoo@mail.yahoo.com> I have a URI with spanish characters that needs to be handled in location block specific to it e.g. I want to do?location {} However, nginx does not seem to match URI in location block ?with spanish (or any other characters including chinese). Is this functionality supported?Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Fri May 29 07:33:13 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 29 May 2015 08:33:13 +0100 Subject: Nginx and Websphere In-Reply-To: <779edc6890bb6fc61d8d091635e09504.NginxMailingListEnglish@forum.nginx.org> References: <20150528180000.GT2957@daoine.org> <779edc6890bb6fc61d8d091635e09504.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20150529073313.GU2957@daoine.org> On Fri, May 29, 2015 at 02:01:00AM -0400, sampy wrote: Hi there, > Now it's near to work! Good news. 
> With this configuration: > > upstream webint { > #ip_hash; > server wasint-1.domain.com:9080; > server wasint-2.domain.com:9080; > } > > server { > > listen 80; > server_name web.domain.com; > > location /WebApp { > #proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For > $proxy_add_x_forwarded_for; > proxy_pass http://webint; > proxy_redirect http://webint/ /; > proxy_redirect http://webint:9080/ /; > } You probably do not need "proxy_redirect http://webint/ /;", because what the upstream sends back probably always includes the :9080. But it does no extra harm to leave it there. > executing "curl -i http://web.domain.com/WebApp/main.js" > > Shows the web!! > > And y a normal browser typing all the url > "http://web.domain.com/WebApp/main.jsf" shows the correct web!! awesome! > > Now I want to modify the base of this configuration to type only > http://web.domain.com and redirect to the web I want. Add an extra location block for that request: location = / { return 301 /WebApp/; } And then issue the "curl" command for the url you want; when you get a 301 redirect, issuer a curl command for the Location: header returned. If the Location: header looks wrong -- does not point at something in web.domain.com -- then you'll have identified which request leads to the unwanted response. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at nginx.us Fri May 29 09:32:11 2015 From: nginx-forum at nginx.us (nginxsantos) Date: Fri, 29 May 2015 05:32:11 -0400 Subject: Nginx Session Draining In-Reply-To: <11652010.vuWeuVYTl4@vbart-workstation> References: <11652010.vuWeuVYTl4@vbart-workstation> Message-ID: <0c1e0b90ee68d1ab574de1cdfc14ac36.NginxMailingListEnglish@forum.nginx.org> Hi this module is a part of the commercial subscription. But, looks like the open source version also supports session draining configuration I am trying to figure out how to do that any help? 
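Since the question is how to configure this in the upstream block: the open-source build has no drain mode, and the closest upstream-block control — an approximation, not an equivalent — is marking a server `down`, which stops new requests to it but does not keep serving existing sticky sessions:

```nginx
# Illustrative only. "down" (open source) removes a server from new
# request distribution; NGINX Plus's "drain" additionally keeps
# routing already-bound sessions to it until they end.
upstream backend {
    ip_hash;
    server app1.example.com;
    server app2.example.com down;
}
```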
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259238,259263#msg-259263 From igor at sysoev.ru Fri May 29 09:49:33 2015 From: igor at sysoev.ru (Igor Sysoev) Date: Fri, 29 May 2015 12:49:33 +0300 Subject: Supporting special characters in URI for location block In-Reply-To: <186604231.629866.1432883691904.JavaMail.yahoo@mail.yahoo.com> References: <186604231.629866.1432883691904.JavaMail.yahoo@mail.yahoo.com> Message-ID: <25AFBED6-A726-4A5C-B73A-357CDFB3983F@sysoev.ru> On 29 May 2015, at 10:14, Rv Rv wrote: > I have a URI with spanish characters that needs to be handled in location block specific to it e.g. I want to do > location { > } > > However, nginx does not seem to match URI in location block with spanish (or any other characters including chinese). Is this functionality supported? > Thanks Both URI and configuration should be in UTF-8. Then location /a?o/ { } will match "GET /a%C3%B1o/..." -- Igor Sysoev http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Fri May 29 16:51:51 2015 From: nginx-forum at nginx.us (sampy) Date: Fri, 29 May 2015 12:51:51 -0400 Subject: Nginx and Websphere In-Reply-To: <20150529073313.GU2957@daoine.org> References: <20150529073313.GU2957@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > On Fri, May 29, 2015 at 02:01:00AM -0400, sampy wrote: > > Hi there, > > > Now it's near to work! > > Good news. 
> > > With this configuration: > > > > upstream webint { > > #ip_hash; > > server wasint-1.domain.com:9080; > > server wasint-2.domain.com:9080; > > } > > > > server { > > > > listen 80; > > server_name web.domain.com; > > > > location /WebApp { > > #proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For > > $proxy_add_x_forwarded_for; > > proxy_pass http://webint; > > proxy_redirect http://webint/ /; > > proxy_redirect http://webint:9080/ /; > > } > > You probably do not need "proxy_redirect http://webint/ /;", because > what the upstream sends back probably always includes the :9080. But > it > does no extra harm to leave it there. > > > executing "curl -i http://web.domain.com/WebApp/main.js" > > > > Shows the web!! > > > > And y a normal browser typing all the url > > "http://web.domain.com/WebApp/main.jsf" shows the correct web!! > awesome! > > > > Now I want to modify the base of this configuration to type only > > http://web.domain.com and redirect to the web I want. > > Add an extra location block for that request: > > location = / { return 301 /WebApp/; } > > And then issue the "curl" command for the url you want; when you get a > 301 redirect, issuer a curl command for the Location: header returned. > > If the Location: header looks wrong -- does not point at something in > web.domain.com -- then you'll have identified which request leads to > the unwanted response. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx hi there. I'm happy because with web.domain.com/WebApp works fine. 
But when I change LOCATION from: location /WebApp { upstream balancer { server server1.domain.com:9080; server server2.domain.com:9080; } location /WebApp { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://balancer/WebApp; proxy_redirect http://balancer:9080/ /; } } } to: upstream balancer { server server1.domain.com:9080; server server2.domain.com:9080; } location / { proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://balancer/WebApp; proxy_redirect http://balancer:9080/ /; } } stop working. "curl" sees the 302 redirection in both cases, but the web is not showed in the second case. I think I forget something or I have some mistake in syntax. Regards Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259267#msg-259267 From nginx-forum at nginx.us Fri May 29 16:53:12 2015 From: nginx-forum at nginx.us (sampy) Date: Fri, 29 May 2015 12:53:12 -0400 Subject: Nginx and Websphere In-Reply-To: <2D7C515F63E648BDB405291014B6EF0F@localhost> References: <2D7C515F63E648BDB405291014B6EF0F@localhost> Message-ID: <41a22b7a3352fb818c259680fdcfda30.NginxMailingListEnglish@forum.nginx.org> unclepieman Wrote: ------------------------------------------------------- > Why redirect instead of a proxy_pass? Yes, you're right. Now with proxy_pass works with some minor errors > > -- > Payam Chychi > Network Engineer / Security Specialist > > > On Thursday, May 28, 2015 at 8:50 AM, sampy wrote: > > > Francis Daly Wrote: > > ------------------------------------------------------- > > > On Thu, May 28, 2015 at 04:15:08AM -0400, sampy wrote: > > > > > > Hi there, > > > > > > > I changed both possibilities and the "curl" shows the default > web of > > > nginx. > > > > > > I'm not sure what state things are in now. 
> > > > > > Can you copy-paste your current config, plus your curl requests > and > > > > upstream webint { > > ip_hash; > > server wasint-1.carreras.sa; > > server wasint-2.carreras.sa:9080; > > } > > > > server { > > > > listen 80; > > server_name web.domain.com; > > > > > > location / { > > #proxy_set_header Host $host; > > proxy_set_header X-Real-IP $remote_addr; > > proxy_set_header X-Forwarded-For > > $proxy_add_x_forwarded_for; > > #proxy_redirect http://webint/WebApp /; > > proxy_redirect http://webint /; > > } > > > > } > > > > > > > responses? > > > > > > curl -i http://web.domain.com/ > > result: default nginx web > > > curl -i http://web.domain.com/WebApp/ > > > > result: > > HTTP/1.1 404 Not Found > > Server: nginx/1.8.0 > > Date: Thu, 28 May 2015 15:50:30 GMT > > Content-Type: text/html > > Content-Length: 168 > > Connection: keep-alive > > > > > > 404 Not Found > > > >

404 Not Found

> >
nginx/1.8.0
> > > > > > > > Thanks > > > > > > Thanks, > > > > > > f > > > -- > > > Francis Daly francis at daoine.org > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > Posted at Nginx Forum: > http://forum.nginx.org/read.php?2,259127,259231#msg-259231 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259127,259268#msg-259268 From vbart at nginx.com Fri May 29 20:24:01 2015 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 29 May 2015 23:24:01 +0300 Subject: Nginx Session Draining In-Reply-To: <0c1e0b90ee68d1ab574de1cdfc14ac36.NginxMailingListEnglish@forum.nginx.org> References: <11652010.vuWeuVYTl4@vbart-workstation> <0c1e0b90ee68d1ab574de1cdfc14ac36.NginxMailingListEnglish@forum.nginx.org> Message-ID: <7876099.oTJbMCoT3z@vbart-workstation> On Friday 29 May 2015 05:32:11 nginxsantos wrote: > Hi this module is a part of the commercial subscription. > But, looks like the open source version also supports session draining > configuration > I am trying to figure out how to do that any help? > No. It's not supported in the free version. wbr, Valentin V. 
Bartenev From francis at daoine.org Fri May 29 21:44:59 2015 From: francis at daoine.org (Francis Daly) Date: Fri, 29 May 2015 22:44:59 +0100 Subject: Nginx and Websphere In-Reply-To: References: <20150529073313.GU2957@daoine.org> Message-ID: <20150529214459.GV2957@daoine.org> On Fri, May 29, 2015 at 12:51:51PM -0400, sampy wrote: > Francis Daly Wrote: > > > location /WebApp { > > > proxy_pass http://webint; > > > proxy_redirect http://webint:9080/ /; > > > } > But when I change LOCATION from: > location /WebApp { > proxy_pass http://balancer/WebApp; > proxy_redirect http://balancer:9080/ /; > } > to: > location / { > proxy_pass http://balancer/WebApp; > proxy_redirect http://balancer:9080/ /; > } > stop working. "curl" sees the 302 redirection in both cases, but the web is > not showed in the second case. I think I forget something or I have some > mistake in syntax. Look at the proxy_pass in the very first one, the one that worked. It is just http://webint -- no / or anything after it. Whether you use "location /" or "location /WebApp", just use "proxy_pass http://balancer;" http://nginx.org/r/proxy_pass for the details. f -- Francis Daly francis at daoine.org From ravikiran at gocoop.com Sat May 30 02:42:10 2015 From: ravikiran at gocoop.com (Ravikiran Siddhanti) Date: Sat, 30 May 2015 08:12:10 +0530 Subject: Nginx and Websphere In-Reply-To: <20150529214459.GV2957@daoine.org> References: <20150529073313.GU2957@daoine.org> <20150529214459.GV2957@daoine.org> Message-ID: Hi Francis, We have been running nginx with PHP-FPM for the last year. In the last 2 months we have been seeing PHP-FPM getting overloaded sometimes, eventually leading to 502/500/504 errors returned by nginx. Could you please help us out? Let me know what details you need and I will send them in the next mail. Thank you.
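For what it's worth — a sketch under assumptions, not something established in this thread — the pool settings below are the usual first suspects when PHP-FPM saturates and nginx starts returning 502/504 (the directive names are standard php-fpm pool directives; the values are placeholders to tune, not recommendations):

```ini
; Hypothetical pool excerpt (e.g. /etc/php-fpm.d/www.conf).
; Size pm.max_children to available RAM rather than copying numbers.
pm = dynamic
pm.max_children = 50             ; hard cap on workers; too low => requests queue
pm.max_requests = 500            ; recycle workers to contain memory leaks
pm.status_path = /fpm-status     ; exposes listen queue and active workers
request_terminate_timeout = 30s  ; kill stuck scripts instead of hanging nginx
```

Watching the listen queue on the status page during an incident shows whether the pool is actually the bottleneck, which matches Maxim's earlier diagnosis of a listening socket queue overflowing.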
On 30 May 2015 03:15, "Francis Daly" wrote: > On Fri, May 29, 2015 at 12:51:51PM -0400, sampy wrote: > > Francis Daly Wrote: > > > > > location /WebApp { > > > > proxy_pass http://webint; > > > > proxy_redirect http://webint:9080/ /; > > > > } > > > But when I change LOCATION from: > > > location /WebApp { > > proxy_pass http://balancer/WebApp; > > proxy_redirect http://balancer:9080/ /; > > } > > > to: > > > location / { > > proxy_pass http://balancer/WebApp; > > proxy_redirect http://balancer:9080/ /; > > } > > > stop working. "curl" sees the 302 redirection in both cases, but the web > is > > not showed in the second case. I think I forget something or I have some > > mistake in syntax. > > Look at the proxy_pass in the very first one, the one that worked. > > It is just http://webint -- no / or anything after it. > > Whether you use "location /" or "location /WebApp", just use > "proxy_pass http://balancer;" > > http://nginx.org/r/proxy_pass for the details. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at nginx.us Sat May 30 11:43:04 2015 From: nginx-forum at nginx.us (George) Date: Sat, 30 May 2015 07:43:04 -0400 Subject: Nginx virtual host traffic status module In-Reply-To: <312d0dc98f43345c2a571ed7218d5d2@cvweb04.wmail.nhnsystem.com> References: <312d0dc98f43345c2a571ed7218d5d2@cvweb04.wmail.nhnsystem.com> Message-ID: Thanks for the excellent module, I have it working for my builds https://community.centminmod.com/threads/centmin-mod-nginx-live-vhost-traffic-statistics-preview-discussion.3022/ :D Would be great if there's some documentation on how to customise the CSS in html mode https://github.com/vozlt/nginx-module-vts/issues/13 cheers George Posted at Nginx Forum: http://forum.nginx.org/read.php?2,256699,259274#msg-259274 From nginx-forum at nginx.us Sat May 30 20:00:22 2015 From: nginx-forum at nginx.us (z_kamikimo) Date: Sat, 30 May 2015 16:00:22 -0400 Subject: Compiling Nginx on Windows 7 In-Reply-To: References: Message-ID: <598d6667bf6af291d634d966adb5731f.NginxMailingListEnglish@forum.nginx.org> I'm experiencing issues with compiling Nginx on Windows 7; everything goes well until nmake -f objs/Makefile. I get the following error: Assembling: tmp32\sha1-586.asm tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 10.0\VC\BI N\ml.EXE"' : return code '0x1' Stop. NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 10.0\VC\BI N\nmake.exe"' : return code '0x2' Stop. NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 10.0\VC\BI N\nmake.exe"' : return code '0x2' Stop. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237923,259275#msg-259275 From nginx-forum at nginx.us Sat May 30 20:07:20 2015 From: nginx-forum at nginx.us (z_kamikimo) Date: Sat, 30 May 2015 16:07:20 -0400 Subject: nginx-rtmp-compile-for-windows error???
help Message-ID: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> I'm experiencing issues with compiling Nginx on Windows 7; everything goes well until nmake -f objs/Makefile. I get the following error: Assembling: tmp32\sha1-586.asm tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 10.0\VC\BI N\ml.EXE"' : return code '0x1' Stop. NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 10.0\VC\BI N\nmake.exe"' : return code '0x2' Stop. NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio 10.0\VC\BI N\nmake.exe"' : return code '0x2' Stop. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259276,259276#msg-259276 From emailbuilder88 at yahoo.com Sat May 30 21:20:05 2015 From: emailbuilder88 at yahoo.com (E.B.) Date: Sat, 30 May 2015 14:20:05 -0700 Subject: [OT] Cant write across filesystem mounts? Message-ID: <1433020805.84539.YahooMailBasic@web142402.mail.bf1.yahoo.com> Hi, I don't think this is specific to nginx, but I hope it's a good place to ask! When running a PHP script through Nginx, it writes OK to files on the same disk mount where the PHP file is located, but not to the parts of the system that are on another mount. (Well, I don't know if it's a matter of "same mount" or not, but that is how it is behaving.) Example: /tmp is on another mount than the web root. hello world I run this script from the CLI (sudo as ANY user, including the php user) and it always works fine (writes files in both places). If I access it from a browser, the write/touch commands to /tmp fail silently. No AVC from SELinux, no PHP or Nginx errors or warnings. /tmp permissions are the usual 777. Can someone help me in the right direction?
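An aside on the [OT] question above: the symptom (CLI writes to /tmp succeed, web-triggered writes vanish with no error) is not answered in this chunk of the thread, but one common cause on systemd-based distributions is systemd's PrivateTmp setting on the php-fpm unit — the writes actually succeed, but land in a per-service directory such as /tmp/systemd-private-*/tmp that the CLI never sees. This is a guess, not a confirmed diagnosis; a hypothetical drop-in override to disable it could look like this (the unit name php-fpm.service is an assumption and varies by distro):

```ini
# /etc/systemd/system/php-fpm.service.d/no-private-tmp.conf
# Hypothetical drop-in: with PrivateTmp=true (often the distro default),
# the service gets an isolated /tmp, invisible outside the unit.
[Service]
PrivateTmp=false
```

After adding the drop-in, `systemctl daemon-reload` plus a service restart would be needed. Keeping PrivateTmp enabled and writing to a directory other than /tmp is the safer alternative.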
From kurt at x64architecture.com Sat May 30 22:40:39 2015 From: kurt at x64architecture.com (Kurt Cancemi) Date: Sat, 30 May 2015 18:40:39 -0400 Subject: Compiling Nginx on Windows 7 In-Reply-To: <598d6667bf6af291d634d966adb5731f.NginxMailingListEnglish@forum.nginx.org> References: <598d6667bf6af291d634d966adb5731f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <935C9C6F-9264-47FC-BED7-15A5FA4AB875@x64architecture.com> That's an OpenSSL asm compile error: you need nasm installed and in your PATH; masm, which is included in Visual Studio, is not supported by OpenSSL. Kurt Cancemi https://www.x64architecture.com > On May 30, 2015, at 4:00 PM, z_kamikimo wrote: > > Im experiencing issues with compiling Nginx on Windows 7, every thing goes > good until nmake -f objs/Makefile. > I get the following error > > Assembling: tmp32\sha1-586.asm > tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands > tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\ml.EXE"' : return code '0x1' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237923,259275#msg-259275 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Sat May 30 23:58:44 2015 From: nginx-forum at nginx.us (dethegeek) Date: Sat, 30 May 2015 19:58:44 -0400 Subject: mail proxying Message-ID: Hi, I'm setting up nginx as a reverse proxy for a postfix / dovecot setup. My IMAP server requires STARTTLS. Nginx does not seem to issue the STARTTLS command before forwarding the user's credentials.
Here is the error I found in /var/log/nginx/error.log: [error] 928#0: *20 upstream sent invalid response: "* BAD [ALERT] Plaintext authentication not allowed without SSL/TLS, but your client did it anyway. If anyone was listening, the password was exposed. I did not find anything in the documentation about making nginx issue the STARTTLS command to the upstream server. Is there a way to achieve this? I have not tried POP3 yet, but I expect the same annoyance, and the same answer; let me know if I'm wrong. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259279#msg-259279 From arut at nginx.com Sun May 31 05:12:23 2015 From: arut at nginx.com (Roman Arutyunyan) Date: Sun, 31 May 2015 08:12:23 +0300 Subject: nginx-rtmp-compile-for-windows error??? help In-Reply-To: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> References: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> Message-ID: Try to compile nginx without the rtmp module first. On 30 May 2015, at 23:07, z_kamikimo wrote: > Im experiencing issues with compiling Nginx on Windows 7, every thing goes > good until nmake -f objs/Makefile. > I get the following error > > Assembling: tmp32\sha1-586.asm > tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands > tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\ml.EXE"' : return code '0x1' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop.
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259276,259276#msg-259276 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Roman Arutyunyan From kurt at x64architecture.com Sun May 31 05:37:13 2015 From: kurt at x64architecture.com (Kurt Cancemi) Date: Sun, 31 May 2015 01:37:13 -0400 Subject: nginx-rtmp-compile-for-windows error??? help In-Reply-To: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> References: <41b10799bbc647e809000a296eaad71f.NginxMailingListEnglish@forum.nginx.org> Message-ID: ---------- Forwarded message ---------- From: Kurt Cancemi Date: Sat, May 30, 2015 at 6:40 PM Subject: Re: Compiling Nginx on Windows 7 To: "nginx at nginx.org" That's an OpenSSL asm compile error, you need nasm installed and in your PATH, masm which is included in visual studio is not supported by OpenSSL. Kurt Cancemi https://www.x64architecture.com > On May 30, 2015, at 4:00 PM, z_kamikimo wrote: > > Im experiencing issues with compiling Nginx on Windows 7, every thing goes > good until nmake -f objs/Makefile. > I get the following error > > Assembling: tmp32\sha1-586.asm > tmp32\sha1-586.asm(1432) : error A2070:invalid instruction operands > tmp32\sha1-586.asm(1576) : error A2070:invalid instruction operands > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\ml.EXE"' : return code '0x1' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. > NMAKE : fatal error U1077: '"C:\Program Files\Microsoft Visual Studio > 10.0\VC\BI > N\nmake.exe"' : return code '0x2' > Stop. 
> > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,237923,259275#msg-259275 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From ahutchings at nginx.com Sun May 31 14:40:30 2015 From: ahutchings at nginx.com (Andrew Hutchings) Date: Sun, 31 May 2015 15:40:30 +0100 Subject: mail proxying In-Reply-To: References: Message-ID: <4BDE5196-31CC-4C6B-BBAB-DE54AB2DB89A@nginx.com> Hi, Unfortunately not with Nginx. You could, however, use stunnel on the backends to do this. Kind Regards Andrew > On 31 May 2015, at 00:58, dethegeek wrote: > > Hi > > I'm setting up nginx as a reverse proxy for a postfix / dovecot setup. > > My imap server requires STARTTLS usage. Nginx seems to not issue STARTTLS > command before forwarding users credentials. > > Here is the error I found in /var/log/nginx/error.log > > [error] 928#0: *20 upstream sent invalid response: "* BAD [ALERT] Plaintext > authentication not allowed without SSL/TLS, but your client did it anyway. > If anyone was listening, the password was exposed. > > I did not found anything in the documentation to ask nginx to issue STARTTLS > command to the upstream server. Is there a way to achieve this ? > > I did not tried pop3 yet, but I'm expecting the same annoyance. and the same > answer; let me know if I'm wrong. > > Posted at Nginx Forum: http://forum.nginx.org/read.php?2,259279,259279#msg-259279 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Senior Developer Advocate Nginx Inc. From viktor at szepe.net Sun May 31 20:43:43 2015 From: viktor at szepe.net (=?utf-8?b?U3rDqXBl?= Viktor) Date: Sun, 31 May 2015 22:43:43 +0200 Subject: Add header based on fastcgi response Message-ID: <20150531224343.Horde.nHRdEzxu77MPQsVAAny61g4@szepe.net> Good morning! 
I'd like to add an X-Fastcgi-Cache header when there is a fastcgi cache hit, i.e. when the response is stored to or retrieved from the cache. add_header X-Fastcgi-Cache 600; Could you help me? Thank you. Szépe Viktor -- +36-20-4242498 sms at szepe.net skype: szepe.viktor Budapest, XX. kerület
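nginx records the cache outcome of each proxied/fastcgi request in the $upstream_cache_status variable, so the requested header can carry that value directly instead of a fixed one. A minimal sketch, assuming a fastcgi cache zone is already defined elsewhere (the zone name my_zone and the backend socket path are placeholders):

```nginx
location ~ \.php$ {
    fastcgi_pass  unix:/run/php-fpm.sock;   # placeholder backend socket
    fastcgi_cache my_zone;                  # placeholder cache zone name
    fastcgi_cache_valid 200 10m;

    # Emits HIT, MISS, EXPIRED, BYPASS, STALE, ... per request
    add_header X-Fastcgi-Cache $upstream_cache_status;
}
```

With this in place, curl -I against a cached URL shows X-Fastcgi-Cache: MISS on the first request and HIT on subsequent ones until the entry expires.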