From nginx-forum at forum.nginx.org Sun Jan 1 08:45:02 2017
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Sun, 01 Jan 2017 03:45:02 -0500
Subject: Naxsi Nginx High performance WAF
In-Reply-To: <7e41bde8363de4e9f8b52e2c7c1916c6.NginxMailingListEnglish@forum.nginx.org>
References: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> <7e41bde8363de4e9f8b52e2c7c1916c6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8d1b6cbf1025f44d84182cd3fec63e99.NginxMailingListEnglish@forum.nginx.org>

mex Wrote:
-------------------------------------------------------
> Hi c0nw0nk,
>
> mex here, initial creator of http://spike.nginx-goodies.com/rules/
> and maintainer of Doxi-Rules,
> https://bitbucket.org/lazy_dogtown/doxi-rules/overview
> (this is where the rules we create with spike live :)
>
> the doxi-rules in their current state are inspired by the emerging
> threats rules, and not by the CRS system, because:
>
> - mod_security can hook into any phase of a request, while naxsi only
>   works in the access phase
> - naxsi has a very slim but still powerful core ruleset
> - naxsi doesn't hold state about an actor
>
> thus, it would not be possible to re-create the CRS on naxsi;
> instead, we have a very slim but very fast core ruleset that does not
> change very often, and on top of this, if wanted, a wider ruleset
> that protects against common classes of attacks like XXE or general
> object injections:
> http://spike.nginx-goodies.com/rules/view/42000341
> http://spike.nginx-goodies.com/rules/view/42000343
>
> i learned from my gurus @emerging threats to write signatures
> against vulnerabilities, not exploits
>
> before naxsi i used mod_security with the CRS as well, and it was
> more than just a PITA because of false positives and performance
> issues. with naxsi, learning mode and whitelist creation, using a
> WAF is fun again.
>
> If you have detailed questions about naxsi, there is a
> naxsi-discuss mailing list as well.
>
> cheers,
>
> mex
>
> c0nw0nk Wrote:
> -------------------------------------------------------
> > So I recently got hooked on Naxsi and I am loving it to bits <3
> > thanks to itpp2012 :)
> >
> > https://github.com/nbs-system/naxsi
> >
> > I found the following rule sets here:
> >
> > http://spike.nginx-goodies.com/rules/
> >
> > But I am curious: does anyone have Naxsi-written rules that would
> > be the same as the ones in Cloudflare's WAF?
> >
> > These, to be exact:
> > Package:
> > OWASP ModSecurity Core Rule Set : Covers OWASP Top 10
> > vulnerabilities, and more.
> > Package:
> > Cloudflare Rule Set : Contains rules to stop attacks commonly seen
> > on Cloudflare's network and attacks against popular applications.
> >
> > I would love to have a Naxsi version of their WAF rules to add to
> > the naxsi_core.rules file.

Hey mex, that's awesome :) I love your work with spike too.

I have a question about this rule:
http://spike.nginx-goodies.com/rules/view/42000039

In the site list here, http://spike.nginx-goodies.com/rules/, why is that rule ID number completely "greyed" out? What does that mean?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271695,271790#msg-271790

From nginx-forum at forum.nginx.org Sun Jan 1 16:36:34 2017
From: nginx-forum at forum.nginx.org (scoobybri)
Date: Sun, 01 Jan 2017 11:36:34 -0500
Subject: Requests Using Internet URL fail on LAN Access
Message-ID:

Greetings everyone! Happy New Year. I am a new Nginx user with a curious problem that I cannot seem to fix. Here is my environment.
(URLs and IPs have been changed to protect the innocent) ;-)

I have a Debian/Nginx/MariaDB/PHP7/Nextcloud server running on my LAN at IP 192.168.1.20. I'm running https and the server is up and running properly. I can connect to my secure Nextcloud website without a problem from my LAN.

From the Internet, I have a registered URL that points at the WAN port of my router: server.blah.com. On the router, I have forwarded port 6767 to port 443 on my server. I can connect to the Nextcloud website on the server from both the LAN and the Internet using the URL https://server.blah.com:6767. So far so good, right?

Here is the problem. I use the caldav features of the Nextcloud server to sync calendar and contact data to my phone. I use the Davdroid client to connect to the caldav features of the server. The URL that is used to connect to the server for caldav discovery is: https://server.blah.com:6767/remote.php/dav/. Following the instructions from Nextcloud, I have two entries in my Nginx config file addressing caldav:

location = /.well-known/carddav {
  return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
  return 301 $scheme://$host/remote.php/dav;
}

When I try to connect Davdroid to the Nextcloud website from my LAN using the URL (https://server.blah.com:6767/remote.php/dav/), it fails. Here are the access log entries from when I try to connect (there are no entries in the error log):

192.168.1.1 - bongo [01/Jan/2017:09:55:27 -0500] "PROPFIND /remote.php/dav/ HTTP/1.1" 207 854 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
192.168.1.1 - bongo [01/Jan/2017:09:55:58 -0500] "OPTIONS /remote.php/dav/principals/users/bongo/ HTTP/1.1" 200 0 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"

BUT when I connect Davdroid to the Nextcloud website from the Internet, it works properly.
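(A side note that may or may not matter: as far as I know, the $host variable never includes a port, so if a client ever actually follows those .well-known redirects it gets sent to the default HTTPS port rather than 6767. If that turns out to be part of the problem, a variant like the following is what I would try; untested on my side, with my external port hard-coded:

location = /.well-known/carddav {
  return 301 $scheme://$host:6767/remote.php/dav;
}
location = /.well-known/caldav {
  return 301 $scheme://$host:6767/remote.php/dav;
}
)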
Here are the access logs from when it works properly:

172.58.84.223 - bongo [01/Jan/2017:10:13:18 -0500] "PROPFIND /remote.php/dav/ HTTP/1.1" 207 854 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:18 -0500] "OPTIONS /remote.php/dav/principals/users/bongo/ HTTP/1.1" 200 0 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:19 -0500] "PROPFIND /remote.php/dav/ HTTP/1.1" 207 1658 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:20 -0500] "OPTIONS /remote.php/dav/principals/users/bongo/ HTTP/1.1" 200 0 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:20 -0500] "PROPFIND /remote.php/dav/principals/users/bongo/ HTTP/1.1" 207 630 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:27 -0500] "PROPFIND /remote.php/dav/principals/users/bongo/ HTTP/1.1" 207 738 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:27 -0500] "PROPFIND /remote.php/dav/principals/users/bongo/ HTTP/1.1" 207 790 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:27 -0500] "PROPFIND /remote.php/dav/principals/groups/admin/ HTTP/1.1" 207 652 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:27 -0500] "PROPFIND /remote.php/dav/principals/groups/admin/ HTTP/1.1" 207 723 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:28 -0500] "PROPFIND /remote.php/dav/addressbooks/users/bongo/ HTTP/1.1" 207 3220 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:28 -0500] "PROPFIND /remote.php/dav/calendars/bongo/ HTTP/1.1" 207 14245 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"
172.58.84.223 - bongo [01/Jan/2017:10:13:28 -0500] "PROPFIND /remote.php/dav/addressbooks/groups/admin/ HTTP/1.1" 404 11837 "-" "DAVdroid/1.3.5-gplay (2016/12/23; dav4android; okhttp3) Android/7.0"

So I'm sure you are asking, "Are caldav services working at all?" I can answer with a resounding "Yes!" How do I know? Well, I use the Thunderbird email client on my LAN to connect to the caldav services for calendar sync, and caldav works fine. But here is another clue that might help. While Davdroid uses this URL to connect, https://server.blah.com:6767/remote.php/dav/, and then uses PROPFIND to see the available services, Thunderbird uses this URL to go directly to specific calendars: https://server.blah.com:6767/remote.php/dav/calendars/bongo/recurring/ where "recurring" is the calendar name.
Here is an example access log entry where Thunderbird connects successfully to sync via caldav:

192.168.1.1 - - [01/Jan/2017:10:31:30 -0500] "PROPFIND /remote.php/dav/calendars/bongo/main/ HTTP/1.1" 401 567 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1 Lightning/4.7.4"
192.168.1.1 - - [01/Jan/2017:10:31:30 -0500] "PROPFIND /remote.php/dav/calendars/bongo/birthdays/ HTTP/1.1" 401 567 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1 Lightning/4.7.4"
192.168.1.1 - bongo [01/Jan/2017:10:31:59 -0500] "PROPFIND /remote.php/dav/calendars/bongo/main/ HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1 Lightning/4.7.4"
192.168.1.1 - bongo [01/Jan/2017:10:31:59 -0500] "PROPFIND /remote.php/dav/calendars/bongo/birthdays/ HTTP/1.1" 499 0 "-" "Mozilla/5.0 (X11; Linux x86_64; rv:45.0) Gecko/20100101 Thunderbird/45.5.1 Lightning/4.7.4"

I really think this has something to do with the caldav directives in the Nginx config file, since using https://server.blah.com:6767/remote.php/dav/calendars/bongo/recurring/ works on my LAN but https://server.blah.com:6767/remote.php/dav/ does not. I tried removing the caldav directives and restarting Nginx, but it did not fix the problem.

I hate to say it, but when I ran the exact same setup with Apache, caldav worked fine regardless of LAN, WAN, or client. I just rebuilt the server this week and decided to use Nginx this time since it is faster. It is indeed faster than Apache, but if I cannot get this problem fixed I will have to go back to Apache. Please help me avoid that! ;-)

Brian

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271791,271791#msg-271791

From nginx-forum at forum.nginx.org Sun Jan 1 16:40:08 2017
From: nginx-forum at forum.nginx.org (scoobybri)
Date: Sun, 01 Jan 2017 11:40:08 -0500
Subject: Requests Using Internet URL fail on LAN Access
In-Reply-To:
References:
Message-ID:

Sorry, but something went wonky when I cut and pasted the caldav directives from the Nginx config file. Here are the correct entries as they appear in my config file:

location = /.well-known/carddav {
  return 301 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
  return 301 $scheme://$host/remote.php/dav;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271791,271792#msg-271792

From emailgrant at gmail.com Mon Jan 2 15:43:38 2017
From: emailgrant at gmail.com (Grant)
Date: Mon, 2 Jan 2017 07:43:38 -0800
Subject: limit_req per subnet?
In-Reply-To: <20161231103651.GQ2958@daoine.org>
References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> <20161229111836.GO2958@daoine.org> <20161231103651.GQ2958@daoine.org>
Message-ID:

>> >> I'm looking for something that can
>> >> be implemented independently of the backend, but that doesn't seem to
>> >> exist in nginx.
>> >
>> > http://nginx.org/r/limit_req_zone
>> >
>> > You can define the "key" any way that you want.
>> >
>> > Perhaps you can create something using "geo". Perhaps you want "the first
>> > three bytes of $binary_remote_addr". Perhaps you want "the remote ipv4
>> > address, rounded down to a multiple of 8". Perhaps you want something
>> > else.
>>
>> So I'm sure I understand: none of the functionality described above
>> exists currently?
>
> A variable with exactly the value that you want it to have, probably
> does not exist currently in the stock nginx code.
>
> The code that allows you to create a variable with exactly the value
> that you want it to have, probably does exist in the stock nginx code.
>
> You can use "geo", "map", "set", or (probably) any of the extension
> languages to give the variable the value that you want it to have.
>
> For example:
>
> map $binary_remote_addr $bin_slash16 {
>     "~^(?P<a>..)..$" "$a";
> }
>
> will probably come close to making $bin_slash16 hold a binary
> representation of the first two octets of the connecting ip address.
>
> (You'll want to confirm whether "dot" matches "any byte" in your regex
> engine, or whether you can make it match "any byte" (specifically
> including the byte that normally represents newline), before you trust
> that fully, of course.)

That sounds like a good solution. Will using map along with a regex slow the server down much?

- Grant

From shahzaib.cb at gmail.com Tue Jan 3 13:10:45 2017
From: shahzaib.cb at gmail.com (shahzaib mushtaq)
Date: Tue, 3 Jan 2017 18:10:45 +0500
Subject: Kqueue filled !!
Message-ID:

Hi,

We're seeing the following warnings in the Nginx error logs:

=============================================
kqueue change list is filled up while SSL handshaking
=============================================

Should we be worried about these warnings?

Shahzaib

From sca at andreasschulze.de Tue Jan 3 13:20:38 2017
From: sca at andreasschulze.de (A. Schulze)
Date: Tue, 03 Jan 2017 14:20:38 +0100
Subject: stream module on 100% cpu load
Message-ID: <20170103142038.Horde.rAxhfHMCPMZ3D73rldqchQI@andreasschulze.de>

Hello,

in the last days I set up a server to encapsulate DNS over TLS:

- DNS server @localhost, port 53 TCP
- nginx stream module on a public IP, port 853 TCP, SSL enabled

That works so far. Then I thought to scan this setup using ssllabs.com. I shut down my HTTPS webserver and let the nginx stream module listen on port 443. To make it easier I also switched the stream proxy target to ::1, port 80. Now I could again access my website, not via nginx ssl but via the nginx stream module. That also worked so far...

Now I pointed SSLLabs at the server and ... surprise! The scan terminated with "Assessment failed: Unexpected failure". The last log lines nginx wrote were:

2017/01/03 13:26:49 [info] 19253#0: *25 client [2600:c02:1020:4202::ac10:8267]:50918 connected to [2001:db8::53]:443
2017/01/03 13:26:49 [info] 19253#0: *25 proxy [2001:db8::53]:42534 connected to [::1]:80
2017/01/03 13:26:50 [notice] 19253#0: *25 SSL renegotiation disabled while proxying connection, client: 2600:c02:1020:4202::ac10:8267, server: [2001:db8::53]:443, upstream: "[::1]:80", bytes from/to client:138/0, bytes from/to upstream:0/138

The nginx process stopped responding and ate up 100% CPU time. After reading http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html again, I added "worker_processes auto;" to nginx.conf. That changed the picture a little bit: the ssllabs scan no longer aborts but finishes with a usual result. Still, one nginx process consumes 100% CPU time.

I guess there is something broken with my setup or with nginx. What further information is needed to nail down the problem?
Andreas

nginx-1.11.8 with this (simplified) /etc/nginx/nginx.conf:

error_log /path/to/nginx-error.log info;
daemon off;
events {
  worker_connections 1024;
}
http {
  server {
    listen [::1]:80;
    location / {
      root /path/to/htdocs/;
    }
  }
}
worker_processes auto;
stream {
  upstream dns {
    server [::1]:80;
  }
  server {
    listen [2001:db8::53]:443 ssl;
    proxy_pass dns;
    ssl_certificate /path/to/cert+intermediate.pem;
    ssl_certificate_key /path/to/key.pem;
  }
}

From francis at daoine.org Wed Jan 4 18:32:41 2017
From: francis at daoine.org (Francis Daly)
Date: Wed, 4 Jan 2017 18:32:41 +0000
Subject: limit_req per subnet?
In-Reply-To: References: <9138eccb6a69bd21f80efded9d7640ae.NginxMailingListEnglish@forum.nginx.org> <20161229111836.GO2958@daoine.org> <20161231103651.GQ2958@daoine.org>
Message-ID: <20170104183241.GR2958@daoine.org>

On Mon, Jan 02, 2017 at 07:43:38AM -0800, Grant wrote:

Hi there,

> > For example:
> >
> > map $binary_remote_addr $bin_slash16 {
> >     "~^(?P<a>..)..$" "$a";
> > }
> >
> > will probably come close to making $bin_slash16 hold a binary
> > representation of the first two octets of the connecting ip address.

> That sounds like a good solution. Will using map along with a regex
> slow the server down much?

The usual rule is that if you do not measure the slow-down on your test system, then there is not a significant slow-down for your use cases.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Thu Jan 5 09:52:24 2017
From: nginx-forum at forum.nginx.org (archon810)
Date: Thu, 05 Jan 2017 04:52:24 -0500
Subject: Enable streaming proxy responses and nginx proxy caching at the same time?
Message-ID: <5326b3bda61bcb732c778855f801fb87.NginxMailingListEnglish@forum.nginx.org>

Hi,

I'm using nginx for microcaching in front of Apache, which really helps during busy days. But I've also really wanted to have responses from Apache streamed. The only way to do that which I've figured out so far is proxy_buffering off, but that disables any proxy caching.

Is it possible to achieve both? I don't understand why nginx can't cache the response and stream it at the same time.

Thank you.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271834,271834#msg-271834

From nginx-forum at forum.nginx.org Thu Jan 5 10:34:03 2017
From: nginx-forum at forum.nginx.org (archon810)
Date: Thu, 05 Jan 2017 05:34:03 -0500
Subject: Enable streaming proxy responses and nginx proxy caching at the same time?
In-Reply-To: <5326b3bda61bcb732c778855f801fb87.NginxMailingListEnglish@forum.nginx.org>
References: <5326b3bda61bcb732c778855f801fb87.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <40706c7ee4238bfbde28c2ce11958763.NginxMailingListEnglish@forum.nginx.org>

I was able to work around this for my purposes by leaving proxy_buffering enabled and turning it off only for some locations, but it feels like a dirty hack.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271834,271835#msg-271835

From nginx-forum at forum.nginx.org Fri Jan 6 05:40:55 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Fri, 06 Jan 2017 00:40:55 -0500
Subject: nginx cache mounted on tmpfs getting filled
Message-ID: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org>

Hi,

I am using nginx as a webserver, version nginx/1.10.2.
For faster access we have mounted the nginx cache of different applications on RAM (tmpfs). But even after giving the size a generous buffer, the cache still gets filled completely now and then. Below are a few details for your reference: the maximum size given in the nginx conf file is 500G, while when mounting we have given 600G of space, i.e. a 100G buffer. But still it is getting filled 100%.

fstab entries:

tmpfs /cache/123 tmpfs defaults,size=600G 0 0
tmpfs /cache/456 tmpfs defaults,size=60G 0 0
tmpfs /cache/789 tmpfs defaults,size=110G 0 0

cache getting filled, df output:

tmpfs tmpfs  60G  17G  44G  28% /cache/456
tmpfs tmpfs 110G 323M 110G   1% /cache/789
tmpfs tmpfs 600G 600G    0 100% /cache/123

nginx conf details:

proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g inactive=3d;

server {
  listen 80;
  server_name dvr.catchup.com;
  location ~ .*.m3u8 {
    access_log /var/log/nginx/access_123.log access;
    proxy_cache off;
    root /xyz/123;
    if (!-e $request_filename) {
      # origin url will be used if content is not available on DS
      proxy_pass http://10.10.10.1X;
    }
  }
  location / {
    access_log /var/log/nginx/access_123.log access;
    proxy_cache_valid 3d;
    proxy_cache a123;
    root /xyz/123;
    if (!-e $request_filename) {
      # origin url will be used if content is not available on server
      proxy_pass http://10.10.10.1X;
    }
    proxy_cache_key $proxy_host$uri;
  }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271842#msg-271842

From nginx-forum at forum.nginx.org Fri Jan 6 05:41:47 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Fri, 06 Jan 2017 00:41:47 -0500
Subject: nginx cache mounted on tmpfs getting filled 100%
In-Reply-To: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org>
References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8bd2e65163a0213e4c2e7ab9e010034a.NginxMailingListEnglish@forum.nginx.org>

omkar_jadhav_20 Wrote:
-------------------------------------------------------
> [...]
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271843#msg-271843

From nginx-forum at forum.nginx.org Fri Jan 6 09:11:02 2017
From: nginx-forum at forum.nginx.org (mex)
Date: Fri, 06 Jan 2017 04:11:02 -0500
Subject: Naxsi Nginx High performance WAF
In-Reply-To: <8d1b6cbf1025f44d84182cd3fec63e99.NginxMailingListEnglish@forum.nginx.org>
References: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> <7e41bde8363de4e9f8b52e2c7c1916c6.NginxMailingListEnglish@forum.nginx.org> <8d1b6cbf1025f44d84182cd3fec63e99.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <38dffaf885b1e005f59685ddcf6b153e.NginxMailingListEnglish@forum.nginx.org>

Grey rules mean they are deactivated.

I'm going to write a blog post on how we use spike + doxi-rules in our setup, but it will take some time.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271695,271844#msg-271844

From nginx-forum at forum.nginx.org Fri Jan 6 09:29:04 2017
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Fri, 06 Jan 2017 04:29:04 -0500
Subject: Naxsi Nginx High performance WAF
In-Reply-To: <38dffaf885b1e005f59685ddcf6b153e.NginxMailingListEnglish@forum.nginx.org>
References: <23c7cbddee8dfe104cf02dd737b866ac.NginxMailingListEnglish@forum.nginx.org> <7e41bde8363de4e9f8b52e2c7c1916c6.NginxMailingListEnglish@forum.nginx.org> <8d1b6cbf1025f44d84182cd3fec63e99.NginxMailingListEnglish@forum.nginx.org> <38dffaf885b1e005f59685ddcf6b153e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <907240011ba624460ce356d2e24cde6f.NginxMailingListEnglish@forum.nginx.org>

mex Wrote:
-------------------------------------------------------
> Grey rules mean they are deactivated.
>
> I'm going to write a blog post on how we use spike + doxi-rules in
> our setup, but it will take some time.

That's cool, I look forward to it. Also, I think the rules on spike need updating from the bitbucket page: the rules are the same, but a lot of them on bitbucket have been changed to be case-insensitive matches, as you can see here:

https://bitbucket.org/lazy_dogtown/doxi-rules/commits/e00016cc8bf7bb93c44afaf78fdd9b279290adcb#Lscanner.rulesT18

All lower case.
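For example, a rule in that style would look roughly like this (illustrative only: the id and the $UWA score name are placeholders based on my reading of the doxi rules, and the pattern is written in lower case because, as far as I understand it, naxsi lowercases the data it inspects before matching):

MainRule "str:dirbuster" "msg:DirBuster scanner user-agent" "mz:$HEADERS_VAR:User-Agent" "s:$UWA:8" id:42009999;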
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271695,271845#msg-271845

From nginx-forum at forum.nginx.org Fri Jan 6 15:24:06 2017
From: nginx-forum at forum.nginx.org (MrFastDie)
Date: Fri, 06 Jan 2017 10:24:06 -0500
Subject: Performance test caps at 600 Mbit/s
Message-ID:

Hello,

over the last few days I played a little with the NGINX settings and the TCP stack to test for the best performance. I used a direct connection between my testing machine and my server using a cat5e cable.

My nginx.conf can be found at pastebin: http://pastebin.com/rRAEwvNc
My sysctl.conf also provides a lot of changes: http://pastebin.com/KmPjEnHN
My /etc/security/limits.conf provides soft nofile at 1048576 and hard nofile at the same value.

I'm not using any kind of firewall, all logs are disabled, neither the CPU nor the RAM is at its limit, and the I/O load seems fine to me too, but the throughput caps at around 600 Mbit/s. This test was made with 2000 concurrent connections and a 1MB file.

The server contains an Intel(R) Pentium(R) Dual CPU E2220 @ 2.40GHz and has 3.5G of RAM.
My local computer contains an Intel(R) Core(TM) i7-4710HQ CPU @ 2.50GHz and has 16G of RAM.

The limits and TCP changes were also made on my local computer.

Is there someone who can help me to deal with this problem?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271846,271846#msg-271846

From lenaigst at maelenn.org Fri Jan 6 15:28:59 2017
From: lenaigst at maelenn.org (Thierry)
Date: Fri, 6 Jan 2017 17:28:59 +0200
Subject: upstream prematurely closed connection
Message-ID: <1986304318.20170106172859@maelenn.org>

Dear all,

The error message:

2017/01/05 18:28:49 [error] 32633#32633: *3 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: server_hostname.domain.ltd, request: "POST /SOGo/connect HTTP/2.0", upstream: "https://ip_server:port_number/SOGo/connect", host: "server_hostname.domain.ltd:port_number", referrer: "https://server_hostname.domain.ltd:port_number/SOGo/"
2017/01/05 18:28:49 [error] 32633#32633: *3 upstream prematurely closed connection while reading response header from upstream, client: xxx.xxx.xxx.xxx, server: server_hostname.domain.ltd, request: "GET /SOGo.woa/WebServerResources/busy.gif HTTP/2.0", upstream: "https://ip_server:port_number/SOGo.woa/WebServerResources/busy.gif", host: "server_hostname.domain.ltd:port_number", referrer: "https://server_hostname.domain.ltd:port_number/SOGo/"

I have a front-end reverse proxy using Nginx for my web server and for the web interface of my email server, which uses SOGo.

From my firewall:
- If I NAT the "SOGo" port to my reverse proxy, I get these errors and a 502 Bad Gateway message.
- If I bypass my reverse proxy and NAT directly to my email server, it works.

On my reverse proxy server I am using Nginx version 1.9.10 (Debian 8).
On my email server I am using Nginx version 1.6.2 (Debian 8).

If you need more information, please ask ... I do not understand this error myself.

PS: this is the second time I am sending this email (?)

--
Regards,
Thierry        e-mail : lenaigst at maelenn.org
PGP Key: 0xB7E3B9CD

From zxcvbn4038 at gmail.com Fri Jan 6 16:25:47 2017
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Fri, 6 Jan 2017 11:25:47 -0500
Subject: Performance test caps at 600 Mbit/s
In-Reply-To: References:
Message-ID:

Which OS? What NIC? You also have to consider the traffic source: is it known to be capable of saturating the NIC on your server?
On Fri, Jan 6, 2017 at 10:24 AM, MrFastDie wrote:
> [...]

From peter_booth at me.com Sat Jan 7 19:26:48 2017
From: peter_booth at me.com (Peter Booth)
Date: Sat, 07 Jan 2017 14:26:48 -0500
Subject: performance test caps at 600Mbit/s [nginx Digest, Vol 87, Issue 7]
In-Reply-To: References:
Message-ID: <7E1B8FBB-2CA3-4EDA-8581-DD30265CF77F@me.com>

You said that your test case peaks at 600 Mbit/sec. Your first step should be to bisect the problem, to see whether you're limited by your hardware+OS or by your test + nginx configuration. The easiest way is to install Solarflare's free network test utility from the support section of their website.

After that, to dig further into web-specific factors, it can be worth installing the (large) TechEmpower web framework test bed. This is a set of five or six micro-benchmarks implemented in over 80 web frameworks. You can compare your results with their published results.

Peter

Sent from my iPhone

> On Jan 7, 2017, at 7:00 AM, nginx-request at nginx.org wrote:
> [...]
From nginx-forum at forum.nginx.org Sun Jan 8 07:37:19 2017
From: nginx-forum at forum.nginx.org (Thierry)
Date: Sun, 08 Jan 2017 02:37:19 -0500
Subject: upstream prematurely closed connection
In-Reply-To: <1986304318.20170106172859@maelenn.org>
References: <1986304318.20170106172859@maelenn.org>
Message-ID:

I have made some modifications: on my email server I have upgraded Nginx from 1.6 to 1.9, and I have added http2. Both Nginx instances (email server and reverse proxy) now have the same version. But I still have the same problem.

Thanks for your support.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271847,271858#msg-271858

From ruz at sports.ru Sun Jan 8 14:04:47 2017
From: ruz at sports.ru (Ruslan Zakirov)
Date: Sun, 8 Jan 2017 17:04:47 +0300
Subject: upstream timeouts I can not explain
Message-ID:

Hello,

nginx 1.10.2 [1] running on FreeBSD 10.3-RELEASE-p7

We have quite a lot of "upstream timed out" errors.
For example:

2017/01/08 16:20:54 [error] 82154#0: *494223064 upstream timed out (60: Operation timed out) while connecting to upstream, client: 192.168.1.43, server: q0.sports.ru, request: "POST /pub?topic=docs HTTP/1.1", upstream: "http://192.168.1.206:4151/pub?topic=docs", host: "q0.sports.ru"

Here is the tcpdump:

16:20:53.218916 IP myself-fe.28957 > q0-2.4151: Flags [S], seq 2479999411, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 787079162 ecr 0], length 0
16:20:53.219721 IP q0-2.4151 > myself-fe.28957: Flags [S.], seq 605231398, ack 2479999412, win 28960, options [mss 1460,sackOK,TS val 352522447 ecr 787079162,nop,wscale 7], length 0
16:20:53.219742 IP myself-fe.28957 > q0-2.4151: Flags [.], ack 1, win 1040, options [nop,nop,TS val 787079163 ecr 352522447], length 0
16:20:54.572712 IP myself-fe.28957 > q0-2.4151: Flags [F.], seq 1, ack 1, win 1040, options [nop,nop,TS val 787080516 ecr 352522447], length 0
16:20:54.573173 IP q0-2.4151 > myself-fe.28957: Flags [F.], seq 1, ack 2, win 227, options [nop,nop,TS val 352522786 ecr 787080516], length 0
16:20:54.573195 IP myself-fe.28957 > q0-2.4151: Flags [.], ack 2, win 1040, options [nop,nop,TS val 787080516 ecr 352522786], length 0

What I don't see here is a packet with the HTTP request. I checked the access log, and only two requests to the .206 upstream finished at "16:20:54". Another request looked like this:

16:20:54.026151 IP myself-fe.29450 > q0-2.4151: Flags [S], seq 347540799, win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 787079969 ecr 0], length 0
16:20:54.026502 IP q0-2.4151 > myself-fe.29450: Flags [S.], seq 2176380567, ack 347540800, win 28960, options [mss 1460,sackOK,TS val 352522649 ecr 787079969,nop,wscale 7], length 0
16:20:54.026526 IP myself-fe.29450 > q0-2.4151: Flags [.], ack 1, win 1040, options [nop,nop,TS val 787079970 ecr 352522649], length 0
16:20:54.026537 IP myself-fe.29450 > q0-2.4151: Flags [P.], seq 1:205, ack 1, win 1040, options [nop,nop,TS val 787079970 ecr 352522649], length 204
16:20:54.026779 IP q0-2.4151 > myself-fe.29450: Flags [.], ack 205, win 235, options [nop,nop,TS val 352522649 ecr 787079970], length 0
16:20:54.027136 IP q0-2.4151 > myself-fe.29450: Flags [P.], seq 1:119, ack 205, win 235, options [nop,nop,TS val 352522649 ecr 787079970], length 118
16:20:54.027145 IP q0-2.4151 > myself-fe.29450: Flags [F.], seq 119, ack 205, win 235, options [nop,nop,TS val 352522649 ecr 787079970], length 0
16:20:54.027156 IP myself-fe.29450 > q0-2.4151: Flags [.], ack 120, win 1038, options [nop,nop,TS val 787079970 ecr 352522649], length 0
16:20:54.027179 IP myself-fe.29450 > q0-2.4151: Flags [F.], seq 205, ack 120, win 1040, options [nop,nop,TS val 787079970 ecr 352522649], length 0
16:20:54.027354 IP q0-2.4151 > myself-fe.29450: Flags [.], ack 206, win 235, options [nop,nop,TS val 352522649 ecr 787079970], length 0

It has a packet with the HTTP request right after the TCP connection is established.

Any idea where I should look next? Is it some known issue?
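If it would help, I could also enable the debug log for just the affected client rather than globally. If I understand the docs correctly, that would look like the snippet below, with the client address taken from the error line above, although it needs nginx built with --with-debug, which our build [1] currently is not:

events {
    debug_connection 192.168.1.43;
}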
[1] nginx -V

nginx version: nginx/1.10.2
built with OpenSSL 1.0.2j 26 Sep 2016
TLS SNI support enabled
configure arguments: --prefix=/usr/local/etc/nginx --with-cc-opt='-I /usr/local/include' --with-ld-opt='-L /usr/local/lib' --conf-path=/usr/local/etc/nginx/nginx.conf --sbin-path=/usr/local/sbin/nginx --pid-path=/var/run/nginx.pid --error-log-path=/var/log/nginx/error.log --user=www --group=www --with-file-aio --http-client-body-temp-path=/var/tmp/nginx/client_body_temp --http-fastcgi-temp-path=/var/tmp/nginx/fastcgi_temp --http-proxy-temp-path=/var/tmp/nginx/proxy_temp --http-scgi-temp-path=/var/tmp/nginx/scgi_temp --http-uwsgi-temp-path=/var/tmp/nginx/uwsgi_temp --http-log-path=/var/log/nginx/access.log --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_realip_module --with-http_slice_module --with-http_stub_status_module --with-http_sub_module --with-pcre --with-http_v2_module --with-stream --with-stream_ssl_module --with-http_ssl_module

--
Ruslan Zakirov
+7(916) 597-92-69, ruz @

From he.hailong5 at zte.com.cn Mon Jan 9 03:01:40 2017
From: he.hailong5 at zte.com.cn (he.hailong5 at zte.com.cn)
Date: Mon, 9 Jan 2017 11:01:40 +0800 (CST)
Subject: having nginx listen the same port more than once
Message-ID: <201701091101404263841@zte.com.cn>

Hi,

I observe that nginx runs with no error if there are duplicate listen ports configured in the http server block or stream server block.

Is this behavior as expected? And if a request comes in at such a port, which server would serve this request: randomly or round-robin?

Thanks,

Joe

From nginx-forum at forum.nginx.org Mon Jan 9 07:23:08 2017
From: nginx-forum at forum.nginx.org (pavelvasev)
Date: Mon, 09 Jan 2017 02:23:08 -0500
Subject: X-Accel-Redirect in LUA?
In-Reply-To: References:
Message-ID: <4099022a4f34b6113a97ce37b955f2d7.NginxMailingListEnglish@forum.nginx.org>

Have you found a solution for this, Richard?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,228856,271867#msg-271867

From nginx-forum at forum.nginx.org Mon Jan 9 08:31:30 2017
From: nginx-forum at forum.nginx.org (MrFastDie)
Date: Mon, 09 Jan 2017 03:31:30 -0500
Subject: Performance test caps at 600 Mbit/s
In-Reply-To: References:
Message-ID: <6310d40dd6547aa03a25e4b78d63e1ce.NginxMailingListEnglish@forum.nginx.org>

The OS of the server is Debian 8; on my testing machine I'm using Arch Linux. Both NICs support a speed of 1000 Mb/s; the server gets around 600 Mb/s up and 13 Mb/s down.
> > My nginx.conf can be found at pastebin: http://pastebin.com/rRAEwvNc > > My sysctl.conf also provides lot of changes: > http://pastebin.com/KmPjEnHN > > My /etc/security/limits.conf provides soft nofile at 1048576 and > hard > > nofile > > at the same value. > > > > I'm not using any kind of firewall, all logs are disabled, neither > the cpu > > nor the ram is at it's limit, the i/o load seems fine to me too, but > the > > internet speed caps round about 600 Mbit/s. This test was made with > 2000 > > connections per time and a 1MB file. > > > > The server contains an Intel(R) Pentium(R) Dual CPU E2220 @ > 2.40GHz and > > has 3.5G of Ram. > > My local computer contains an Intel(R) Core(TM) i7-4710HQ CPU @ > 2.50GHz and > > has 16G of Ram. > > > > The limits and tcp changes were also made on my local computer. > > > > Is there someone who can help me to deal with this problem? > > > > Posted at Nginx Forum: https://forum.nginx.org/read. > > php?2,271846,271846#msg-271846 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271846,271869#msg-271869 From nginx-forum at forum.nginx.org Mon Jan 9 09:05:45 2017 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Mon, 09 Jan 2017 04:05:45 -0500 Subject: nginx cache mounted on tmpf getting filled 100% In-Reply-To: <8bd2e65163a0213e4c2e7ab9e010034a.NginxMailingListEnglish@forum.nginx.org> References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org> <8bd2e65163a0213e4c2e7ab9e010034a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <276f0f78c4df06af0545ec45eca8db61.NginxMailingListEnglish@forum.nginx.org> can someone please respond. Let me know if any additional information is required. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271870#msg-271870 From M.Slowe at kent.ac.uk Mon Jan 9 09:10:10 2017 From: M.Slowe at kent.ac.uk (Matthew Slowe) Date: Mon, 9 Jan 2017 09:10:10 +0000 Subject: nginx cache mounted on tmpf getting fulled In-Reply-To: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org> References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org> Message-ID: <72d4ae08-54df-606a-c045-ff4b88dad3c6@kent.ac.uk> -----BEGIN PGP SIGNED MESSAGE----- Hash: SHA256 On 06/01/2017 05:40, omkar_jadhav_20 wrote: > Hi, > > I am using nginx as webserver with nginx version: nginx/1.10.2. For > faster access we have mounted cache of nginx of different > application on RAM.But even after giving enough buffer of size , > now and then cache is getting filled , below are few details of > files for your reference : maximum size given in nginx conf file is > 500G , while mouting we have given 600G of space i.e. 100G of > buffer.But still it is getting filled 100%. Do you actually have enough RAM / swap to facilitate these requirements? It looks like you'd need about 800G of RAM/swap space to make this work? 
If you do, then I don't know enough about how nginx works to advise, sorry :-)

--
Matthew Slowe | Server Infrastructure Officer
IT Infrastructure, Information Services, University of Kent
Room S21, Cornwallis South Canterbury, Kent, CT2 7NZ, UK
Tel: +44 (0)1227 824265
www.kent.ac.uk/is | @UnikentUnseenIT | @UKCLibraryIt
PGP: https://keybase.io/fooflington

From richard at kearsley.me Mon Jan 9 09:14:45 2017
From: richard at kearsley.me (Richard Kearsley)
Date: Mon, 9 Jan 2017 09:14:45 +0000
Subject: X-Accel-Redirect in LUA?
In-Reply-To: <4099022a4f34b6113a97ce37b955f2d7.NginxMailingListEnglish@forum.nginx.org>
References: <4099022a4f34b6113a97ce37b955f2d7.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <58735485.9000607@kearsley.me>

Hi,

Yes, use ngx.exec in Lua.

On 09/01/17 07:23, pavelvasev wrote:
> Have you found a solution for this, Richard?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,228856,271867#msg-271867

From nginx-forum at forum.nginx.org Mon Jan 9 09:17:19 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Mon, 09 Jan 2017 04:17:19 -0500
Subject: nginx cache mounted on tmpfs getting filled
In-Reply-To: <72d4ae08-54df-606a-c045-ff4b88dad3c6@kent.ac.uk>
References: <72d4ae08-54df-606a-c045-ff4b88dad3c6@kent.ac.uk>
Message-ID: <3ffc3c3ed52e80136ef96945bcb16f43.NginxMailingListEnglish@forum.nginx.org>

Yes, I do have RAM of size 1.5T and swap space of around 200G. We have observed that swap is not getting used in this case. But it seems that either the OS is not clearing the RAM that fast, or nginx is not able to keep the cache within the limit set in the nginx configuration file. Could you please suggest a possible solution for this issue? Every time, max_size gets exceeded in RAM even though it is set in the nginx config.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271873#msg-271873

From nginx-forum at forum.nginx.org Mon Jan 9 09:56:45 2017
From: nginx-forum at forum.nginx.org (pavelvasev)
Date: Mon, 09 Jan 2017 04:56:45 -0500
Subject: X-Accel-Redirect in LUA?
In-Reply-To: <58735485.9000607@kearsley.me>
References: <58735485.9000607@kearsley.me>
Message-ID: <0e537448cf6f3a5efa94060c632a9770.NginxMailingListEnglish@forum.nginx.org>

Thank you, Richard!
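For the archives, this is roughly the shape I am going to try based on that hint; an untested sketch with made-up location names, assuming the lua-nginx-module (or OpenResty), where the internal location plays the role the X-Accel-Redirect target used to:

location /protected-files/ {
    internal;
    alias /data/files/;
}

location /download {
    content_by_lua_block {
        -- authorization checks would go here; then hand the request
        -- over to the internal location, like X-Accel-Redirect does:
        return ngx.exec("/protected-files/" .. ngx.var.arg_name)
    }
}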
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,228856,271874#msg-271874

From nginx-forum at forum.nginx.org Mon Jan 9 13:04:41 2017
From: nginx-forum at forum.nginx.org (scoobybri)
Date: Mon, 09 Jan 2017 08:04:41 -0500
Subject: Requests Using Internet URL fail on LAN Access
In-Reply-To: References:
Message-ID: <68ffd67d4240a5e3e9bb2722c4c1b32b.NginxMailingListEnglish@forum.nginx.org>

I could never find an answer to this problem, so I just reinstalled Nextcloud using Apache instead of Nginx, and it works flawlessly. I am not sure what caused Nginx to not work properly in this application, but after trying for a week and not getting an answer from the Nginx community, I had to go with what I knew worked... Apache.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271791,271876#msg-271876

From vbart at nginx.com Mon Jan 9 14:44:20 2017
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Mon, 09 Jan 2017 17:44:20 +0300
Subject: Performance test caps at 600 Mbit/s
In-Reply-To: References:
Message-ID: <87294562.QpLn3KW2WQ@vbart-laptop>

On Friday 06 January 2017 10:24:06 MrFastDie wrote:
> Hello,
>
> over the last few days I played a little with the NGINX settings and
> the TCP stack to test for the best performance. I used a direct
> connection between my testing machine and my server using a cat5e
> cable.
> My nginx.conf can be found at pastebin: http://pastebin.com/rRAEwvNc
> My sysctl.conf also provides a lot of changes: http://pastebin.com/KmPjEnHN
> My /etc/security/limits.conf provides soft nofile at 1048576 and hard
> nofile at the same value.
> [..]

You should configure worker_processes to use all CPU cores and disable accept_mutex. You can find more information in this talk: https://www.youtube.com/watch?v=eLW_NSuwYU0

wbr, Valentin V. Bartenev

From mdounin at mdounin.ru Mon Jan 9 15:14:19 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 9 Jan 2017 18:14:19 +0300
Subject: upstream timeouts I can not explain
In-Reply-To: References:
Message-ID: <20170109151419.GA1761@mdounin.ru>

Hello!

On Sun, Jan 08, 2017 at 05:04:47PM +0300, Ruslan Zakirov wrote:

> Hello,
>
> nginx 1.10.2 [1] running on FreeBSD 10.3-RELEASE-p7
>
> We have quite a lot of "upstream timed out" errors. For example:
>
> 2017/01/08 16:20:54 [error] 82154#0: *494223064 upstream timed out (60:
> Operation timed out) while connecting to upstream, client: 192.168.1.43,
> server: q0.sports.ru, request: "POST /pub?topic=docs HTTP/1.1", upstream:
> "http://192.168.1.206:4151/pub?topic=docs", host: "q0.sports.ru"
>
> [...]
>
> What I don't see here is a packet with the HTTP request.
The "upstream timeout ... while connecting to upstream" suggests that nginx wasn't able to see the connect event. [...] > Any idea where I should look next? Some things to consider: - Make sure you are looking at tcpdump on the nginx host, and there are no firewalls on the host to interfere with. - Try collecting nginx debug logs, they should have enough information to see if the event was reported by the kernel or not. > Is it some known issue? No. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jan 9 15:38:59 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Jan 2017 18:38:59 +0300 Subject: having nginx listen the same port more than once In-Reply-To: <201701091101404263841@zte.com.cn> References: <201701091101404263841@zte.com.cn> Message-ID: <20170109153859.GC1761@mdounin.ru> Hello! On Mon, Jan 09, 2017 at 11:01:40AM +0800, he.hailong5 at zte.com.cn wrote: > I observe that the nginx runs with no error if there are > duplicate listen ports configured in the http server block or > stream server block. > > is this behavior as expected? and if a request comes at such a > port, which server would serve this request, by radomly or > round-robin? Duplicate listenining sockets are not allowed in stream{} and will return "duplicate "..." address and port pair in ..." if you'll try to configure duplicate sockets. In http{}, requests are routed based on the server names configured in the servers with the listening socket in question. See here for details: http://nginx.org/en/docs/http/request_processing.html -- Maxim Dounin http://nginx.org/ From lists at lazygranch.com Mon Jan 9 15:40:32 2017 From: lists at lazygranch.com (lists at lazygranch.com) Date: Mon, 09 Jan 2017 07:40:32 -0800 Subject: Performance test caps at 600 Mbit/s In-Reply-To: <87294562.QpLn3KW2WQ@vbart-laptop> References: <87294562.QpLn3KW2WQ@vbart-laptop> Message-ID: <20170109154032.5501011.81355.19526@lazygranch.com> ?FYI, benchmark mentioned in the video. https://github.com/wg/wrk Wouldn't a number of test machine ls on the Internet make more sense than flogging nginx locally on your network? With VPS time being sold by the hour, seems to me you should get one VPS tester running acceptably, then clone a dozen and do your test. With SSD based VPS, you can literally clone one a minute. ? Original Message ? From: Valentin V. Bartenev Sent: Monday, January 9, 2017 6:44 AM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: Performance test caps at 600 Mbit/s On Friday 06 January 2017 10:24:06 MrFastDie wrote: > Hello, > > the last days I played a little with the NGINX settings and the tcp stack to > test the best performance. I used direct connection between my testing > machine and my server using a cat5e cable. > My nginx.conf can be found at pastebin: http://pastebin.com/rRAEwvNc > My sysctl.conf also provides lot of changes: http://pastebin.com/KmPjEnHN > My /etc/security/limits.conf provides soft nofile at 1048576 and hard nofile > at the same value. > [..] You should configure worker_processes to use all cpu cores and disable accept_mutex. You can find more information in this talk: https://www.youtube.com/watch?v=eLW_NSuwYU0 wbr, Valentin V. 
From zxcvbn4038 at gmail.com Mon Jan 9 16:38:45 2017
From: zxcvbn4038 at gmail.com (CJ Ess)
Date: Mon, 9 Jan 2017 11:38:45 -0500
Subject: Performance test caps at 600 Mbit/s
In-Reply-To: <6310d40dd6547aa03a25e4b78d63e1ce.NginxMailingListEnglish@forum.nginx.org>
References: <6310d40dd6547aa03a25e4b78d63e1ce.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Out of the box, Linux isn't optimized for high-speed networking. If you google for linux and 1g or 10g you'll find a ton of pages about configuration changes. I think you'll want to do something like:

ifconfig eth0 txqueuelen 10000
ethtool -G eth0 rx 4096 tx 4096
ip route | while read x; do ip route change $x initcwnd 10 initrwnd 10; done

And these sysctl settings (the second of the two duplicated rmem_default lines was presumably meant to be wmem_default):

net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_fin_timeout = 10
net.ipv4.tcp_keepalive_time = 60
net.ipv4.tcp_keepalive_probes = 14
net.ipv4.tcp_keepalive_intvl = 15
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.tcp_sack = 1
net.ipv4.ip_local_port_range = 1024 65535
net.core.rmem_max = 33554432
net.core.wmem_max = 33554432
net.core.rmem_default = 524288
net.core.wmem_default = 524288
net.ipv4.tcp_rmem = 4096 524288 33554432
net.ipv4.tcp_wmem = 4096 524288 33554432
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
net.ipv4.tcp_max_tw_buckets = 2000000
net.ipv4.tcp_max_syn_backlog = 65536
net.ipv4.tcp_max_orphans = 262144
net.core.netdev_max_backlog = 300000
net.core.somaxconn = 65536
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_slow_start_after_idle = 0

At least these are the settings I've used for benchmarking; they may do less or more on your setup.

The NIC vendor can be really important. Intel NICs are really good and work really well with Linux. Broadcom NICs tend to flake out around line rate and are also notorious for dropping packets in hardware and not telling the kernel, so I try not to use those.

On Mon, Jan 9, 2017 at 3:31 AM, MrFastDie wrote:
> The OS of the server is Debian 8; on my testing machine I'm using
> Arch Linux. Both NICs support a speed of 1000 Mb/s; the server gets
> around 600 Mb/s up and 13 Mb/s down.
>
> CJ Ess Wrote:
> -------------------------------------------------------
> [...]
> > > > > > The server contains an Intel(R) Pentium(R) Dual CPU E2220 @ > > 2.40GHz and > > > has 3.5G of Ram. > > > My local computer contains an Intel(R) Core(TM) i7-4710HQ CPU @ > > 2.50GHz and > > > has 16G of Ram. > > > > > > The limits and tcp changes were also made on my local computer. > > > > > > Is there someone who can help me to deal with this problem? > > > > > > Posted at Nginx Forum: https://forum.nginx.org/read. > > > php?2,271846,271846#msg-271846 > > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,271846,271869#msg-271869 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lenaigst at maelenn.org Mon Jan 9 18:17:33 2017 From: lenaigst at maelenn.org (Thierry) Date: Mon, 9 Jan 2017 20:17:33 +0200 Subject: need help reverse-proxy config Message-ID: <207727426.20170109201733@maelenn.org> Dear, I have a reverse-proxy in front of my two servers: web (apache2) and email (nginx-iredmail). The proxy-reverse is perfectly working with my web server running Apache2, but I am not able to make it working for my email server. The reverse-proxy and the email server are both running with the same version of Nginx (1.9). I have tried many configs without any success. My last one: *********************************************************************** server { listen 446; server_name email.domain.ltd; location / { proxy_pass https://email_server_ip:446; proxy_ssl_certificate /etc/ssl/certs/cert.chained.crt; proxy_ssl_certificate_key /etc/ssl/private/private.key; proxy_ssl_protocols TLSv1 TLSv1.1 TLSv1.2; proxy_ssl_ciphers HIGH:!aNULL:!MD5; proxy_ssl_trusted_certificate /etc/ssl/certs/cert.chained.crt; proxy_ssl_verify on; proxy_ssl_verify_depth 2; proxy_ssl_session_reuse on; error_log /var/log/nginx/error-proxy.log; access_log /var/log/nginx/access-proxy.log; } } Can I please have some help ?? 
Thx -- Cordialement, Thierry e-mail : lenaigst at maelenn.org PGP Key: 0xB7E3B9CD From nginx-forum at forum.nginx.org Mon Jan 9 18:43:51 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Mon, 09 Jan 2017 13:43:51 -0500 Subject: need help reverse-proxy config In-Reply-To: <207727426.20170109201733@maelenn.org> References: <207727426.20170109201733@maelenn.org> Message-ID: stream { limit_conn_zone $binary_remote_addr zone=straddr:10m; upstream backendsmtp { server smtp1.local:25; server smtp2.local:25; } server { listen 2025 ssl; error_log /logging/stream_local_smtp.log debug; ssl_certificate /nginx/crts/sdom.cert; ssl_certificate_key /nginx/crts/sdom.key; include /nginx/conf/sslciphers.conf; ssl_session_timeout 60m; ssl_handshake_timeout 10s; proxy_connect_timeout 10s; proxy_timeout 300s; proxy_pass backendsmtp; limit_conn straddr 15; limit_conn_log_level error; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271892#msg-271892 From ruz at sports.ru Mon Jan 9 18:56:54 2017 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Mon, 9 Jan 2017 21:56:54 +0300 Subject: upstream timeouts I can not explain In-Reply-To: <20170109151419.GA1761@mdounin.ru> References: <20170109151419.GA1761@mdounin.ru> Message-ID: On Mon, Jan 9, 2017 at 6:14 PM, Maxim Dounin wrote: > Hello! > > On Sun, Jan 08, 2017 at 05:04:47PM +0300, ?????? ??????? wrote: > > > Hello, > > > > nginx 1.10.2 [1] running on FreeBSD 10.3-RELEASE-p7 > > > > We have quite a lot of "upstream timed out" errors. For example: > > > > 2017/01/08 16:20:54 [error] 82154#0: *494223064 upstream timed out (60: > > Operation timed out) while connecting to upstream, client: 192.168.1.43, > > server: q0.sports.ru, request: "POST /pub?topic=docs HTTP/1.1", > upstream: " > > http://192.168.1.206:4151/pub?topic=docs", host: "q0.sports.ru" > > Here is tcpdump: > > > > 16:20:53.218916 IP myself-fe.28957 > q0-2.4151: Flags [S], seq > 2479999411, > > win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 787079162 ecr 0], > > length 0 > > 16:20:53.219721 IP q0-2.4151 > myself-fe.28957: Flags [S.], seq > 605231398, > > ack 2479999412, win 28960, options [mss 1460,sackOK,TS val 352522447 ecr > > 787079162,nop,wscale 7], length 0 > > 16:20:53.219742 IP myself-fe.28957 > q0-2.4151: Flags [.], ack 1, win > 1040, > > options [nop,nop,TS val 787079163 ecr 352522447], length 0 > > 16:20:54.572712 IP myself-fe.28957 > q0-2.4151: Flags [F.], seq 1, ack 1, > > win 1040, options [nop,nop,TS val 787080516 ecr 352522447], length 0 > > 16:20:54.573173 IP q0-2.4151 > myself-fe.28957: Flags [F.], seq 1, ack 2, > > win 227, options [nop,nop,TS val 352522786 ecr 787080516], length 0 > > 16:20:54.573195 IP myself-fe.28957 > q0-2.4151: Flags [.], ack 2, win > 1040, > > options [nop,nop,TS val 787080516 ecr 352522786], length 0 > > > > What I don't see here is a packet with HTTP request. > > The "upstream timeout ... while connecting to upstream" suggests > that nginx wasn't able to see the connect event. > > [...] > Ok, spent whole day learning and investigating. > Any idea where I should look next? > > Some things to consider: > > - Make sure you are looking at tcpdump on the nginx host, and > there are no firewalls on the host to interfere with. > These were tcpdumps from nginx host. I have dump from other end and they are symmetrical. We have proxy_connect_timeout at 300ms at the top level of the config. When we first started to investigate it we increased timeout to 1s for this location. 
An hour ago I increased it to 5 seconds and finally couldn't reproduce the problem with a simple "bomber" script. >From dumps you can see that connection was established within 10ms. What can stop nginx from receiving the event for more than a second? This happens on all served domains as pretty much everywhere connect timeout is 300ms. If I tail -F error log, count this error occurrences grouped by second then I see 1-3 seconds spikes: silence or <5 errors for 20-40 seconds then ~200 errors in a few seconds. Is there anything that may block events processing nginx for quite a while? - Try collecting nginx debug logs, they should have enough > information to see if the event was reported by the kernel or > not. > As you can see from above it is reported by the kernel, but with quite big delay. Not sure if delay is in kernel or nginx, no delay on the network. > Is it some known issue? > > No. > > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 9 19:59:55 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 9 Jan 2017 22:59:55 +0300 Subject: upstream timeouts I can not explain In-Reply-To: References: <20170109151419.GA1761@mdounin.ru> Message-ID: <20170109195955.GH1761@mdounin.ru> Hello! On Mon, Jan 09, 2017 at 09:56:54PM +0300, ?????? ??????? wrote: > On Mon, Jan 9, 2017 at 6:14 PM, Maxim Dounin wrote: > > > Hello! > > > > On Sun, Jan 08, 2017 at 05:04:47PM +0300, ?????? ??????? wrote: > > > > > Hello, > > > > > > nginx 1.10.2 [1] running on FreeBSD 10.3-RELEASE-p7 > > > > > > We have quite a lot of "upstream timed out" errors. For example: > > > > > > 2017/01/08 16:20:54 [error] 82154#0: *494223064 upstream timed out (60: > > > Operation timed out) while connecting to upstream, client: 192.168.1.43, > > > server: q0.sports.ru, request: "POST /pub?topic=docs HTTP/1.1", > > upstream: " > > > http://192.168.1.206:4151/pub?topic=docs", host: "q0.sports.ru" > > > Here is tcpdump: > > > > > > 16:20:53.218916 IP myself-fe.28957 > q0-2.4151: Flags [S], seq > > 2479999411, > > > win 65535, options [mss 1460,nop,wscale 6,sackOK,TS val 787079162 ecr 0], > > > length 0 > > > 16:20:53.219721 IP q0-2.4151 > myself-fe.28957: Flags [S.], seq > > 605231398, > > > ack 2479999412, win 28960, options [mss 1460,sackOK,TS val 352522447 ecr > > > 787079162,nop,wscale 7], length 0 > > > 16:20:53.219742 IP myself-fe.28957 > q0-2.4151: Flags [.], ack 1, win > > 1040, > > > options [nop,nop,TS val 787079163 ecr 352522447], length 0 > > > 16:20:54.572712 IP myself-fe.28957 > q0-2.4151: Flags [F.], seq 1, ack 1, > > > win 1040, options [nop,nop,TS val 787080516 ecr 352522447], length 0 > > > 16:20:54.573173 IP q0-2.4151 > myself-fe.28957: Flags [F.], seq 1, ack 2, > > > win 227, options [nop,nop,TS val 352522786 ecr 787080516], length 0 > > > 16:20:54.573195 IP myself-fe.28957 > q0-2.4151: Flags [.], ack 2, win > > 1040, > > > options [nop,nop,TS val 787080516 ecr 352522786], length 0 > > > > > > What I don't see here is a packet with HTTP request. > > > > The "upstream timeout ... while connecting to upstream" suggests > > that nginx wasn't able to see the connect event. > > > > [...] > > > > Ok, spent whole day learning and investigating. > > > Any idea where I should look next? 
> > > > Some things to consider: > > > > - Make sure you are looking at tcpdump on the nginx host, and > > there are no firewalls on the host to interfere with. > > > > These were tcpdumps from nginx host. I have dump from other end and they > are symmetrical. We have proxy_connect_timeout at 300ms at the top level of > the config. When we first started to investigate it we increased timeout to > 1s for > this location. An hour ago I increased it to 5 seconds and finally couldn't > reproduce > the problem with a simple "bomber" script. > > From dumps you can see that connection was established within 10ms. What > can stop nginx from receiving the event for more than a second? > > This happens on all served domains as pretty much everywhere connect > timeout is 300ms. If I tail -F error log, count this error occurrences > grouped by second then I see 1-3 seconds spikes: silence or <5 errors for > 20-40 seconds then ~200 errors in a few seconds. Is there anything that may > block events processing nginx for quite a while? Typical kern.sched.quantum is about 100ms, so several CPU-intensive tasks can delay processing of the events enough to trigger a timeout if a context switch happens at a bad time. Note well that various blocking operations in nginx itself - either disk or CPU-intensive ones - can also delay processing of various events, and this in turn can trigger unexpected timeouts when using timers comparable to a typical delay introduced on each event loop iteration. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon Jan 9 21:02:34 2017 From: nginx-forum at forum.nginx.org (pbooth) Date: Mon, 09 Jan 2017 16:02:34 -0500 Subject: Performance test caps at 600 Mbit/s In-Reply-To: <6310d40dd6547aa03a25e4b78d63e1ce.NginxMailingListEnglish@forum.nginx.org> References: <6310d40dd6547aa03a25e4b78d63e1ce.NginxMailingListEnglish@forum.nginx.org> Message-ID: > Both NIC supports the speed of 1000Mb/s How do you know? Your kernel or NIC config might be limiting you. iperf, snfnettest, or etherate will show you the maximum possible bandwidth at the TCP or IP layer. If it's under 700 then you know to focus on the NIC and OS. If it's above 900 then the problem is in your nginx or your test workload. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271846,271895#msg-271895 From oscaretu at gmail.com Mon Jan 9 21:36:20 2017 From: oscaretu at gmail.com (oscaretu .) Date: Mon, 9 Jan 2017 22:36:20 +0100 Subject: Performance test caps at 600 Mbit/s In-Reply-To: References: <6310d40dd6547aa03a25e4b78d63e1ce.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, I don't find anything in Google about snfnettest Can you confirm that the name is OK? Kind regards, Oscar On Mon, Jan 9, 2017 at 10:02 PM, pbooth wrote: > > Both NIC supports the speed of 1000Mb/s > > How do you know? Your kernel or NIC config might be limiting you. > > iperf, snfnettest, or etherate will show you the maximum possible > bandwidth at the TCP or IP layer. > If it's under 700 then you know to focus on the NIC and OS. If it's above > 900 then the > problem is in your nginx or your test workload. > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,271846,271895#msg-271895 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Jan 10 04:47:38 2017 From: nginx-forum at forum.nginx.org (Thierry) Date: Mon, 09 Jan 2017 23:47:38 -0500 Subject: need help reverse-proxy config In-Reply-To: References: <207727426.20170109201733@maelenn.org> Message-ID: <9ccb928a882f8218c2ee5a7cec2ac0db.NginxMailingListEnglish@forum.nginx.org> proxy nginx[20076]: nginx: [emerg] "stream" directive is not allowed here in /etc/nginx/conf.d/reverse-proxy.conf:47 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271897#msg-271897 From anoopalias01 at gmail.com Tue Jan 10 04:50:42 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 10 Jan 2017 10:20:42 +0530 Subject: need help reverse-proxy config In-Reply-To: <9ccb928a882f8218c2ee5a7cec2ac0db.NginxMailingListEnglish@forum.nginx.org> References: <207727426.20170109201733@maelenn.org> <9ccb928a882f8218c2ee5a7cec2ac0db.NginxMailingListEnglish@forum.nginx.org> Message-ID: http://nginx.org/en/docs/stream/ngx_stream_core_module.html#stream stream should be in the main context. On Tue, Jan 10, 2017 at 10:17 AM, Thierry wrote: > proxy nginx[20076]: nginx: [emerg] "stream" directive is not allowed here > in > /etc/nginx/conf.d/reverse-proxy.conf:47 > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,271891,271897#msg-271897 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jan 10 06:04:15 2017 From: nginx-forum at forum.nginx.org (Thierry) Date: Tue, 10 Jan 2017 01:04:15 -0500 Subject: need help reverse-proxy config In-Reply-To: References: Message-ID: <7495648a87a029c622180f908adcface.NginxMailingListEnglish@forum.nginx.org> error_log /var/log/nginx/error.log info; events { worker_connections 1024; } stream { upstream backend { hash xxx.xxx.xxx.xxx consistent; server email.domain.tld:448; } server { listen 448; proxy_connect_timeout 1s; proxy_timeout 3s; proxy_pass backend; } } I have difficulties to understand the "main context" idea .... With this exemple, is my "stream" in the right context ?? Seems not. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271899#msg-271899 From anoopalias01 at gmail.com Tue Jan 10 06:11:05 2017 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 10 Jan 2017 11:41:05 +0530 Subject: need help reverse-proxy config In-Reply-To: <7495648a87a029c622180f908adcface.NginxMailingListEnglish@forum.nginx.org> References: <7495648a87a029c622180f908adcface.NginxMailingListEnglish@forum.nginx.org> Message-ID: main context means it come directly in nginx.conf http context means it should be put inside http{ } server context means it should be in server { } likewise.. You can search the directive like http://nginx.org/r/xxxxx_xxxx For eg: http://nginx.org/r/stream check the context where that directive is applicable ..since stream says main..if you put like http{ stream .. .. } it will be invalid syntax . On Tue, Jan 10, 2017 at 11:34 AM, Thierry wrote: > error_log /var/log/nginx/error.log info; > > events { > worker_connections 1024; > } > > stream { > upstream backend { > hash xxx.xxx.xxx.xxx consistent; > > server email.domain.tld:448; > } > > > server { > listen 448; > proxy_connect_timeout 1s; > proxy_timeout 3s; > proxy_pass backend; > } > } > > I have difficulties to understand the "main context" idea .... 
With this example, is my "stream" in the right context ?? Seems not.
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271899#msg-271899

--
*Anoop P Alias*

From nginx-forum at forum.nginx.org Tue Jan 10 06:53:29 2017
From: nginx-forum at forum.nginx.org (Thierry)
Date: Tue, 10 Jan 2017 01:53:29 -0500
Subject: need help reverse-proxy config
In-Reply-To:
References:
Message-ID:

Thx a lot ... I do understand better now.

In my nginx.conf I do have:

*****
stream {
    limit_conn_zone $binary_remote_addr zone=straddr:10m;
    upstream backendmail {
        server email.domain.tld:448;
    }
}
*****

In my server.conf I do have:

*****
server {
    listen 448 ssl;
    error_log /var/log/nginx/error-proxy_mail.log debug;
    ssl on;
    ssl_certificate /etc/ssl/certs/cert.org.chained.crt;
    ssl_certificate_key /etc/ssl/private/iRedMail.key;
    include /etc/nginx/sslciphers.conf;
    ssl_session_timeout 60m;
    ssl_handshake_timeout 10s;
    proxy_connect_timeout 10s;
    proxy_timeout 300s;
    proxy_pass backendmail;
    limit_conn straddr 15;
    limit_conn_log_level error;
}
******

When trying to run the server:

nginx: [emerg] "ssl_handshake_timeout" directive is not allowed

If removing this directive:

nginx: [emerg] "proxy_timeout" directive is not allowed here

etc ... I did respect the context this time.

Thx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271901#msg-271901

From vl at nginx.com Tue Jan 10 15:19:29 2017
From: vl at nginx.com (Vladimir Homutov)
Date: Tue, 10 Jan 2017 18:19:29 +0300
Subject: stream module on 100% cpu load
In-Reply-To: <20170103142038.Horde.rAxhfHMCPMZ3D73rldqchQI@andreasschulze.de>
References: <20170103142038.Horde.rAxhfHMCPMZ3D73rldqchQI@andreasschulze.de>
Message-ID:

03.01.2017 16:20, A. Schulze wrote:
>
> Hello,
>
> last days I set up a server to encapsulate DNS over TLS.
>
> - DNS-Server @localhost, Port 53 TCP
> - NGINX stream module on public IP, Port 853 TCP, SSL enabled.
>
> That works so far.
> Now I thought to scan this setup using ssllabs.com
>
> I shut down my HTTPS webserver and let the nginx stream module
> listen on port 443.
> To make it easier I also switched the stream proxy target to ::1,
> port 80.
> Now I could again access my website, not via nginx ssl but via the
> nginx stream module.
> Works also so far...
>
> Now I pointed SSLlabs to the server and ... surprise!
>
> The scan terminates with "Assessment failed: Unexpected failure";
> the last log lines nginx wrote were:
>
> 2017/01/03 13:26:49 [info] 19253#0: *25 client
> [2600:c02:1020:4202::ac10:8267]:50918 connected to [2001:db8::53]:443
> 2017/01/03 13:26:49 [info] 19253#0: *25 proxy [2001:db8::53]:42534
> connected to [::1]:80
> 2017/01/03 13:26:50 [notice] 19253#0: *25 SSL renegotiation disabled
> while proxying connection, client: 2600:c02:1020:4202::ac10:8267,
> server: [2001:db8::53]:443, upstream: "[::1]:80", bytes from/to
> client:138/0, bytes from/to upstream:0/138
>
> The nginx process stops responding and eats up 100% cpu time.
>
> After reading
> http://nginx.org/en/docs/stream/ngx_stream_ssl_module.html again,
> I added "worker_processes auto;" to nginx.conf.
>
> That changed the picture a little bit.
> The ssllabs scan no longer terminates early but finishes with a
> usual result. Still, one nginx process consumes 100% cpu time.
>
> I guess there is something broken with my setup or nginx.
What further > information are needed to nail down the problem? > > Andreas Thank you for reporting. You may try the following patch: diff --git a/src/stream/ngx_stream_proxy_module.c b/src/stream/ngx_stream_proxy_module.c --- a/src/stream/ngx_stream_proxy_module.c +++ b/src/stream/ngx_stream_proxy_module.c @@ -1564,6 +1564,7 @@ ngx_stream_proxy_process(ngx_stream_sess return; } + src->read->ready = 0; src->read->eof = 1; n = 0; } From nginx-forum at forum.nginx.org Tue Jan 10 16:22:53 2017 From: nginx-forum at forum.nginx.org (Thierry) Date: Tue, 10 Jan 2017 11:22:53 -0500 Subject: need help reverse-proxy config In-Reply-To: <207727426.20170109201733@maelenn.org> References: <207727426.20170109201733@maelenn.org> Message-ID: I am still debugging a bit: 2017/01/10 18:17:59 [debug] 5174#5174: accept mutex lock failed: 0 2017/01/10 18:17:59 [debug] 5174#5174: epoll timer: 500 2017/01/10 18:17:59 [debug] 5172#5172: epoll: fd:13 ev:0005 d:00007F81B6D351D0 2017/01/10 18:17:59 [debug] 5172#5172: *1 http keepalive handler 2017/01/10 18:17:59 [debug] 5172#5172: *1 malloc: 00007F81B7A8B320:1024 2017/01/10 18:17:59 [debug] 5172#5172: *1 SSL_read: 461 2017/01/10 18:17:59 [debug] 5172#5172: *1 SSL_read: -1 2017/01/10 18:17:59 [debug] 5172#5172: *1 SSL_get_error: 2 2017/01/10 18:17:59 [debug] 5172#5172: *1 reusable connection: 0 2017/01/10 18:17:59 [debug] 5172#5172: *1 posix_memalign: 00007F81B7A98570:4096 @16 2017/01/10 18:17:59 [debug] 5172#5172: *1 event timer del: 13: 1484065126843 2017/01/10 18:17:59 [debug] 5172#5172: *1 http process request line 2017/01/10 18:17:59 [debug] 5172#5172: *1 http request line: "GET /SOGo/ HTTP/1.1" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http uri: "/SOGo/" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http args: "" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http exten: "" 2017/01/10 18:17:59 [debug] 5172#5172: *1 posix_memalign: 00007F81B7A8D750:4096 @16 2017/01/10 18:17:59 [debug] 5172#5172: *1 http process request header line 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Host: email_server.domain.tld:port_number" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Connection: keep-alive" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Cache-Control: max-age=0" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Upgrade-Insecure-Requests: 1" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "User-Agent: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5 Build/MMB29T) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/54.0.2840.85 Mobile Safari/537.36" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/webp,*/*;q=0.8" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Accept-Encoding: gzip, deflate, sdch, br" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header: "Accept-Language: fr-FR,fr;q=0.8,en-US;q=0.6,en;q=0.4" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http header done 2017/01/10 18:17:59 [debug] 5172#5172: *1 generic phase: 0 2017/01/10 18:17:59 [debug] 5172#5172: *1 rewrite phase: 1 2017/01/10 18:17:59 [debug] 5172#5172: *1 test location: "/" 2017/01/10 18:17:59 [debug] 5172#5172: *1 using configuration "/" 2017/01/10 18:17:59 [debug] 5172#5172: *1 http cl:-1 max:1048576 2017/01/10 18:17:59 [debug] 5172#5172: *1 rewrite phase: 3 2017/01/10 18:17:59 [debug] 5172#5172: *1 post rewrite phase: 4 2017/01/10 18:17:59 [debug] 5172#5172: *1 generic phase: 5 2017/01/10 18:17:59 [debug] 5172#5172: *1 generic phase: 6 Posted at Nginx Forum: 
https://forum.nginx.org/read.php?2,271891,271909#msg-271909 From nginx-forum at forum.nginx.org Tue Jan 10 16:44:45 2017 From: nginx-forum at forum.nginx.org (woodyweaver) Date: Tue, 10 Jan 2017 11:44:45 -0500 Subject: CRL validation Message-ID: I need to use nginx with client validation. Lots of good info about that. But I need to ensure that nginx verifies the certificate has not been revoked through CRL or OCSP checking. Is that part of ssl_verify_client on ? How can I specify a cached CRL location? --woody Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271910,271910#msg-271910 From ruz at sports.ru Tue Jan 10 17:46:39 2017 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Tue, 10 Jan 2017 20:46:39 +0300 Subject: upstream timeouts I can not explain In-Reply-To: <20170109195955.GH1761@mdounin.ru> References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru> Message-ID: > > > > The "upstream timeout ... while connecting to upstream" suggests > > > that nginx wasn't able to see the connect event. > > > > > > [...] > > > > > > Some things to consider: > > > > > > - Make sure you are looking at tcpdump on the nginx host, and > > > there are no firewalls on the host to interfere with. > > > > > > > These were tcpdumps from nginx host. I have dump from other end and they > > are symmetrical. We have proxy_connect_timeout at 300ms at the top level > of > > the config. When we first started to investigate it we increased timeout > to > > 1s for > > this location. An hour ago I increased it to 5 seconds and finally > couldn't > > reproduce > > the problem with a simple "bomber" script. > > > > From dumps you can see that connection was established within 10ms. What > > can stop nginx from receiving the event for more than a second? > > > > This happens on all served domains as pretty much everywhere connect > > timeout is 300ms. If I tail -F error log, count this error occurrences > > grouped by second then I see 1-3 seconds spikes: silence or <5 errors for > > 20-40 seconds then ~200 errors in a few seconds. Is there anything that > may > > block events processing nginx for quite a while? > > Typical kern.sched.quantum is about 100ms, so several > CPU-intensive tasks can delay processing of the events enough to > trigger a timeout if a context switch happens at a bad time. > > Note well that various blocking operations in nginx itself - > either disk or CPU-intensive ones - can also delay processing of > various events, and this in turn can trigger unexpected timeouts > when using timers comparable to a typical delay introduced on each > event loop iteration. We tuned upstreams' parameters to avoid both backend servers marked as unavailable during these spikes. This prevents bogus errors. Also, in this particular service I'm experimenting with keepalive connections between nginx and upstreams. Above steps don't solve root cause. Can you suggest me further steps to localize the issue? I'm not sure how to detect if it's blocking operation in nginx, OS scheduling or something else. -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter_booth at me.com Tue Jan 10 19:13:37 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 10 Jan 2017 14:13:37 -0500 Subject: upstream timeouts I can not explain In-Reply-To: References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru> Message-ID: All hosts have characteristic stalls and blips but the scale of this issue can vary 100x depending on is configuration. You can get some data about these stalls using solar flare's sysjitter utility or Gil Tene's jhiccup. Sent from my iPhone On Jan 10, 2017, at 12:46 PM, ?????? ??????? wrote: >> > > The "upstream timeout ... while connecting to upstream" suggests >> > > that nginx wasn't able to see the connect event. >> > > >> > > [...] >> > > >> > > Some things to consider: >> > > >> > > - Make sure you are looking at tcpdump on the nginx host, and >> > > there are no firewalls on the host to interfere with. >> > > >> > >> > These were tcpdumps from nginx host. I have dump from other end and they >> > are symmetrical. We have proxy_connect_timeout at 300ms at the top level of >> > the config. When we first started to investigate it we increased timeout to >> > 1s for >> > this location. An hour ago I increased it to 5 seconds and finally couldn't >> > reproduce >> > the problem with a simple "bomber" script. >> > >> > From dumps you can see that connection was established within 10ms. What >> > can stop nginx from receiving the event for more than a second? >> > >> > This happens on all served domains as pretty much everywhere connect >> > timeout is 300ms. If I tail -F error log, count this error occurrences >> > grouped by second then I see 1-3 seconds spikes: silence or <5 errors for >> > 20-40 seconds then ~200 errors in a few seconds. Is there anything that may >> > block events processing nginx for quite a while? >> >> Typical kern.sched.quantum is about 100ms, so several >> CPU-intensive tasks can delay processing of the events enough to >> trigger a timeout if a context switch happens at a bad time. >> >> Note well that various blocking operations in nginx itself - >> either disk or CPU-intensive ones - can also delay processing of >> various events, and this in turn can trigger unexpected timeouts >> when using timers comparable to a typical delay introduced on each >> event loop iteration. > > We tuned upstreams' parameters to avoid both backend servers marked as unavailable during these spikes. This prevents bogus errors. > > Also, in this particular service I'm experimenting with keepalive connections between nginx and upstreams. > > Above steps don't solve root cause. Can you suggest me further steps to localize the issue? I'm not sure how to detect if it's blocking operation in nginx, OS scheduling or something else. > > -- > ?????? ??????? > ???????????? ?????? ?????????? ???-???????? > +7(916) 597-92-69, ruz @ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
From nginx-forum at forum.nginx.org Tue Jan 10 20:42:24 2017
From: nginx-forum at forum.nginx.org (vegetax)
Date: Tue, 10 Jan 2017 15:42:24 -0500
Subject: Rewrite
Message-ID:

Hi, I need some help. I am load balancing my syslog traffic from my
WAF device to the nginx server below, and the servers in the pool are
servers running rsyslog. Currently the issue is that when the logs
hit the nginx server it re-writes the source host name; for example,
below in the logs you see "nginx_vm" but it should be "WAF01".
Does anyone have any suggestions to stop this from happening?

# Nginx VM "nginx_vm"
stream {
    upstream splunk_backend {
        server 192.168.1.31:514;
        server 192.168.1.32:514;
    }
    server {
        listen 192.168.2.2:514;
        listen 514 udp;
        proxy_connect_timeout 1s;
        proxy_timeout 10m;
        proxy_pass splunk_backend;
        proxy_buffer_size 64k;
        proxy_next_upstream_timeout 1;
        error_log /var/log/nginx/splunk.log info;
    }
}

# MY IMPERVA WAF device "WAF01"
Jan 5 13:54:17 nginx_vm CEF: 0|Imperva Inc.|SecureSphere|11.0.0.3_0|Profile|unauthorized-http-req-content-t|Low|act=alert dst=10.10.240.35 dpt=80 duser=${Alert.username} src=41.104.58.1 spt=20872 proto=TCP rt=05 January 2017 18:54:17 cs1=Web Profile Policy cs1Label=Policy

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271913,271913#msg-271913

From jtgeyser at gmail.com Tue Jan 10 21:55:52 2017
From: jtgeyser at gmail.com (Jonathan Geyser)
Date: Tue, 10 Jan 2017 13:55:52 -0800
Subject: Nginx not honoring keepalive / multiple requests to http backend over single TCP session
Message-ID:

Hi guys,

I'm attempting to have multiple requests to a backend reuse the same
TCP session so as to avoid handshaking for each subsequent request.
Nginx appears to send FIN ACK to the backend after every request.

Am I doing something wrong?

Here is the current configuration: https://paste.ngx.cc/6c24411681f24790

Thanks in advance,
Jonathan

From r1ch+nginx at teamliquid.net Tue Jan 10 22:26:20 2017
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Tue, 10 Jan 2017 23:26:20 +0100
Subject: Nginx not honoring keepalive / multiple requests to http backend over single TCP session
In-Reply-To:
References:
Message-ID:

The FIN ACK suggests that the other side is responsible for closing
the connection. If nginx was terminating the connection, there would
be no ACK bit set. Check that your upstream server supports keepalive.

On Tue, Jan 10, 2017 at 10:55 PM, Jonathan Geyser wrote:

> Hi guys,
>
> I'm attempting to have multiple requests to a backend reuse the same
> TCP session so as to avoid handshaking for each subsequent request.
> Nginx appears to send FIN ACK to the backend after every request.
>
> Am I doing something wrong?
>
> Here is the current configuration: https://paste.ngx.cc/6c24411681f24790
>
> Thanks in advance,
> Jonathan

From alex at samad.com.au Wed Jan 11 00:42:04 2017
From: alex at samad.com.au (Alex Samad)
Date: Wed, 11 Jan 2017 11:42:04 +1100
Subject: CRL validation
In-Reply-To:
References:
Message-ID:

Hi

I have a cron script that generates a CRL file and places it in a file
for nginx to read...
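A rough sketch of that kind of setup, with hypothetical paths (the
cron line and file names here are made up for illustration; ssl_crl
and ssl_verify_client are the relevant nginx directives):

# crontab entry: regenerate the CRL bundle nightly, then reload nginx
# 0 3 * * * /usr/local/bin/update-crl.sh && nginx -s reload

server {
    listen 443 ssl;
    ssl_certificate        /etc/ssl/certs/server.crt;     # hypothetical paths
    ssl_certificate_key    /etc/ssl/private/server.key;
    ssl_client_certificate /etc/ssl/certs/client-ca.crt;  # CA for client certs
    ssl_verify_client      on;
    # PEM-format CRL written by the cron script; nginx reads it at
    # startup/reload only, it does not fetch CRLs or do OCSP lookups
    # per request
    ssl_crl                /etc/ssl/crl/ca-bundle.crl;
}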
I believe I reload nginx after doing this.

I don't think - happy to be proved wrong - that nginx checks for an
OCSP or CRL attribute in the cert and makes the relevant request.

Alex

On 11 January 2017 at 03:44, woodyweaver wrote:

> I need to use nginx with client validation. Lots of good info about
> that. But I need to ensure that nginx verifies the certificate has
> not been revoked through CRL or OCSP checking. Is that part of
> ssl_verify_client on? How can I specify a cached CRL location?
>
> --woody
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271910,271910#msg-271910

From nginx-forum at forum.nginx.org Wed Jan 11 06:06:56 2017
From: nginx-forum at forum.nginx.org (Thierry)
Date: Wed, 11 Jan 2017 01:06:56 -0500
Subject: need help reverse-proxy config
In-Reply-To:
References: <207727426.20170109201733@maelenn.org>
Message-ID: <0ec7dc47b8a715cb874a18c661c37ab6.NginxMailingListEnglish@forum.nginx.org>

It seems to be linked to my SSL certificate ...

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271919#msg-271919

From nginx-forum at forum.nginx.org Wed Jan 11 12:39:32 2017
From: nginx-forum at forum.nginx.org (bdesemb)
Date: Wed, 11 Jan 2017 07:39:32 -0500
Subject: proxy_cache seems not working with X-Accel-Redirect
In-Reply-To: <20130817023328.GZ2130@mdounin.ru>
References: <20130817023328.GZ2130@mdounin.ru>
Message-ID: <8dab3ad7f846a957dc0e2883718f01fb.NginxMailingListEnglish@forum.nginx.org>

Hi Maxim,

Can you post an example please? I don't understand how to do that.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,241734,271925#msg-271925

From mdounin at mdounin.ru Wed Jan 11 13:13:38 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Jan 2017 16:13:38 +0300
Subject: Rewrite
In-Reply-To:
References:
Message-ID: <20170111131338.GE47718@mdounin.ru>

Hello!

On Tue, Jan 10, 2017 at 03:42:24PM -0500, vegetax wrote:

> Hi, I need some help. I am load balancing my syslog traffic from my
> WAF device to the nginx server below, and the servers in the pool
> are servers running rsyslog. Currently the issue is that when the
> logs hit the nginx server it re-writes the source host name; for
> example, below in the logs you see "nginx_vm" but it should be
> "WAF01".
> Does anyone have any suggestions to stop this from happening?

It looks like your rsyslog is configured to log the name of the
system it got the message from instead of the hostname from the
syslog message. Check your rsyslog configuration.

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Wed Jan 11 14:33:32 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Jan 2017 17:33:32 +0300
Subject: proxy_cache seems not working with X-Accel-Redirect
In-Reply-To: <8dab3ad7f846a957dc0e2883718f01fb.NginxMailingListEnglish@forum.nginx.org>
References: <20130817023328.GZ2130@mdounin.ru> <8dab3ad7f846a957dc0e2883718f01fb.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170111143332.GG47718@mdounin.ru>

Hello!

On Wed, Jan 11, 2017 at 07:39:32AM -0500, bdesemb wrote:

> Hi Maxim,
>
> Can you post an example please? I don't understand how to do that.
Try something like this:

server {
    listen 8080;

    location / {
        # no caching configured, only proxying to an intermediate
        # caching layer; X-Accel-Redirect is processed here

        proxy_pass http://127.0.0.1:8081;
    }
}

server {
    listen 8081;

    location / {
        # here caching happens; to cache X-Accel-Redirect
        # responses, the header is ignored

        proxy_pass http://real-upstream-server;
        proxy_cache foo;
        proxy_ignore_headers X-Accel-Redirect;
    }
}

--
Maxim Dounin
http://nginx.org/

From ruz at sports.ru Wed Jan 11 15:07:01 2017
From: ruz at sports.ru (Ruslan Zakirov)
Date: Wed, 11 Jan 2017 18:07:01 +0300
Subject: upstream timeouts I can not explain
In-Reply-To: <20170109195955.GH1761@mdounin.ru>
References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru>
Message-ID:

On Mon, Jan 9, 2017 at 10:59 PM, Maxim Dounin wrote:

> Typical kern.sched.quantum is about 100ms, so several
> CPU-intensive tasks can delay processing of the events enough to
> trigger a timeout if a context switch happens at a bad time.

Here is what I see in truss' output:

38.820523207 0.000006568 kevent(28,{ },0,{
198,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821405071 },512,{ 6.215000000 }) = 1 (0x1)
39.783094188 0.000022875 kevent(28,{ },0,{
52,EVFILT_READ,0x0,0x0,0x30b,0x81f800068
204,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821401588
51,EVFILT_READ,0x0,0x0,0xec,0x81f800000
68,EVFILT_READ,EV_CLEAR,0x0,0x8bf,0x81f816580
7,EVFILT_READ,EV_CLEAR,0x0,0x27f,0x81f813869
57,EVFILT_READ,EV_CLEAR,0x0,0x767,0x81f817bd8
203,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x248,0x81f8030c1
181,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x9b77,0x81f80ea68
178,EVFILT_READ,EV_CLEAR,0x0,0x39d,0x81f8010a9
198,EVFILT_READ,EV_CLEAR,0x0,0x3d3,0x81f805071
204,EVFILT_READ,EV_CLEAR,0x0,0x9da,0x81f801588
190,EVFILT_READ,EV_CLEAR,0x0,0x4ff,0x81f80fc48
154,EVFILT_READ,EV_CLEAR,0x0,0x88e,0x81f8130b1
151,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xc1db,0x81f814290
157,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xe841,0x81f80c029
195,EVFILT_READ,EV_CLEAR,0x0,0x952,0x81f8090a1
194,EVFILT_READ,EV_CLEAR,0x0,0x929,0x81f809ac8
201,EVFILT_READ,EV_CLEAR,0x0,0x4ef,0x81f80c980
174,EVFILT_READ,EV_CLEAR,0x0,0x51e,0x81f816518
77,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x1168,0x81f811c61 },512,{ 5.253000000 }) = 20 (0x14)

There is a 1 second delay between the two syscalls. Then nginx goes
nuts processing all it missed during this second. I can not tell from
this output how much time was spent in these syscalls. Can anyone?

What I don't like is a timeout greater than 5 seconds. Doesn't it
mean that the system is allowed to block for the timeout time to
collect events?

--
Ruslan Zakirov
Head of the web services development team
+7(916) 597-92-69, ruz @

From mdounin at mdounin.ru Wed Jan 11 15:44:22 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Jan 2017 18:44:22 +0300
Subject: upstream timeouts I can not explain
In-Reply-To:
References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru>
Message-ID: <20170111154422.GH47718@mdounin.ru>

Hello!

On Wed, Jan 11, 2017 at 06:07:01PM +0300, Ruslan Zakirov wrote:

> On Mon, Jan 9, 2017 at 10:59 PM, Maxim Dounin wrote:
>
> > Typical kern.sched.quantum is about 100ms, so several
> > CPU-intensive tasks can delay processing of the events enough to
> > trigger a timeout if a context switch happens at a bad time.
>
> Here is what I see in truss' output:
>
> 38.820523207 0.000006568 kevent(28,{ },0,{
> 198,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821405071 },512,{ 6.215000000 }) = 1 (0x1)
> 39.783094188 0.000022875 kevent(28,{ },0,{
> 52,EVFILT_READ,0x0,0x0,0x30b,0x81f800068
> 204,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821401588
> 51,EVFILT_READ,0x0,0x0,0xec,0x81f800000
> 68,EVFILT_READ,EV_CLEAR,0x0,0x8bf,0x81f816580
> 7,EVFILT_READ,EV_CLEAR,0x0,0x27f,0x81f813869
> 57,EVFILT_READ,EV_CLEAR,0x0,0x767,0x81f817bd8
> 203,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x248,0x81f8030c1
> 181,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x9b77,0x81f80ea68
> 178,EVFILT_READ,EV_CLEAR,0x0,0x39d,0x81f8010a9
> 198,EVFILT_READ,EV_CLEAR,0x0,0x3d3,0x81f805071
> 204,EVFILT_READ,EV_CLEAR,0x0,0x9da,0x81f801588
> 190,EVFILT_READ,EV_CLEAR,0x0,0x4ff,0x81f80fc48
> 154,EVFILT_READ,EV_CLEAR,0x0,0x88e,0x81f8130b1
> 151,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xc1db,0x81f814290
> 157,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xe841,0x81f80c029
> 195,EVFILT_READ,EV_CLEAR,0x0,0x952,0x81f8090a1
> 194,EVFILT_READ,EV_CLEAR,0x0,0x929,0x81f809ac8
> 201,EVFILT_READ,EV_CLEAR,0x0,0x4ef,0x81f80c980
> 174,EVFILT_READ,EV_CLEAR,0x0,0x51e,0x81f816518
> 77,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x1168,0x81f811c61 },512,{ 5.253000000 }) = 20 (0x14)
>
> There is a 1 second delay between the two syscalls. Then nginx goes
> nuts processing all it missed during this second. I can not tell from
> this output how much time was spent in these syscalls. Can anyone?

Using ktrace / kdump might be a better option, it shows both
syscall enter and syscall return with exact timestamps.

> What I don't like is a timeout greater than 5 seconds. Doesn't it
> mean that the system is allowed to block for the timeout time to
> collect events?

Timeout as passed to kevent() is the nearest timer set in nginx.
> > > > Here what I see in truss' output: > > 38.820523207 0.000006568 kevent(28,{ },0,{ > 198,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821405071 },512,{ 6.215000000 }) = 1 > (0x1) > 39.783094188 0.000022875 kevent(28,{ },0,{ > 52,EVFILT_READ,0x0,0x0,0x30b,0x81f800068 > 204,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821401588 > 51,EVFILT_READ,0x0,0x0,0xec,0x81f800000 > 68,EVFILT_READ,EV_CLEAR,0x0,0x8bf,0x81f816580 > 7,EVFILT_READ,EV_CLEAR,0x0,0x27f,0x81f813869 > 57,EVFILT_READ,EV_CLEAR,0x0,0x767,0x81f817bd8 > 203,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x248,0x81f8030c1 > 181,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x9b77,0x81f80ea68 > 178,EVFILT_READ,EV_CLEAR,0x0,0x39d,0x81f8010a9 > 198,EVFILT_READ,EV_CLEAR,0x0,0x3d3,0x81f805071 > 204,EVFILT_READ,EV_CLEAR,0x0,0x9da,0x81f801588 > 190,EVFILT_READ,EV_CLEAR,0x0,0x4ff,0x81f80fc48 > 154,EVFILT_READ,EV_CLEAR,0x0,0x88e,0x81f8130b1 > 151,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xc1db,0x81f814290 > 157,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xe841,0x81f80c029 > 195,EVFILT_READ,EV_CLEAR,0x0,0x952,0x81f8090a1 > 194,EVFILT_READ,EV_CLEAR,0x0,0x929,0x81f809ac8 > 201,EVFILT_READ,EV_CLEAR,0x0,0x4ef,0x81f80c980 > 174,EVFILT_READ,EV_CLEAR,0x0,0x51e,0x81f816518 > 77,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x1168,0x81f811c61 },512,{ 5.253000000 > }) = 20 (0x14) > > > 1 second delay between two syscalls. Then nginx goes nuts processing all it > missed during this second. I can not tell from this output how much time > was spent in these syscalls. Can anyone? Using ktrace / kdump might be a better option, it shows both syscall enter and syscall return with exact timestamps. > What I don't like is timeout greater than 5 seconds. Doesn't it mean that > system is allowed to block for timeout time to collect events? Timeout as passed to kevent() is the nearest timer set in nginx. That is, nginx has nothing to do till the time specified, and allows kernel to block that long if there are no events. Kernel is expected to return events as soon as it has any, or return 0 if the specified time limit expires. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed Jan 11 16:10:51 2017 From: nginx-forum at forum.nginx.org (bdesemb) Date: Wed, 11 Jan 2017 11:10:51 -0500 Subject: proxy_cache seems not working with X-Accel-Redirect In-Reply-To: <20170111143332.GG47718@mdounin.ru> References: <20170111143332.GG47718@mdounin.ru> Message-ID: Thanks for your help but there is still something I don't understand. I have dynamic access for resources. For me, I have 3 different locations in 2 servers and your conf only contains 2. I have my location of the client request, the location of the X-Accel-Redirect and the location where caching happens. I tried this but it's not working. I got a 200 with an empty body. server { listen 80 location /data { proxy_pass http://127.0.0.1:8081; } location /protected_data { internal; alias /var/local/data/; } } listen 8081 { location / { proxy_pass http://127.0.0.1:8000; #upstream server proxy_cache my_cache; proxy_ignore_headers X-Accel-Redirect; } } What am I doing wrong? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,241734,271935#msg-271935 From dernikov1 at gmail.com Wed Jan 11 16:18:26 2017 From: dernikov1 at gmail.com (Bike dernikov1) Date: Wed, 11 Jan 2017 17:18:26 +0100 Subject: Beginner question:Nginx request_uri meaning ? Message-ID: Hi, i have "simple" question, need simple explanation. It's driving me nuts. In nginx configuration what is meaning of $request_uri in line? 
*********************************************************
return 301 $scheme://example1.com$request_uri;
*********************************************************

The documentation says: $request_uri is the full request URI. I will
try to describe my doubt.

A simple request URL: http://www.example.com/index.html

The full request URI is the same: http://example.com/index.html

$request_uri=http://example.com/index.html.

As I understand it, the line:

return 301 $scheme://example1.com$request_uri;

must then return:

http://example1.comhttp://example.com/index.html.

But that cannot be correct.

So what does the variable $request_uri mean? Is it defined wrongly in
the documentation (or is URI not what I described)? Or does it mean
something different, perhaps only in combination with return??

Thanks for help.

From nginx-forum at forum.nginx.org Wed Jan 11 17:23:08 2017
From: nginx-forum at forum.nginx.org (bdesemb)
Date: Wed, 11 Jan 2017 12:23:08 -0500
Subject: proxy_cache seems not working with X-Accel-Redirect
In-Reply-To: <20170111143332.GG47718@mdounin.ru>
References: <20170111143332.GG47718@mdounin.ru>
Message-ID:

I want to add some clarification. My client call is like "/app" and
the server responds with a name. My file is stored at
/var/local/data/. I want to cache the response. So if I have another
request on /app, Nginx should respond with the cached version of the
file.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,241734,271938#msg-271938

From mdounin at mdounin.ru Wed Jan 11 17:28:47 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Jan 2017 20:28:47 +0300
Subject: Beginner question: Nginx request_uri meaning?
In-Reply-To:
References:
Message-ID: <20170111172847.GJ47718@mdounin.ru>

Hello!

On Wed, Jan 11, 2017 at 05:18:26PM +0100, Bike dernikov1 wrote:

> Hi, I have a "simple" question and need a simple explanation. It's
> driving me nuts.
>
> In an nginx configuration, what is the meaning of $request_uri in
> this line?
>
> *********************************************************
> return 301 $scheme://example1.com$request_uri;
> *********************************************************
>
> The documentation says: $request_uri is the full request URI. I will
> try to describe my doubt.
>
> A simple request URL: http://www.example.com/index.html
>
> The full request URI is the same: http://example.com/index.html
>
> $request_uri=http://example.com/index.html.
>
> As I understand it, the line:
>
> return 301 $scheme://example1.com$request_uri;
>
> must then return:
>
> http://example1.comhttp://example.com/index.html.
>
> But that cannot be correct.
>
> So what does the variable $request_uri mean? Is it defined wrongly
> in the documentation (or is URI not what I described)? Or does it
> mean something different, perhaps only in combination with return??
> Thanks for help.

The term "request URI" as used in the nginx documentation in many
places, as well as in various variables, dates back to the original
and most common HTTP meaning - the URI form used to identify a
resource on a server. Quoting
https://tools.ietf.org/html/rfc1945#section-5.1.2:

The most common form of Request-URI is that used to identify a
resource on an origin server or gateway. In this case, only the
absolute path of the URI is transmitted (see Section 3.2.1,
abs_path).
For example, a client wishing to retrieve the resource above
directly from the origin server would create a TCP connection to
port 80 of the host "www.w3.org" and send the line:

GET /pub/WWW/TheProject.html HTTP/1.0

followed by the remainder of the Full-Request. Note that the
absolute path cannot be empty; if none is present in the original
URI, it must be given as "/" (the server root).

In HTTP/1.0 times this was the only allowed form in requests to
origin servers (the absolute form was only allowed in requests to a
proxy). With HTTP/1.1 the absolute form can also be used in normal
requests, but it's not something actually used in practice, and also
not something various configurations and software can cope with. So
even if a request uses the absolute form of the request URI, nginx
provides $request_uri as if it was given in the abs_path form.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org Wed Jan 11 17:38:20 2017
From: nginx-forum at forum.nginx.org (Thierry)
Date: Wed, 11 Jan 2017 12:38:20 -0500
Subject: need help reverse-proxy config
In-Reply-To: <207727426.20170109201733@maelenn.org>
References: <207727426.20170109201733@maelenn.org>
Message-ID: <6b969f16352dc931636a92b1506ed2c9.NginxMailingListEnglish@forum.nginx.org>

I gave up ... No fresh ideas anymore :(

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271891,271940#msg-271940

From mdounin at mdounin.ru Wed Jan 11 17:42:32 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 11 Jan 2017 20:42:32 +0300
Subject: proxy_cache seems not working with X-Accel-Redirect
In-Reply-To:
References: <20170111143332.GG47718@mdounin.ru>
Message-ID: <20170111174232.GK47718@mdounin.ru>

Hello!

On Wed, Jan 11, 2017 at 12:23:08PM -0500, bdesemb wrote:

> I want to add some clarification. My client call is like "/app" and
> the server responds with a name. My file is stored at
> /var/local/data/. I want to cache the response. So if I have another
> request on /app, Nginx should respond with the cached version of the
> file.

It is not clear what you mean by "cached version of the file", and
what exactly you want to cache.

The original question you are referring to was about caching
responses with the X-Accel-Redirect header. Files as referenced in
these X-Accel-Redirect responses are expected to exist permanently.
The question was how to cache responses with the X-Accel-Redirect
header to avoid asking the backend application each time, and return
files directly instead (using cached X-Accel-Redirect responses).
I've provided the configuration which does this.

If in your case files are generated by your app and then removed,
then this configuration will not work for you. Instead, you need to
cache full responses at the frontend level, and use an additional
backend nginx to resolve X-Accel-Redirect redirections. That is,
something like this:

server {
    listen 8080;

    location / {
        # here caching happens

        proxy_pass http://127.0.0.1:8081;
        proxy_cache foo;
    }
}

server {
    listen 8081;

    location / {
        # here X-Accel-Redirect is processed

        proxy_pass http://real-upstream-server;
    }

    location /path/to/files {
        # a location to access files after X-Accel-Redirect
        ...
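        # (not part of the original reply - a hypothetical sketch of
        # what this location might contain, reusing the path from the
        # earlier question; alias and internal are standard nginx
        # directives)
        # alias /var/local/data/;
        # internal;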
} } -- Maxim Dounin http://nginx.org/ From ruz at sports.ru Wed Jan 11 17:54:31 2017 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Wed, 11 Jan 2017 20:54:31 +0300 Subject: upstream timeouts I can not explain In-Reply-To: <20170111154422.GH47718@mdounin.ru> References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru> <20170111154422.GH47718@mdounin.ru> Message-ID: On Wed, Jan 11, 2017 at 6:44 PM, Maxim Dounin wrote: > Hello! > > On Wed, Jan 11, 2017 at 06:07:01PM +0300, ?????? ??????? wrote: > > > On Mon, Jan 9, 2017 at 10:59 PM, Maxim Dounin > wrote: > > > > > Typical kern.sched.quantum is about 100ms, so several > > > CPU-intensive tasks can delay processing of the events enough to > > > trigger a timeout if a context switch happens at a bad time. > > > > > > > Here what I see in truss' output: > > > > 38.820523207 0.000006568 kevent(28,{ },0,{ > > 198,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821405071 },512,{ 6.215000000 }) > = 1 > > (0x1) > > 39.783094188 0.000022875 kevent(28,{ },0,{ > > 52,EVFILT_READ,0x0,0x0,0x30b,0x81f800068 > > 204,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821401588 > > 51,EVFILT_READ,0x0,0x0,0xec,0x81f800000 > > 68,EVFILT_READ,EV_CLEAR,0x0,0x8bf,0x81f816580 > > 7,EVFILT_READ,EV_CLEAR,0x0,0x27f,0x81f813869 > > 57,EVFILT_READ,EV_CLEAR,0x0,0x767,0x81f817bd8 > > 203,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x248,0x81f8030c1 > > 181,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x9b77,0x81f80ea68 > > 178,EVFILT_READ,EV_CLEAR,0x0,0x39d,0x81f8010a9 > > 198,EVFILT_READ,EV_CLEAR,0x0,0x3d3,0x81f805071 > > 204,EVFILT_READ,EV_CLEAR,0x0,0x9da,0x81f801588 > > 190,EVFILT_READ,EV_CLEAR,0x0,0x4ff,0x81f80fc48 > > 154,EVFILT_READ,EV_CLEAR,0x0,0x88e,0x81f8130b1 > > 151,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xc1db,0x81f814290 > > 157,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xe841,0x81f80c029 > > 195,EVFILT_READ,EV_CLEAR,0x0,0x952,0x81f8090a1 > > 194,EVFILT_READ,EV_CLEAR,0x0,0x929,0x81f809ac8 > > 201,EVFILT_READ,EV_CLEAR,0x0,0x4ef,0x81f80c980 > > 174,EVFILT_READ,EV_CLEAR,0x0,0x51e,0x81f816518 > > 77,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x1168,0x81f811c61 },512,{ > 5.253000000 > > }) = 20 (0x14) > > > > > > 1 second delay between two syscalls. Then nginx goes nuts processing all > it > > missed during this second. I can not tell from this output how much time > > was spent in these syscalls. Can anyone? > > Using ktrace / kdump might be a better option, it shows both > syscall enter and syscall return with exact timestamps. > Tried. Found at least one issue with sendfile blocking: 66193 nginx 1484152162.182601 CALL openat(AT_FDCWD,0x80639aac4,0x4,0) 66193 nginx 1484152162.182607 NAMI ... 66193 nginx 1484152162.182643 RET openat 40/0x28 ... gz lookup and stats ... 66193 nginx 1484152162.182683 CALL setsockopt(0x211,0x6,0x4,0x7fffffffd96c,0x4) 66193 nginx 1484152162.182687 RET setsockopt 0 66193 nginx 1484152162.182689 CALL sendfile(0x28,0x211,0,0x806f,0x7fffffffe1d8,0x7fffffffe240,) 66193 nginx 1484152163.541770 RET sendfile 0 Sendfile blocks for 1.3 seconds. However, it's: $ sysctl hw.model hw.machine hw.ncpu hw.model: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz hw.machine: amd64 hw.ncpu: 16 So sfbufs don't apply here, but documentation doesn't tell what resource applies... or I couldn't find correct doc. > What I don't like is timeout greater than 5 seconds. Doesn't it mean that > > system is allowed to block for timeout time to collect events? > > Timeout as passed to kevent() is the nearest timer set in nginx. 
> That is, nginx has nothing to do till the time specified, and > allows kernel to block that long if there are no events. Kernel > is expected to return events as soon as it has any, or return 0 if > the specified time limit expires. > This I figured from reading nginx source code. -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- ?????? ??????? ???????????? ?????? ?????????? ???-???????? +7(916) 597-92-69, ruz @ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 11 18:40:15 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 11 Jan 2017 21:40:15 +0300 Subject: upstream timeouts I can not explain In-Reply-To: References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru> <20170111154422.GH47718@mdounin.ru> Message-ID: <20170111184014.GM47718@mdounin.ru> Hello! On Wed, Jan 11, 2017 at 08:54:31PM +0300, ?????? ??????? wrote: > On Wed, Jan 11, 2017 at 6:44 PM, Maxim Dounin wrote: > > > Hello! > > > > On Wed, Jan 11, 2017 at 06:07:01PM +0300, ?????? ??????? wrote: > > > > > On Mon, Jan 9, 2017 at 10:59 PM, Maxim Dounin > > wrote: > > > > > > > Typical kern.sched.quantum is about 100ms, so several > > > > CPU-intensive tasks can delay processing of the events enough to > > > > trigger a timeout if a context switch happens at a bad time. > > > > > > > > > > Here what I see in truss' output: > > > > > > 38.820523207 0.000006568 kevent(28,{ },0,{ > > > 198,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821405071 },512,{ 6.215000000 }) > > = 1 > > > (0x1) > > > 39.783094188 0.000022875 kevent(28,{ },0,{ > > > 52,EVFILT_READ,0x0,0x0,0x30b,0x81f800068 > > > 204,EVFILT_WRITE,EV_CLEAR,0x0,0x8218,0x821401588 > > > 51,EVFILT_READ,0x0,0x0,0xec,0x81f800000 > > > 68,EVFILT_READ,EV_CLEAR,0x0,0x8bf,0x81f816580 > > > 7,EVFILT_READ,EV_CLEAR,0x0,0x27f,0x81f813869 > > > 57,EVFILT_READ,EV_CLEAR,0x0,0x767,0x81f817bd8 > > > 203,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x248,0x81f8030c1 > > > 181,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x9b77,0x81f80ea68 > > > 178,EVFILT_READ,EV_CLEAR,0x0,0x39d,0x81f8010a9 > > > 198,EVFILT_READ,EV_CLEAR,0x0,0x3d3,0x81f805071 > > > 204,EVFILT_READ,EV_CLEAR,0x0,0x9da,0x81f801588 > > > 190,EVFILT_READ,EV_CLEAR,0x0,0x4ff,0x81f80fc48 > > > 154,EVFILT_READ,EV_CLEAR,0x0,0x88e,0x81f8130b1 > > > 151,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xc1db,0x81f814290 > > > 157,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0xe841,0x81f80c029 > > > 195,EVFILT_READ,EV_CLEAR,0x0,0x952,0x81f8090a1 > > > 194,EVFILT_READ,EV_CLEAR,0x0,0x929,0x81f809ac8 > > > 201,EVFILT_READ,EV_CLEAR,0x0,0x4ef,0x81f80c980 > > > 174,EVFILT_READ,EV_CLEAR,0x0,0x51e,0x81f816518 > > > 77,EVFILT_READ,EV_CLEAR|EV_EOF,0x0,0x1168,0x81f811c61 },512,{ > > 5.253000000 > > > }) = 20 (0x14) > > > > > > > > > 1 second delay between two syscalls. Then nginx goes nuts processing all > > it > > > missed during this second. I can not tell from this output how much time > > > was spent in these syscalls. Can anyone? > > > > Using ktrace / kdump might be a better option, it shows both > > syscall enter and syscall return with exact timestamps. > > > > Tried. Found at least one issue with sendfile blocking: > > 66193 nginx 1484152162.182601 CALL > openat(AT_FDCWD,0x80639aac4,0x4,0) > 66193 nginx 1484152162.182607 NAMI ... > 66193 nginx 1484152162.182643 RET openat 40/0x28 > ... gz lookup and stats ... 
> 66193 nginx 1484152162.182683 CALL > setsockopt(0x211,0x6,0x4,0x7fffffffd96c,0x4) > 66193 nginx 1484152162.182687 RET setsockopt 0 > 66193 nginx 1484152162.182689 CALL > sendfile(0x28,0x211,0,0x806f,0x7fffffffe1d8,0x7fffffffe240,) > 66193 nginx 1484152163.541770 RET sendfile 0 > > Sendfile blocks for 1.3 seconds. However, it's: > > $ sysctl hw.model hw.machine hw.ncpu > hw.model: Intel(R) Xeon(R) CPU E5620 @ 2.40GHz > hw.machine: amd64 > hw.ncpu: 16 > > So sfbufs don't apply here, but documentation doesn't tell what resource > applies... or I couldn't find correct doc. On amd64 sendfile() uses mbufs / mbuf clusters. Try looking into "vmstat -z" to see if there are enough mbuf clusters in various zones. Note well tha sendfile() normally blocks on disk, and it might simply mean that your disk subsystem is (occasionally) overloaded. Try gstat to see some details. Either way, blocking for 1.3s in a syscall perfectly explains why 300ms timeouts are sometimes triggered unexpectedly. If a timer is set after such a blocking call and just before kevent() (that is, connection is highly unlikely to be established in before kevent() returns), on the next event loop iteraction nginx will think that 1.3+ seconds has passed since the timer was set, and will trigger a timeout (after processing all the events reported by kevent()). -- Maxim Dounin http://nginx.org/ From ruz at sports.ru Wed Jan 11 19:25:31 2017 From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=) Date: Wed, 11 Jan 2017 22:25:31 +0300 Subject: upstream timeouts I can not explain In-Reply-To: <20170111184014.GM47718@mdounin.ru> References: <20170109151419.GA1761@mdounin.ru> <20170109195955.GH1761@mdounin.ru> <20170111154422.GH47718@mdounin.ru> <20170111184014.GM47718@mdounin.ru> Message-ID: 11 ??? 2017 ?. 21:40 ???????????? "Maxim Dounin" ???????: Hello! On Wed, Jan 11, 2017 at 08:54:31PM +0300, ?????? ??????? wrote: > On Wed, Jan 11, 2017 at 6:44 PM, Maxim Dounin wrote: > > > Hello! > > > > On Wed, Jan 11, 2017 at 06:07:01PM +0300, ?????? ??????? wrote: > > > > > On Mon, Jan 9, 2017 at 10:59 PM, Maxim Dounin > > wrote: > > > > > > > Typical kern.sched.quantum is about 100ms, so several > > > > CPU-intensive tasks can delay processing of the events enough to > > > > trigger a timeout if a context switch happens at a bad time. 
>
> On amd64 sendfile() uses mbufs / mbuf clusters.  Try looking into
> "vmstat -z" to see if there are enough mbuf clusters in various
> zones.  Note well that sendfile() normally blocks on disk, and it
> might simply mean that your disk subsystem is (occasionally)
> overloaded.  Try gstat to see some details.

It goes red periodically and I should look further there.

At this moment I wonder if turning off sendfile will help. Will nginx
use aio to serve static files from disk without blocking?

Also, the sendfile syscall has two flags on BSD to avoid blocking. Can
these be used to avoid blocks?

> Either way, blocking for 1.3s in a syscall perfectly explains why
> 300ms timeouts are sometimes triggered unexpectedly.
> If a timer is set after such a blocking call and just before kevent()
> (that is, the connection is highly unlikely to be established before
> kevent() returns), on the next event loop iteration nginx will think
> that 1.3+ seconds have passed since the timer was set, and will
> trigger a timeout (after processing all the events reported by
> kevent()).
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ruz at sports.ru  Wed Jan 11 21:04:13 2017
From: ruz at sports.ru (=?UTF-8?B?0KDRg9GB0LvQsNC9INCX0LDQutC40YDQvtCy?=)
Date: Thu, 12 Jan 2017 00:04:13 +0300
Subject: upstream timeouts I can not explain
In-Reply-To: 
References: <20170109151419.GA1761@mdounin.ru>
 <20170109195955.GH1761@mdounin.ru> <20170111154422.GH47718@mdounin.ru>
 <20170111184014.GM47718@mdounin.ru>
Message-ID: 

On Wed, Jan 11, 2017 at 10:25 PM, Ruslan Zakirov wrote:

> > On amd64 sendfile() uses mbufs / mbuf clusters. Try looking into
> > "vmstat -z" to see if there are enough mbuf clusters in various
> > zones. Note well that sendfile() normally blocks on disk, and it
> > might simply mean that your disk subsystem is (occasionally)
> > overloaded. Try gstat to see some details.
>
> It goes red periodically and I should look further there.

It was the disk subsystem. atime updates were enabled (needed them for
one thing), kern.filedelay is 30s, and top -mio -S showed syncer jumping
up every 30 seconds. Re-mounting the disk with noatime dropped the
number of errors down to <10 per minute from 200+.

> At this moment I wonder if turning off sendfile will help. Will nginx
> use aio to serve static files from disk without blocking?

Still an open question.

> Also, the sendfile syscall has two flags on BSD to avoid blocking. Can
> these be used to avoid blocks?

Looks like FreeBSD 11 has improvements in this area. At least `man` says
that sendfile doesn't block on disk I/O and can return earlier with
success.

-- 
Ruslan Zakirov
Head of the web services development team
+7(916) 597-92-69, ruz @

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From maxim at nginx.com  Wed Jan 11 21:15:37 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 12 Jan 2017 00:15:37 +0300
Subject: upstream timeouts I can not explain
In-Reply-To: 
References: <20170109151419.GA1761@mdounin.ru>
 <20170109195955.GH1761@mdounin.ru> <20170111154422.GH47718@mdounin.ru>
 <20170111184014.GM47718@mdounin.ru>
Message-ID: <7090eb6a-6751-c929-4b05-0629391db69c@nginx.com>

On 1/12/17 12:04 AM, Ruslan Zakirov wrote:
> Looks like FreeBSD 11 has improvements in this area. At least `man` says
> that sendfile doesn't block on disk I/O and can return earlier with
> success.

That's a correct observation.

https://www.nginx.com/blog/nginx-and-netflix-contribute-new-sendfile2-to-freebsd/

-- 
Maxim Konovalov
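For illustration of the aio question raised in this thread: since nginx
1.7.11, blocking file reads can be moved off the event loop into a thread
pool. This is only a minimal sketch, not from the original thread; the
pool name, location and sizes are hypothetical:

    # main context
    thread_pool disk_pool threads=32;

    http {
        server {
            location /static/ {
                sendfile off;                # skip the potentially blocking sendfile() path
                aio      threads=disk_pool;  # read file data in pool threads, not in the worker
                output_buffers 2 64k;        # buffers used for the read()/write() path
            }
        }
    }

Whether this beats sendfile() depends on the workload; it trades
zero-copy writes for not blocking the worker on disk.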
From simowitz at google.com  Wed Jan 11 21:27:51 2017
From: simowitz at google.com (Jonathan Simowitz)
Date: Wed, 11 Jan 2017 16:27:51 -0500
Subject: Behavior between upstream hash and backup
Message-ID: 

Hello,

I would like to define an upstream block with a number of servers and
utilize the hash directive to choose a particular server dependent on
the request. There is a chance that the chosen server could fail, and
so I would also like to configure a backup server to handle the request
in this case. If I define an additional server in this upstream and
declare it as backup, will it handle the request if the hash-chosen
server fails, as defined by my proxy_next_upstream directive?

If so, great! If not, what is the recommended way to achieve this
desired behavior?

Thank you,
~Jonathan

-- 
Jonathan Simowitz | Jigsaw | Software Engineer | simowitz at google.com |
631-223-8608

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From rpaprocki at fearnothingproductions.net  Wed Jan 11 21:34:04 2017
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Wed, 11 Jan 2017 13:34:04 -0800
Subject: Behavior between upstream hash and backup
In-Reply-To: 
References: 
Message-ID: 

Hi,

On Wed, Jan 11, 2017 at 1:27 PM, Jonathan Simowitz via nginx <
nginx at nginx.org> wrote:

> Hello,
>
> I would like to define an upstream block with a number of servers and
> utilize the hash directive to choose a particular server dependent on
> the request. There is a chance that the chosen server could fail and so
> I would also like to configure a backup server to handle the request in
> this case. If I define an additional server in this upstream and declare
> it as backup will it handle the request if the hash-chosen server fails
> as defined by my proxy_next_upstream directive?

Have you had a look at the 'backup' parameter of
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server?
Sounds like exactly what you need.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From simowitz at google.com  Wed Jan 11 21:37:36 2017
From: simowitz at google.com (Jonathan Simowitz)
Date: Wed, 11 Jan 2017 16:37:36 -0500
Subject: Behavior between upstream hash and backup
In-Reply-To: 
References: 
Message-ID: 

Hi Robert,

Thank you for your reply. Yes, I have taken the backup parameter into
account. It is not clear to me how the backup parameter works in
combination with the hash directive and whether this achieves the
desired behavior.

~Jonathan

On Wed, Jan 11, 2017 at 4:34 PM, Robert Paprocki <
rpaprocki at fearnothingproductions.net> wrote:

> Hi,
>
> Have you had a look at the 'backup' parameter of
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server?
> Sounds like exactly what you need.
>
> [...]

-- 
Jonathan Simowitz | Jigsaw | Software Engineer | simowitz at google.com |
631-223-8608

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dernikov1 at gmail.com  Thu Jan 12 08:43:35 2017
From: dernikov1 at gmail.com (Bike dernikov1)
Date: Thu, 12 Jan 2017 09:43:35 +0100
Subject: Beginner question: Nginx request_uri meaning ?
In-Reply-To: <20170111172847.GJ47718@mdounin.ru>
References: <20170111172847.GJ47718@mdounin.ru>
Message-ID: 

Hi,

Thanks for the answer and the detailed explanation.

So, to conclude and confirm that I understand completely: if a client
requests

http://www.example.com/index.html

then

return 301 $scheme://example1.com$request_uri;

means: return 301 http://example1.com/index.html.

Thanks again for the big help. I would never have found the explanation
myself.

On Wed, Jan 11, 2017 at 6:28 PM, Maxim Dounin wrote:

> Hello!
>
> On Wed, Jan 11, 2017 at 05:18:26PM +0100, Bike dernikov1 wrote:
>
> > Hi, I have a "simple" question that needs a simple explanation. It's
> > driving me nuts.
> >
> > In the nginx configuration, what is the meaning of $request_uri in
> > this line?
> >
> > *********************************************************
> > return 301 $scheme://example.com1$request_uri;
> > ***********************************************************
> > The documentation says: $request_uri is the full request URI.
> > I will try to describe my doubt.
> >
> > Simple request URL: http://www.example.com/index.html
> >
> > Full request URI is the same: http://example.com/index.html
> >
> > $request_uri=http://example.com/index.html.
> >
> > As I understand it, the line:
> >
> > return 301 $scheme://example1.com$request_uri;
> >
> > must return:
> >
> > http://example1.comhttp://example.com/index.html.
> >
> > But that cannot be correct.
> >
> > So what does the var $request_uri mean? Is it defined wrong in the
> > documentation (or is URI not what I described?), or does it mean
> > something different, or does it mean something different in
> > combination with return?
> > Thanks for help.
>
> The term "request URI" as used in the nginx documentation in many
> places, as well as in various variables, dates back to the
> original and most common HTTP meaning - the URI form used to
> identify a resource on a server.
>
> Quoting https://tools.ietf.org/html/rfc1945#section-5.1.2:
>
>    The most common form of Request-URI is that used to identify a
>    resource on an origin server or gateway. In this case, only the
>    absolute path of the URI is transmitted (see Section 3.2.1,
>    abs_path). For example, a client wishing to retrieve the resource
>    above directly from the origin server would create a TCP connection
>    to port 80 of the host "www.w3.org" and send the line:
>
>        GET /pub/WWW/TheProject.html HTTP/1.0
>
>    followed by the remainder of the Full-Request. Note that the absolute
>    path cannot be empty; if none is present in the original URI, it must
>    be given as "/" (the server root).
>
> At the HTTP/1.0 time this was the only allowed form in requests to
> origin servers (the absolute form was only allowed in requests to a
> proxy).
>
> With HTTP/1.1 the absolute form can also be used in normal
> requests, but it's not something actually used in practice, and
> also not something various configurations and software can cope
> with. So even if a request uses the absolute form of the request
> URI, nginx provides $request_uri as if it was given in the
> abs_path form.
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
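In configuration form, the behaviour confirmed in this thread looks like
the sketch below; the hostnames are the example names from the thread:

    server {
        listen      80;
        server_name www.example.com;

        # GET /index.html  ->  301 http://example1.com/index.html
        # ($request_uri is "/index.html": the abs_path form, plus any query string)
        return 301 $scheme://example1.com$request_uri;
    }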
From nginx-forum at forum.nginx.org  Thu Jan 12 11:26:38 2017
From: nginx-forum at forum.nginx.org (nginxsantos)
Date: Thu, 12 Jan 2017 06:26:38 -0500
Subject: SSL Offloading in UDP load
Message-ID: <3c110163d75cac1742c0258ac83ccfb9.NginxMailingListEnglish@forum.nginx.org>

Hi,

Does the open-source Nginx support DTLS offloading when acting as a UDP
load balancer?

Thanks, Santos

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271957,271957#msg-271957

From maxim at nginx.com  Thu Jan 12 11:30:52 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 12 Jan 2017 14:30:52 +0300
Subject: SSL Offloading in UDP load
In-Reply-To: <3c110163d75cac1742c0258ac83ccfb9.NginxMailingListEnglish@forum.nginx.org>
References: <3c110163d75cac1742c0258ac83ccfb9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <421e9a46-3963-433d-9f6f-6198a2203a2b@nginx.com>

Hello,

On 1/12/17 2:26 PM, nginxsantos wrote:
> Hi,
>
> Does the open-source Nginx support DTLS offloading when acting as a UDP
> load balancer?

It doesn't.

-- 
Maxim Konovalov

From v.kozyrev.sa at gmail.com  Thu Jan 12 12:12:30 2017
From: v.kozyrev.sa at gmail.com (Vladimir Kozyrev)
Date: Thu, 12 Jan 2017 15:12:30 +0300
Subject: Adjusting server_names_hash_bucket_size before nginx throws an error
Message-ID: 

My issue is that I add server {} and upstream {} directives dynamically
to /etc/nginx/conf.d/dynamic_vhost.conf. Once the file grew, nginx
stopped working, and I increased the hash_bucket_size parameter to make
it work again.

Is there a way to calculate what value I should set for the
hash_bucket_size parameter for a given number of server {} and
upstream {} directives? At some point, the file will grow big enough to
stop nginx from working once again. Any advice on how I can prevent that
from happening will be appreciated.

Thanks,
Vladimir

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nicktgr15 at gmail.com  Thu Jan 12 12:18:28 2017
From: nicktgr15 at gmail.com (Nikolaos Tsipas)
Date: Thu, 12 Jan 2017 12:18:28 +0000
Subject: Nginx record length during disk IO
Message-ID: 

Hello,

We're load testing nginx focusing on disk IO performance, and we're
trying to understand what record length is used during caching
operations. The reason we'd like to know the utilised record length is
that we could then use similar settings during our SSD load tests.

We had a look at the code, specifically at the ngx_write_file method,
but we think that the record size used (i.e.
sizeof(ngx_http_file_cache_header_t)) is too small (~40 bytes) to be
the one we are looking for.

Any useful information around that area would be appreciated, as well
as suggestions on where that information might be in the nginx codebase.

Regards,
Nik

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Thu Jan 12 12:54:20 2017
From: nginx-forum at forum.nginx.org (bdesemb)
Date: Thu, 12 Jan 2017 07:54:20 -0500
Subject: proxy_cache seems not working with X-Accel-Redirect
In-Reply-To: <20170111143332.GG47718@mdounin.ru>
References: <20170111143332.GG47718@mdounin.ru>
Message-ID: <9f5e25c7e21c04c2751faa0079e36109.NginxMailingListEnglish@forum.nginx.org>

I made a diagram to be more specific. It's not a problem if the client
asks for /data and gets an older version of the file. The invalidate
option in proxy_cache is enough for me.
Here is my diagram without cache: http://imgur.com/a/soq69
Obviously, I don't want to reach the upstream server every time, but
only if the file hasn't been reached after x minutes.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,241734,271962#msg-271962

From mdounin at mdounin.ru  Thu Jan 12 13:40:13 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 12 Jan 2017 16:40:13 +0300
Subject: Beginner question: Nginx request_uri meaning ?
In-Reply-To: 
References: <20170111172847.GJ47718@mdounin.ru>
Message-ID: <20170112134013.GN47718@mdounin.ru>

Hello!

On Thu, Jan 12, 2017 at 09:43:35AM +0100, Bike dernikov1 wrote:

> Hi,
> Thanks for the answer and the detailed explanation.
> So, to conclude and confirm that I understand completely: if a client
> requests
>
> http://www.example.com/index.html
>
> return 301 $scheme://example1.com$request_uri;
>
> means: return 301 http://example1.com/index.html.

Yes, $request_uri would be "/index.html" in this case.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Thu Jan 12 14:44:03 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 12 Jan 2017 17:44:03 +0300
Subject: Behavior between upstream hash and backup
In-Reply-To: 
References: 
Message-ID: <20170112144403.GO47718@mdounin.ru>

Hello!

On Wed, Jan 11, 2017 at 04:27:51PM -0500, Jonathan Simowitz via nginx wrote:

> Hello,
>
> I would like to define an upstream block with a number of servers and
> utilize the hash directive to choose a particular server dependent on
> the request. There is a chance that the chosen server could fail and so
> I would also like to configure a backup server to handle the request in
> this case.

If there are more than one server in the upstream block, the
default behaviour is to try another server if a chosen one fails,
see http://nginx.org/r/proxy_next_upstream.  The exact algorithm
to select "another server" varies depending on the balancing
algorithm used - e.g., in case of "hash .. consistent" it means
selecting the next point in the continuum, as if the failed
server was removed from the list.

> If I define an additional server in this upstream and declare it as
> backup will it handle the request if the hash-chosen server fails as
> defined by my proxy_next_upstream directive?

No, backup servers are not supported by the hash balancer.

-- 
Maxim Dounin
http://nginx.org/
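A sketch of the behaviour described above; the upstream name and
addresses are hypothetical. On failure of the hash-chosen server,
proxy_next_upstream moves to the next point in the continuum rather than
to a backup server:

    upstream backend {
        hash $request_uri consistent;   # ketama-style continuum
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
        server 10.0.0.3:8080;
        # note: a "backup" server here would not be used by the hash balancer
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_next_upstream error timeout http_502;  # what counts as a failure
        }
    }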
From simowitz at google.com  Thu Jan 12 15:03:47 2017
From: simowitz at google.com (Jonathan Simowitz)
Date: Thu, 12 Jan 2017 10:03:47 -0500
Subject: Behavior between upstream hash and backup
In-Reply-To: <20170112144403.GO47718@mdounin.ru>
References: <20170112144403.GO47718@mdounin.ru>
Message-ID: 

Thank you Maxim; that is exactly the detailed response I was looking
for. If possible I would recommend updating the docs to clarify this
for others.
http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash

~Jonathan

On Thu, Jan 12, 2017 at 9:44 AM, Maxim Dounin wrote:

> If there are more than one server in the upstream block, the
> default behaviour is to try another server if a chosen one fails,
> see http://nginx.org/r/proxy_next_upstream.  The exact algorithm
> to select "another server" varies depending on the balancing
> algorithm used - e.g., in case of "hash .. consistent" it means
> selecting the next point in the continuum, as if the failed
> server was removed from the list.
>
> [...]
>
> No, backup servers are not supported by the hash balancer.

-- 
Jonathan Simowitz | Jigsaw | Software Engineer | simowitz at google.com |
631-223-8608

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mdounin at mdounin.ru  Thu Jan 12 15:38:28 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 12 Jan 2017 18:38:28 +0300
Subject: Adjusting server_names_hash_bucket_size before nginx throws an error
In-Reply-To: 
References: 
Message-ID: <20170112153827.GQ47718@mdounin.ru>

Hello!

On Thu, Jan 12, 2017 at 03:12:30PM +0300, Vladimir Kozyrev wrote:

> My issue is that I add server {} and upstream {} directives dynamically
> to /etc/nginx/conf.d/dynamic_vhost.conf. Once the file grew, nginx
> stopped working, and I increased the hash_bucket_size parameter to make
> it work again.
>
> [...]

The value of server_names_hash_bucket_size limits the maximum
length of a server_name used.  If you use long server names, set
it to the maximum name length expected (plus some additional space
for two pointers).

Everything else is not fatal when constructing hashes since nginx
1.5.13.  If nginx fails to build an optimal hash with bucket_size
and max_size configured, it simply ignores bucket_size and logs an
appropriate warning (non-fatal).

Some additional information about configuring hashes can be found
at http://nginx.org/en/docs/hash.html.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Thu Jan 12 15:44:39 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 12 Jan 2017 18:44:39 +0300
Subject: Nginx record length during disk IO
In-Reply-To: 
References: 
Message-ID: <20170112154439.GR47718@mdounin.ru>

Hello!

On Thu, Jan 12, 2017 at 12:18:28PM +0000, Nikolaos Tsipas wrote:

> We're load testing nginx focusing on disk IO performance, and we're
> trying to understand what record length is used during caching
> operations.
>
> [...]

In most cases buffer size is probably what you are looking for.
Depending on the particular operation it may be either
proxy_buffer_size, or proxy_buffers, or output_buffers.

-- 
Maxim Dounin
http://nginx.org/
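For reference, those buffers are plain per-location directives; the
values below are illustrative only, and the upstream name is
hypothetical:

    location / {
        proxy_pass http://backend;
        proxy_buffer_size 8k;      # buffer for the upstream response header
        proxy_buffers     8 64k;   # buffers for the upstream response body
    }

    location /files/ {
        sendfile       off;
        output_buffers 2 64k;      # buffers used when serving files without sendfile
    }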
From maxim at nginx.com  Thu Jan 12 15:58:35 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Thu, 12 Jan 2017 18:58:35 +0300
Subject: SSL Offloading in UDP load
In-Reply-To: <421e9a46-3963-433d-9f6f-6198a2203a2b@nginx.com>
References: <3c110163d75cac1742c0258ac83ccfb9.NginxMailingListEnglish@forum.nginx.org>
 <421e9a46-3963-433d-9f6f-6198a2203a2b@nginx.com>
Message-ID: <323e4e45-98ac-1ae2-4102-875266e63d8b@nginx.com>

On 1/12/17 2:30 PM, Maxim Konovalov wrote:
> Hello,
>
> On 1/12/17 2:26 PM, nginxsantos wrote:
>> Hi,
>>
>> Does the open-source Nginx support DTLS offloading when acting as a UDP
>> load balancer?
>>
> It doesn't.

Btw, it would be useful for us to learn more about your specific use
case for this feature. So, I'd be grateful if you share that with us.

Thanks,

Maxim

-- 
Maxim Konovalov

From nginx-forum at forum.nginx.org  Thu Jan 12 16:57:58 2017
From: nginx-forum at forum.nginx.org (malloc813)
Date: Thu, 12 Jan 2017 11:57:58 -0500
Subject: Set ssl_session_tickets each virtual host is unable?
Message-ID: 

Hi, I tested an nginx configuration and ran into a problem.
For example, I made 2 virtual hosts. They are SSL-enabled servers.

http
{
    #host1
    server
    {
        ...
        ssl_session_tickets off;
        ...
    }

    #host2
    server
    {
        ...
        ssl_session_tickets on;
        ...
    }
}

Visiting host1 after applying this configuration, Chrome shows an error:
ERR_SSL_PROTOCOL_ERROR

Is it impossible to set ssl_session_tickets differently for each virtual
host?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271971,271971#msg-271971

From mdounin at mdounin.ru  Thu Jan 12 19:20:20 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 12 Jan 2017 22:20:20 +0300
Subject: Set ssl_session_tickets each virtual host is unable?
In-Reply-To: 
References: 
Message-ID: <20170112192020.GT47718@mdounin.ru>

Hello!

On Thu, Jan 12, 2017 at 11:57:58AM -0500, malloc813 wrote:

> Hi, I tested an nginx configuration and ran into a problem.
> For example, I made 2 virtual hosts. They are SSL-enabled servers.
>
> [...]
>
> Visiting host1 after applying this configuration, Chrome shows an error:
> ERR_SSL_PROTOCOL_ERROR

Works fine here.  The ERR_SSL_PROTOCOL_ERROR is likely caused by
other problems in the configuration.  First of all try "nginx -t"
to see if there are obvious errors in your config.

> Is it impossible to set ssl_session_tickets differently for each
> virtual host?

No.

Session resumption happens in the context of the default server,
and it is not possible to have different session cache / session
tickets settings in virtual hosts.  In the above configuration
session tickets will be off for both servers (assuming they are
listening on the same ip/port and the first one is the default).

-- 
Maxim Dounin
http://nginx.org/
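A minimal sketch of the constraint just described, assuming both names
share one listen socket; the names and certificate paths are
hypothetical. Resumption settings are effectively those of the default
server:

    server {
        listen 443 ssl default_server;
        server_name one.example.com;
        ssl_certificate     /etc/nginx/one.crt;
        ssl_certificate_key /etc/nginx/one.key;
        ssl_session_tickets off;   # also governs resumption for two.example.com
    }

    server {
        listen 443 ssl;
        server_name two.example.com;
        ssl_certificate     /etc/nginx/two.crt;
        ssl_certificate_key /etc/nginx/two.key;
        ssl_session_tickets on;    # ignored for resumption purposes
    }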
From jtgeyser at gmail.com  Thu Jan 12 19:23:56 2017
From: jtgeyser at gmail.com (Jonathan Geyser)
Date: Thu, 12 Jan 2017 11:23:56 -0800
Subject: Nginx not honoring keepalive / multiple requests to http backend
 over single TCP session
In-Reply-To: 
References: 
Message-ID: 

Richard,

On further investigation -- it looks like the client was closing the
front-end connection. I need the back-end socket to remain open
regardless of what the front-end is doing. Is there a way to accomplish
this?

Thanks in advance,
Jonathan

On Tue, Jan 10, 2017 at 2:26 PM, Richard Stanway wrote:

> The FIN ACK suggests that the other side is responsible for closing the
> connection. If nginx was terminating the connection, there would be no
> ACK bit set. Check that your upstream server supports keepalive.
>
> On Tue, Jan 10, 2017 at 10:55 PM, Jonathan Geyser wrote:
>
>> Hi guys,
>>
>> I'm attempting to have multiple requests to a backend reuse the same
>> TCP session so as to avoid handshaking for each subsequent request.
>> Nginx appears to send FIN ACK to the backend after every request.
>>
>> Am I doing something wrong?
>>
>> Here is the current configuration: https://paste.ngx.cc/6c24411681f24790
>>
>> Thanks in advance,
>> Jonathan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From v.kozyrev.sa at gmail.com  Thu Jan 12 20:28:53 2017
From: v.kozyrev.sa at gmail.com (Vladimir Kozyrev)
Date: Thu, 12 Jan 2017 23:28:53 +0300
Subject: Adjusting server_names_hash_bucket_size before nginx throws an error
In-Reply-To: <20170112153827.GQ47718@mdounin.ru>
References: <20170112153827.GQ47718@mdounin.ru>
Message-ID: 

What a relief! Thanks Maxim!

On Thu, Jan 12, 2017 at 6:38 PM, Maxim Dounin wrote:

> The value of server_names_hash_bucket_size limits the maximum
> length of a server_name used.  If you use long server names, set
> it to the maximum name length expected (plus some additional space
> for two pointers).
>
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From steven.hartland at multiplay.co.uk  Thu Jan 12 23:33:59 2017
From: steven.hartland at multiplay.co.uk (Steven Hartland)
Date: Thu, 12 Jan 2017 23:33:59 +0000
Subject: Nginx not honoring keepalive / multiple requests to http backend
 over single TCP session
In-Reply-To: 
References: 
Message-ID: <307dbce0-99af-f073-c803-16f48687d83e@multiplay.co.uk>

I believe you want proxy_ignore_client_abort on to achieve that.

On 12/01/2017 19:23, Jonathan Geyser wrote:
> Richard,
>
> On further investigation -- it looks like the client was closing the
> front-end connection. I need the back-end socket to remain open
> regardless of what the front-end is doing. Is there a way to
> accomplish this?
>
> Thanks in advance,
> Jonathan
>
> On Tue, Jan 10, 2017 at 2:26 PM, Richard Stanway wrote:
>
>> The FIN ACK suggests that the other side is responsible for closing
>> the connection. If nginx was terminating the connection, there would
>> be no ACK bit set. Check that your upstream server supports keepalive.
>>
>> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
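For the keepalive thread above, the usual recipe for persistent upstream
connections looks like the sketch below; the upstream name and address
are hypothetical:

    upstream backend {
        server 10.0.0.2:8080;
        keepalive 16;                         # idle connections cached per worker
    }

    server {
        location / {
            proxy_pass http://backend;
            proxy_http_version 1.1;           # upstream keepalive requires HTTP/1.1
            proxy_set_header Connection "";   # drop the default "Connection: close"
            proxy_ignore_client_abort on;     # as suggested, survive client aborts
        }
    }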
> -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271971,271976#msg-271976 From nginx-forum at forum.nginx.org Fri Jan 13 09:51:26 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Fri, 13 Jan 2017 04:51:26 -0500 Subject: SSL Offloading in UDP load In-Reply-To: <323e4e45-98ac-1ae2-4102-875266e63d8b@nginx.com> References: <323e4e45-98ac-1ae2-4102-875266e63d8b@nginx.com> Message-ID: <3b76f5149ca39a2068de0e82e404ccfe.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim. I am looking for a scenario to load balance the LWM2M server (my backend servers would be LWM2M Servers). I am thinking of using the Nginx UDP loadbalancer for this. Now, if you look at the LW2M stack, it has DTLS over UDP. So, I was thinking if I could offload the DTLS traffic here. Any thoughts? Cheers, Santos Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271957,271979#msg-271979 From nginx-forum at forum.nginx.org Fri Jan 13 09:55:04 2017 From: nginx-forum at forum.nginx.org (nginxsantos) Date: Fri, 13 Jan 2017 04:55:04 -0500 Subject: COAP Reverse Proxy Message-ID: <60d571e76a0c07e1f649750d3c956853.NginxMailingListEnglish@forum.nginx.org> Hi, Anyone has any information of using Nginx as a Reverse Proxy for COAP. Looks like Nginx does not support this. But, does any third party module support this? Thanks, Santos Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271980,271980#msg-271980 From mdounin at mdounin.ru Fri Jan 13 15:09:24 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 13 Jan 2017 18:09:24 +0300 Subject: Set ssl_session_tickets each virtual host is unable? In-Reply-To: References: <20170112192020.GT47718@mdounin.ru> Message-ID: <20170113150924.GV47718@mdounin.ru> Hello! On Thu, Jan 12, 2017 at 07:30:23PM -0500, malloc813 wrote: > Maxim Dounin Wrote: > ------------------------------------------------------- > > Hello! > > > > On Thu, Jan 12, 2017 at 11:57:58AM -0500, malloc813 wrote: > > > > > Hi, I tested nginx configuration and got one problem. > > > For example, I made 2 virtual hosts. They are SSL enabled server. > > > > > > http > > > { > > > #host1 > > > server > > > { > > > ... > > > ssl_sesstion_tickets off; > > > ... > > > } > > > > > > #host2 > > > { > > > ... > > > ssl_session_tickets on; > > > ... > > > } > > > > > > } > > > > > > Visit host1 after apply this configuration, chrome shows an error > > > ERR_SSL_PROTOCOL_ERROR > > > > Works fine here. The ERR_SSL_PROTOCOL_ERROR is likely caused by > > other problems in the configuration. First of all try "nginx -t" > > to see if there are obvious errors in your config. > > > > I saw similar case like this: > https://community.letsencrypt.org/t/errors-from-browsers-with-ssl-session-tickets-off-nginx/18124 > I will test this problem with other system. Thanks, I was able to reproduce this. It happens in a situration reversed compared to the configuration you've proveded: if tickets are switchec off in a non-default server, and you try to connect to this non-default server. For example: server { listen 443 ssl; server_name one; ssl_session_tickets on; ... } server { listen 443 ssl; server_name two; ssl_session_tickets off; ... 
}

server {
    listen 443 ssl;
    server_name two;
    ssl_session_tickets off;
    ...
}

It seems that OpenSSL (1.0.2j) tries to honor the changed session
ticket preference, but fails to do this properly: it does not send the
SessionTicket extension, but still tries to send a NewSessionTicket
handshake message.  This causes problems with some browsers.

As of OpenSSL 1.1.0c it no longer tries to send a NewSessionTicket
handshake message in such a situation.  (Note though that session
tickets still won't work anywhere if disabled in the default server.)

> Does that mean that if I set ssl_session_cache and ssl_session_timeout
> in both the default server and a virtual host, nginx dismisses the
> virtual host's configuration and uses the default server's
> configuration too?

Yes.  Though this is not something nginx does, rather this is how
session resumption is implemented in OpenSSL.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org  Fri Jan 13 17:33:16 2017
From: nginx-forum at forum.nginx.org (bclod)
Date: Fri, 13 Jan 2017 12:33:16 -0500
Subject: Weird proxy_ssl_protocol ordering
Message-ID: 

Hello All,

I found some strange behavior while troubleshooting a connectivity
issue today. Below is the scenario.

* Upstream backend configured to allow TLSv1.1 and TLSv1.2
* Client (nginx) configured with proxy_ssl_protocols TLSv1 TLSv1.2

No matter the ordering of proxy_ssl_protocols, TLSv1 was always
attempted first and the handshake would fail. Once I added TLSv1.1, it
caused TLSv1.2 to be attempted first, which succeeded with the server.

Is this a bug? I always assumed that nginx would default to the highest
supported protocol outbound, but it seems that "TLSv1 TLSv1.2" might
introduce some sort of strange ordering issue.

We're using openresty 1.11.2.1.1, which internally uses nginx 1.11.2.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271984,271984#msg-271984

From al-nginx at none.at  Fri Jan 13 18:48:51 2017
From: al-nginx at none.at (Aleksandar Lazic)
Date: Fri, 13 Jan 2017 18:48:51 +0000
Subject: COAP Reverse Proxy
In-Reply-To: <60d571e76a0c07e1f649750d3c956853.NginxMailingListEnglish@forum.nginx.org>
References: <60d571e76a0c07e1f649750d3c956853.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hi.

On 13-01-2017 09:55, nginxsantos wrote:
> Hi,
>
> Does anyone have any information on using Nginx as a reverse proxy for
> CoAP? It looks like Nginx does not support this. But does any
> third-party module support it?

By CoAP, do you mean this?
https://en.wikipedia.org/wiki/Constrained_Application_Protocol

Maybe this mailing list answer could help you:
https://dev.eclipse.org/mhonarc/lists/leshan-dev/msg00566.html

Best regards
Aleks

> Thanks, Santos
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,271980,271980#msg-271980

From mdounin at mdounin.ru  Fri Jan 13 19:32:27 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 13 Jan 2017 22:32:27 +0300
Subject: Weird proxy_ssl_protocol ordering
In-Reply-To: 
References: 
Message-ID: <20170113193227.GY47718@mdounin.ru>

Hello!
On Fri, Jan 13, 2017 at 12:33:16PM -0500, bclod wrote:

> * Upstream backend configured to allow TLSv1.1 and TLSv1.2
> * Client (nginx) configured with proxy_ssl_protocols TLSv1 TLSv1.2
>
> [...]
>
> Is this a bug? I always assumed that nginx would default to the highest
> supported protocol outbound, but it seems that "TLSv1 TLSv1.2" might
> introduce some sort of strange ordering issue.

Sort of.  The same problem can be reproduced using openssl s_client
this way:

$ openssl s_client -no_tls1_1 -connect 127.0.0.1:443

The problem is that only _one_ protocol version can be sent in
ClientHello during a handshake, and it is expected to be the maximum
version supported by the client.  Depending on the OpenSSL version
you use, TLS 1.0 and TLS 1.2 (but no TLS 1.1) in your configuration
means either:

- TLS 1.2 in ClientHello (OpenSSL before 1.0.2); or
- TLS 1.0 in ClientHello (OpenSSL 1.0.2+).

Both options have their drawbacks.  In the first case a backend which
supports TLS 1.1 but not TLS 1.2 will see highest supported version
TLS 1.2, and will respond with TLS 1.1.  And this will fail, as
TLS 1.1 is not allowed by your configuration.  In the latter case a
backend which supports TLS 1.2 but not TLS 1.0 will immediately fail
as the requested version is too low (this is what happens in your
case).

I personally think that the previous behaviour was much more logical
and allowed users to configure whatever they want to.  But the change
was clearly intentional.  Please complain to the OpenSSL team if you
too think it was wrong.

Note though, that making "holes" in the protocol versions supported
by a client isn't generally a good idea, and is likely to cause
troubles.

-- 
Maxim Dounin
http://nginx.org/

From nginx-forum at forum.nginx.org  Fri Jan 13 19:40:35 2017
From: nginx-forum at forum.nginx.org (bclod)
Date: Fri, 13 Jan 2017 14:40:35 -0500
Subject: Weird proxy_ssl_protocol ordering
In-Reply-To: <20170113193227.GY47718@mdounin.ru>
References: <20170113193227.GY47718@mdounin.ru>
Message-ID: <769bc263970a0f53742a86ed795824c2.NginxMailingListEnglish@forum.nginx.org>

Maxim,

Thanks for the detailed reply! In the organization I work for, most
legacy backends support TLSv1, or support both TLSv1.1/1.2. Since every
backend that supports TLSv1.1 also supports TLSv1.2 (in my org, so far),
I thought I was doing a small favor by leaving TLSv1.1 out of scope.

I've changed the config to read "TLSv1.2 TLSv1.1 TLSv1" and will never
ever look back.

Thanks again!!!
Brandon

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271984,271987#msg-271987
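In configuration form, the fix bclod describes is simply a contiguous
protocol list; the backend name is hypothetical:

    location / {
        proxy_pass https://backend.example.com;
        # contiguous set of versions - avoid gaps such as "TLSv1 TLSv1.2"
        proxy_ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    }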
From maxim at nginx.com  Fri Jan 13 19:46:45 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Fri, 13 Jan 2017 22:46:45 +0300
Subject: SSL Offloading in UDP load
In-Reply-To: <3b76f5149ca39a2068de0e82e404ccfe.NginxMailingListEnglish@forum.nginx.org>
References: <323e4e45-98ac-1ae2-4102-875266e63d8b@nginx.com>
 <3b76f5149ca39a2068de0e82e404ccfe.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On 1/13/17 12:51 PM, nginxsantos wrote:
> Thanks Maxim.
> I am looking for a scenario to load balance the LWM2M server (my backend
> servers would be LWM2M servers). I am thinking of using the Nginx UDP
> load balancer for this. Now, if you look at the LWM2M stack, it has DTLS
> over UDP. So, I was thinking I could offload the DTLS traffic here.
>
> Any thoughts?

OK, thanks for sharing this.

Indeed, we do have this item in the stream module roadmap. I wouldn't
promise any ETA for this specific feature; we still need to figure out
the demand for it from the community.

-- 
Maxim Konovalov

From nginx-forum at forum.nginx.org  Fri Jan 13 21:42:01 2017
From: nginx-forum at forum.nginx.org (lacibaci)
Date: Fri, 13 Jan 2017 16:42:01 -0500
Subject: Basic authentication
Message-ID: <502a9a87c6ff52b8b81a075d633f97d6.NginxMailingListEnglish@forum.nginx.org>

I have a location that I would like to protect:

location /private {
    satisfy any;

    allow 192.168.1.0/24;
    deny all;

    auth_basic "Protected";
    auth_basic_user_file conf/htpasswd;
}

This works for /private, /private/ and /private/somefile.html.

However, when I request (GET or POST) /private/foo.php, it will execute
without auth.

How can I set it up so everything under /private is protected?

Thanks,
Lac

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271989,271989#msg-271989

From jim at ohlste.in  Fri Jan 13 22:05:43 2017
From: jim at ohlste.in (Jim Ohlstein)
Date: Fri, 13 Jan 2017 17:05:43 -0500
Subject: Basic authentication
In-Reply-To: <502a9a87c6ff52b8b81a075d633f97d6.NginxMailingListEnglish@forum.nginx.org>
References: <502a9a87c6ff52b8b81a075d633f97d6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <27a979a6-68f9-6eb2-64a1-d429ad078871@ohlste.in>

Hello,

On 01/13/2017 04:42 PM, lacibaci wrote:
> I have a location that I would like to protect:
>
> [...]
>
> However, when I request (GET or POST) /private/foo.php, it will execute
> without auth.
>
> How can I set it up so everything under /private is protected?

The request is probably being handled by a different location (i.e. the
one that handles PHP requests). See
http://nginx.org/en/docs/http/ngx_http_core_module.html#location for
the order in which the location is determined.

-- 
Jim Ohlstein

From nginx-forum at forum.nginx.org  Sat Jan 14 06:16:09 2017
From: nginx-forum at forum.nginx.org (lacibaci)
Date: Sat, 14 Jan 2017 01:16:09 -0500
Subject: Basic authentication
In-Reply-To: <27a979a6-68f9-6eb2-64a1-d429ad078871@ohlste.in>
References: <27a979a6-68f9-6eb2-64a1-d429ad078871@ohlste.in>
Message-ID: <9e2065590b52d5972f6784c63a14efcb.NginxMailingListEnglish@forum.nginx.org>

Thanks, I found it just above. It looks like this:

location ~* \.php {
    fastcgi_pass unix:/run/php-fpm/php56-fpm.sock;
}

I would like to keep the existing behavior (no user/password needed)
except when clients try to execute PHP in the /private... directory.
Something like this:

location ~* /private*\.php {
    satisfy any;

    allow 192.168.1.0/24;
    deny all;

    auth_basic "Protected";
    auth_basic_user_file conf/htpasswd;
}

BTW this is on a Synology NAS; there are about a dozen different config
files, so I want to ensure I don't break existing apps.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271989,271991#msg-271991
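One conventional way to get this effect is a "^~" prefix location, which
wins over the server-wide PHP regex location and carries its own PHP
handler. A sketch reusing the socket path quoted above:

    location ^~ /private {
        satisfy any;
        allow 192.168.1.0/24;
        deny  all;
        auth_basic           "Protected";
        auth_basic_user_file conf/htpasswd;

        # nested handler, so protected PHP is not caught by the outer "~* \.php"
        location ~* \.php {
            fastcgi_pass unix:/run/php-fpm/php56-fpm.sock;
        }
    }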
From nginx-forum at forum.nginx.org  Mon Jan 16 07:00:46 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Mon, 16 Jan 2017 02:00:46 -0500
Subject: nginx cache mounted on tmpf getting fulled
In-Reply-To: <3ffc3c3ed52e80136ef96945bcb16f43.NginxMailingListEnglish@forum.nginx.org>
References: <72d4ae08-54df-606a-c045-ff4b88dad3c6@kent.ac.uk>
 <3ffc3c3ed52e80136ef96945bcb16f43.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Can someone please respond and suggest the best way to manage the cache
in a highly utilized, cache-dependent environment? As the number of
nginx requests increases and the cache hit ratio becomes significant,
the tmpfs crosses the max_size limit mentioned in nginx.conf and uses
the full mounted cache. This in turn increases the server load to a
great extent.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271995#msg-271995

From sca at andreasschulze.de  Mon Jan 16 07:26:55 2017
From: sca at andreasschulze.de (A. Schulze)
Date: Mon, 16 Jan 2017 08:26:55 +0100
Subject: stream module on 100% cpu load
In-Reply-To: 
References: <20170103142038.Horde.rAxhfHMCPMZ3D73rldqchQI@andreasschulze.de>
Message-ID: <20170116082655.Horde.d5AmKDiP2h70r_iKZ6_5rcE@andreasschulze.de>

Vladimir Homutov:

> You may try the following patch:
>
> diff --git a/src/stream/ngx_stream_proxy_module.c
> b/src/stream/ngx_stream_proxy_module.c
> --- a/src/stream/ngx_stream_proxy_module.c
> +++ b/src/stream/ngx_stream_proxy_module.c
> @@ -1564,6 +1564,7 @@ ngx_stream_proxy_process(ngx_stream_sess
>          return;
>      }
>
> +    src->read->ready = 0;
>      src->read->eof = 1;
>      n = 0;
>  }

Hello Vladimir,

I can confirm the patch fixes the issue.
Thanks!

Andreas

From iippolitov at nginx.com  Mon Jan 16 09:41:56 2017
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Mon, 16 Jan 2017 12:41:56 +0300
Subject: nginx cache mounted on tmpf getting fulled
In-Reply-To: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org>
References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0ba04a5c-ee07-125f-948c-16569116ac1e@nginx.com>

Hello,

Your cache has 200m of space for keys. This is around 1.6M items, isn't
it? How many files do you have in your cache?
May we have a look at `df -i` and `du -s /cache/123` output, please?

On 06.01.2017 08:40, omkar_jadhav_20 wrote:
> Hi,
>
> I am using nginx as a webserver with nginx version: nginx/1.10.2. For
> faster access we have mounted the cache of nginx of different
> applications on RAM. But even after giving enough buffer of size, now
> and then the cache is getting filled. Below are a few details of the
> files for your reference:
> the maximum size given in the nginx conf file is 500G, while mounting
> we have given 600G of space, i.e. 100G of buffer. But still it is
> getting filled 100%.
>
> fstab entries :
> tmpfs /cache/123 tmpfs defaults,size=600G 0 0
> tmpfs /cache/456 tmpfs defaults,size=60G  0 0
> tmpfs /cache/789 tmpfs defaults,size=110G 0 0
>
> cache getting filled , df output:
>
> tmpfs tmpfs  60G  17G   44G  28% /cache/456
> tmpfs tmpfs 110G 323M  110G   1% /cache/789
> tmpfs tmpfs 600G 600G     0 100% /cache/123
>
> nginx conf details :
>
> proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g
> inactive=3d;
>
> server{
>     listen 80;
>     server_name dvr.catchup.com;
>     location ~.*.m3u8 {
>         access_log /var/log/nginx/access_123.log access;
>         proxy_cache off;
>         root /xyz/123;
>         if (!-e $request_filename) {
>             #origin url will be used if content is not available on DS
>             proxy_pass http://10.10.10.1X;
>         }
>     }
>     location / {
>         access_log /var/log/nginx/access_123.log access;
>         proxy_cache_valid 3d;
>         proxy_cache a123;
>         root /xyz/123;
>         if (!-e $request_filename) {
>             #origin url will be used if content is not available on server
>             proxy_pass http://10.10.10.1X;
>         }
>         proxy_cache_key $proxy_host$uri;
>     }
> }
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271842#msg-271842
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at forum.nginx.org  Mon Jan 16 12:10:41 2017
From: nginx-forum at forum.nginx.org (nicktgr15)
Date: Mon, 16 Jan 2017 07:10:41 -0500
Subject: Nginx record length during disk IO
In-Reply-To: <20170112154439.GR47718@mdounin.ru>
References: <20170112154439.GR47718@mdounin.ru>
Message-ID: <2438874af52b9970af15ed62f740d309.NginxMailingListEnglish@forum.nginx.org>

Thanks for the useful information Maxim!

We ended up using strace to monitor the system calls, and it looks like,
with our current setup (i.e. the default buffer size), the record length
is 65536 bytes.

read(17, "\355\247=^\256\36\361\235~\356z"..., 65536) = 65536
write(18, "\355\247=^\256\36\361\235~\356z"..., 65536) = 65536

At the moment we are setting use_temp_path=on, using tmpfs as our
temporary location, which triggers a failed rename system call, and
then the above read/write calls happen.

rename("/var/lib/nginx/tmp/proxy/1/00/0000001",
"/dev/shm/cache/a/f8/0c725bfea12c2f361c37") = -1 EXDEV (Invalid
cross-device link)

Do you think that having use_temp_path=on in combination with tmpfs has
any advantages, or should we set use_temp_path=off as suggested in the
docs?

Regards,
Nik

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271961,272003#msg-272003
default buffer size) the record length is > 65536 > bytes. > > read(17, "\355\247=^\256\36\361\235~\356z"..., 65536) = 65536 > write(18, "\355\247=^\256\36\361\235~\356z"..., 65536) = 65536 > > At the moment we are setting use_temp_path=on using tmpfs as our temporary > location which triggers a failed rename system call and then the above > read/write calls happen. > > rename("/var/lib/nginx/tmp/proxy/1/00/0000001", > "/dev/shm/cache/a/f8/0c725bfea12c2f361c37") = -1 EXDEV (Invalid > cross-device > link) > > Do you think that having use_temp_path=on in combination with tmpfs has any > advantages or we should set use_temp_path=off as suggested in the docs? > > Regards, > Nik > > Posted at Nginx Forum: https://forum.nginx.org/read. > php?2,271961,272003#msg-272003 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Oscar Fernandez Sierra oscaretu at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 16 12:35:40 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 16 Jan 2017 15:35:40 +0300 Subject: Nginx record length during disk IO In-Reply-To: <2438874af52b9970af15ed62f740d309.NginxMailingListEnglish@forum.nginx.org> References: <20170112154439.GR47718@mdounin.ru> <2438874af52b9970af15ed62f740d309.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170116123540.GA45866@mdounin.ru> Hello! On Mon, Jan 16, 2017 at 07:10:41AM -0500, nicktgr15 wrote: > Thanks for the useful information Maxim! > > We ended up using strace to monitor the system calls and it looks like that > with our current setup (i.e. default buffer size) the record length is 65536 > bytes. > > read(17, "\355\247=^\256\36\361\235~\356z"..., 65536) = 65536 > write(18, "\355\247=^\256\36\361\235~\356z"..., 65536) = 65536 > > At the moment we are setting use_temp_path=on using tmpfs as our temporary > location which triggers a failed rename system call and then the above > read/write calls happen. > > rename("/var/lib/nginx/tmp/proxy/1/00/0000001", > "/dev/shm/cache/a/f8/0c725bfea12c2f361c37") = -1 EXDEV (Invalid cross-device > link) This is exactly what proxy_cache_path documentation talks about (http://nginx.org/r/proxy_cache_path): : Starting from version 0.8.9, temporary files : and the cache can be put on different file systems. However, be : aware that in this case a file is copied across two file systems : instead of the cheap renaming operation. It is thus recommended : that for any given location both cache and a directory holding : temporary files are put on the same file system. Copying in question uses an internal 64k buffer. > Do you think that having use_temp_path=on in combination with tmpfs has any > advantages or we should set use_temp_path=off as suggested in the docs? Copying files across file systems is something usually better to avoid, even if one of the file systems is tmpfs. And almost always there are better ways to use memory then for temporary files on tmpfs. Note well that "use_temp_path=off" is mostly identical to "use_temp_path=on" and proxy_temp_path configured to use the same filesystem. It was introduced for complex configurations where different caches can be used, thus making it non-trivial to always configure proxy_temp_path on the same filesystem. 
-- Maxim Dounin http://nginx.org/ From mailinglist at unix-solution.de Mon Jan 16 13:31:49 2017 From: mailinglist at unix-solution.de (basti) Date: Mon, 16 Jan 2017 14:31:49 +0100 Subject: nginx ipv6 Message-ID: Hello, I have installed nginx (debian package) and try ipv6. Connection to http over ipv6 works. Connection to https over ipv6 get protocol error. nginx -V nginx version: nginx/1.9.10 built with OpenSSL 1.0.2j 26 Sep 2016 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fPIE -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt='-fPIE -pie -Wl,-z,relro -Wl,-z,now' --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_v2_module --with-http_sub_module --with-http_xslt_module --with-stream --with-stream_ssl_module --with-mail --with-mail_ssl_module --with-threads --add-module=/build/nginx-1.9.10/debian/modules/nginx-auth-pam --add-module=/build/nginx-1.9.10/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-1.9.10/debian/modules/nginx-echo --add-module=/build/nginx-1.9.10/debian/modules/nginx-upstream-fair --add-module=/build/nginx-1.9.10/debian/modules/ngx_http_substitutions_filter_module server { # listen 443 http2 reuseport; listen 443; ## listen for ipv4 listen [::]:443 ipv6only=on; ## listen for ipv6 server_name ssl.example.com; access_log /var/log/nginx/https.access.log; error_log /var/log/nginx/https.error.log warn; ssl on; ssl_certificate /var/lib/letsencrypt.sh/certs/ssl.example.com/fullchain.pem; ssl_certificate_key /var/lib/letsencrypt.sh/certs/ssl.example.com/privkey.pem; Error: SSL_ERROR_RX_RECORD_TOO_LONG [ipv6ip] - - [16/Jan/2017:14:25:58 +0100] "\x16\x03\x01\x00\xE2\x01\x00\x00\xDE\x03\x03\xD2m8" 400 173 "-" "-" Want's wrong here? Thanks for any help. Best regards, basti From alarig at swordarmor.fr Mon Jan 16 13:46:45 2017 From: alarig at swordarmor.fr (Alarig Le Lay) Date: Mon, 16 Jan 2017 14:46:45 +0100 Subject: nginx ipv6 In-Reply-To: References: Message-ID: <20170116134645.65ba2h73us7iaygf@kaiminus> Hi, On Mon Jan 16 14:31:49 2017, basti wrote: > server { > # listen 443 http2 reuseport; > listen 443; ## listen for ipv4 > listen [::]:443 ipv6only=on; ## listen for ipv6 > > server_name ssl.example.com; > > access_log /var/log/nginx/https.access.log; > error_log /var/log/nginx/https.error.log warn; > > ssl on; > ssl_certificate > /var/lib/letsencrypt.sh/certs/ssl.example.com/fullchain.pem; > ssl_certificate_key > /var/lib/letsencrypt.sh/certs/ssl.example.com/privkey.pem; > > Error: > > SSL_ERROR_RX_RECORD_TOO_LONG > [ipv6ip] - - [16/Jan/2017:14:25:58 +0100] > "\x16\x03\x01\x00\xE2\x01\x00\x00\xDE\x03\x03\xD2m8" 400 173 "-" "-" > > Want's wrong here? > Thanks for any help. 
You have to add the ssl keyword in your listen directive, like "listen
443 ssl; listen [::]:443 ssl;"

-- 
alarig
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 473 bytes
Desc: not available
URL: 

From mailinglist at unix-solution.de  Mon Jan 16 14:17:40 2017
From: mailinglist at unix-solution.de (basti)
Date: Mon, 16 Jan 2017 15:17:40 +0100
Subject: nginx ipv6
In-Reply-To: <20170116134645.65ba2h73us7iaygf@kaiminus>
References: <20170116134645.65ba2h73us7iaygf@kaiminus>
Message-ID: <1ca1245c-cc69-b159-d99b-0e3b6bd639f3@unix-solution.de>

Hello,

there was another problem: ssl_session_tickets was set to off. My config
is some days old, and this needs nginx >1.5.9. Turning ssl_session_tickets
on made everything work fine.

Best Regards,

On 16.01.2017 14:46, Alarig Le Lay wrote:
> Hi,
> 
> On Mon Jan 16 14:31:49 2017, basti wrote:
>> server {
>> # listen 443 http2 reuseport;
>> listen 443; ## listen for ipv4
>> listen [::]:443 ipv6only=on; ## listen for ipv6
>> 
>> server_name ssl.example.com;
>> 
>> access_log /var/log/nginx/https.access.log;
>> error_log /var/log/nginx/https.error.log warn;
>> 
>> ssl on;
>> ssl_certificate
>> /var/lib/letsencrypt.sh/certs/ssl.example.com/fullchain.pem;
>> ssl_certificate_key
>> /var/lib/letsencrypt.sh/certs/ssl.example.com/privkey.pem;
>> 
>> Error:
>> 
>> SSL_ERROR_RX_RECORD_TOO_LONG
>> [ipv6ip] - - [16/Jan/2017:14:25:58 +0100]
>> "\x16\x03\x01\x00\xE2\x01\x00\x00\xDE\x03\x03\xD2m8" 400 173 "-" "-"
>> 
>> Want's wrong here?
>> Thanks for any help.
> 
> You have to add the ssl keyword in your listen directive, like "listen
> 443 ssl; listen [::]:443 ssl;"
> 
> 
> 
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
> 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 819 bytes
Desc: OpenPGP digital signature
URL: 

From peter_booth at me.com  Tue Jan 17 01:37:21 2017
From: peter_booth at me.com (Peter Booth)
Date: Mon, 16 Jan 2017 20:37:21 -0500
Subject: nginx cache mounted on tmpf getting fulled
In-Reply-To: <0ba04a5c-ee07-125f-948c-16569116ac1e@nginx.com>
References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org>
 <0ba04a5c-ee07-125f-948c-16569116ac1e@nginx.com>
Message-ID: <2F398FCE-924D-4D8D-AC78-C00383A7F1B2@me.com>

I'm curious: why are you using tmpfs for your cache store? With fast local storage being so cheap, why don't you devote a few TB to your cache?

When I look at the TechEmpower benchmarks I see that OpenResty (an nginx build that comes with lots of Lua value-add) can serve 440,000 JSON responses per sec with 3ms latency. That's on five-year-old E7-4850 Westmere hardware at 2.0GHz, with 10G NICs. The minimum latency to get a packet from nginx through the kernel stack and onto the wire is about 4us for a NIC of that vintage, dropping to 2us with OpenOnload (Solarflare's kernel bypass).

As Ippolitov suggests, your cache already has room for 1.6M items - that's a huge amount. What kind of hit rate are you seeing for your cache?

One way to manage cache size is to only cache popular items: if you set proxy_cache_min_uses=4 then only objects that are requested four times will be cached, which will increase your hit rate and reduce the space needed for the cache.

Peter

Sent from my iPhone

> On Jan 16, 2017, at 4:41 AM, Igor A. Ippolitov wrote:
> 
> Hello,
> 
> Your cache have 200m space for keys.
This is around 1.6M items, isn't it? > How much files do you have in your cache? May we have a look at > `df -i ` and `du -s /cache/123` output, please? > >> On 06.01.2017 08:40, omkar_jadhav_20 wrote: >> Hi, >> >> I am using nginx as webserver with nginx version: nginx/1.10.2. For faster >> access we have mounted cache of nginx of different application on RAM.But >> even after giving enough buffer of size , now and then cache is getting >> filled , below are few details of files for your reference : >> maximum size given in nginx conf file is 500G , while mouting we have given >> 600G of space i.e. 100G of buffer.But still it is getting filled 100%. >> >> fstab entries : >> tmpfs /cache/123 tmpfs defaults,size=600G >> 0 0 >> tmpfs /cache/456 tmpfs defaults,size=60G >> 0 0 >> tmpfs /cache/789 tmpfs defaults,size=110G >> 0 0 >> >> cache getting filled , df output: >> >> tmpfs tmpfs 60G 17G 44G 28% >> /cache/456 >> tmpfs tmpfs 110G 323M 110G 1% >> /cache/789 >> tmpfs tmpfs 600G 600G 0 100% >> /cache/123 >> >> nginx conf details : >> >> proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g >> inactive=3d; >> >> server{ >> listen 80; >> server_name dvr.catchup.com; >> location ~.*.m3u8 { >> access_log /var/log/nginx/access_123.log access; >> proxy_cache off; >> root /xyz/123; >> if (!-e $request_filename) { >> #origin url will be used if content is not available on DS >> proxy_pass http://10.10.10.1X; >> } >> } >> location / { >> access_log /var/log/nginx/access_123.log access; >> proxy_cache_valid 3d; >> proxy_cache a123; >> root /xyz/123; >> if (!-e $request_filename) { >> #origin url will be used if content is not available on server >> proxy_pass http://10.10.10.1X; >> } >> proxy_cache_key $proxy_host$uri; >> } >> } >> >> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271842#msg-271842 >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From steve at greengecko.co.nz Tue Jan 17 03:34:07 2017 From: steve at greengecko.co.nz (steve) Date: Tue, 17 Jan 2017 16:34:07 +1300 Subject: nginx cache mounted on tmpf getting fulled In-Reply-To: <2F398FCE-924D-4D8D-AC78-C00383A7F1B2@me.com> References: <147309f3a8a04c645b863e86c0ef62f9.NginxMailingListEnglish@forum.nginx.org> <0ba04a5c-ee07-125f-948c-16569116ac1e@nginx.com> <2F398FCE-924D-4D8D-AC78-C00383A7F1B2@me.com> Message-ID: Hi, On 01/17/2017 02:37 PM, Peter Booth wrote: > I'm curious, why are you using tmpfs for your cache store? With fast local storage bring so cheap, why don't you devote a few TB to your cache? > > When I look at the techempower benchmarks I see that openresty (an nginx build that comes with lots of lua value add) can serve 440,000 JSON responses per sec with 3ms latency. That's on five year old E7-4850 Westmere hardware at 2.0GHz, with 10G NICs. The min latency to get a packet from nginx through the kernel stack and onto the wire is about 4uS for a NIC of that vintage, dropping to 2uS with openonload (sokarflare's kernel bypass). > > As ippolitiv suggests, your cache already has room for 1.6M items- that's a huge amount. What kind of hit rate are you seeing for your cache? 
> > One way to manage cache size is to only cache popular items- if you set proxy_cache_min_uses =4 then only objects that are requested four times will be cached, which will increase your hit rates and reduce the space needed for the cache. > > Peter > > Sent from my iPhone > >> On Jan 16, 2017, at 4:41 AM, Igor A. Ippolitov wrote: >> >> Hello, >> >> Your cache have 200m space for keys. This is around 1.6M items, isn't it? >> How much files do you have in your cache? May we have a look at >> `df -i ` and `du -s /cache/123` output, please? >> >>> On 06.01.2017 08:40, omkar_jadhav_20 wrote: >>> Hi, >>> >>> I am using nginx as webserver with nginx version: nginx/1.10.2. For faster >>> access we have mounted cache of nginx of different application on RAM.But >>> even after giving enough buffer of size , now and then cache is getting >>> filled , below are few details of files for your reference : >>> maximum size given in nginx conf file is 500G , while mouting we have given >>> 600G of space i.e. 100G of buffer.But still it is getting filled 100%. >>> >>> fstab entries : >>> tmpfs /cache/123 tmpfs defaults,size=600G >>> 0 0 >>> tmpfs /cache/456 tmpfs defaults,size=60G >>> 0 0 >>> tmpfs /cache/789 tmpfs defaults,size=110G >>> 0 0 >>> >>> cache getting filled , df output: >>> >>> tmpfs tmpfs 60G 17G 44G 28% >>> /cache/456 >>> tmpfs tmpfs 110G 323M 110G 1% >>> /cache/789 >>> tmpfs tmpfs 600G 600G 0 100% >>> /cache/123 >>> >>> nginx conf details : >>> >>> proxy_cache_path /cache/123 keys_zone=a123:200m levels=1:2 max_size=500g >>> inactive=3d; >>> >>> server{ >>> listen 80; >>> server_name dvr.catchup.com; >>> location ~.*.m3u8 { >>> access_log /var/log/nginx/access_123.log access; >>> proxy_cache off; >>> root /xyz/123; >>> if (!-e $request_filename) { >>> #origin url will be used if content is not available on DS >>> proxy_pass http://10.10.10.1X; >>> } >>> } >>> location / { >>> access_log /var/log/nginx/access_123.log access; >>> proxy_cache_valid 3d; >>> proxy_cache a123; >>> root /xyz/123; >>> if (!-e $request_filename) { >>> #origin url will be used if content is not available on server >>> proxy_pass http://10.10.10.1X; >>> } >>> proxy_cache_key $proxy_host$uri; >>> } >>> } >>> >>> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,271842#msg-271842 >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx So that's a total of 1TB of memory allocated to caches. Do you have that much spare on your server? Linux will allocate *up to* the specified amount *as long as it's spare*. It would be worth looking at your server to ensure that 1TB memory is spare before blaming nginx. You can further improve performance and safety by mounting them nodev,noexec,nosuid,noatime,async,size=xxxM,mode=0755,uid=xx,gid=xx To answer this poster... memory is even faster! 
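For reference, the proxy_cache_min_uses idea quoted above is a one-line
addition to the configuration posted earlier in this thread (a sketch
only; the zone name, root and upstream address are copied from that post):

    location / {
        proxy_cache          a123;
        proxy_cache_min_uses 4;  # cache an object only after its 4th request
        proxy_cache_valid    3d;
        proxy_cache_key      $proxy_host$uri;
        root /xyz/123;
        if (!-e $request_filename) {
            proxy_pass http://10.10.10.1X;
        }
    }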
Steve -- Steve Holdoway BSc(Hons) MIITP http://www.greengecko.co.nz Linkedin: http://www.linkedin.com/in/steveholdoway Skype: sholdowa From nginx-forum at forum.nginx.org Tue Jan 17 16:48:51 2017 From: nginx-forum at forum.nginx.org (mastercan) Date: Tue, 17 Jan 2017 11:48:51 -0500 Subject: $request_method variable shows wrong value Message-ID: <9062569e2e5ee415880540de10ce3a30.NginxMailingListEnglish@forum.nginx.org> I have a setup where I'm using an Accel-Redirect header in php like this: header("X-Accel-Redirect: /testxxx.php?".$_SERVER['QUERY_STRING']); Furthermore I'm using HTTP/2.0 and SSL, running on nginx 1.11.8. The problem is: When doing a POST request on my upload.php (which then does an x-accel-redirect to testxxx.php) the $request_method has the value "GET". I have a section in my nginx php config where I use "add_header" to output the value of $request_method. So that's how I know this value is set to "GET". Some weeks ago this was working correct. All I changed since then was switching to HTTPS. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272029,272029#msg-272029 From nginx-forum at forum.nginx.org Tue Jan 17 22:24:43 2017 From: nginx-forum at forum.nginx.org (vegetax) Date: Tue, 17 Jan 2017 17:24:43 -0500 Subject: proxy_bind Message-ID: Hi, I am getting "invalid number of arguments" every time I add proxy_bind $remote_addr transparent to my configs below can someone help and let me know what I am missing. Thx stream { upstream splunk_backend { server 10.10.10.31:514; server 10.10.10.32:514; } server { listen 10.10.10.43:514; proxy_bind $remote_addr transparent; listen 514 udp; proxy_connect_timeout 60s; proxy_timeout 5m; proxy_pass splunk_backend; proxy_buffer_size 2048k; proxy_next_upstream_timeout 0; error_log /var/log/nginx/splunk.log info; } nginx -t nginx: [emerg] invalid number of arguments in "proxy_bind" directive in /etc/nginx/nginx.conf:47 nginx: configuration file /etc/nginx/nginx.conf test failed Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272031,272031#msg-272031 From francis at daoine.org Tue Jan 17 22:58:18 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 17 Jan 2017 22:58:18 +0000 Subject: proxy_bind In-Reply-To: References: Message-ID: <20170117225818.GS2958@daoine.org> On Tue, Jan 17, 2017 at 05:24:43PM -0500, vegetax wrote: Hi there, > Hi, I am getting "invalid number of arguments" every time I add proxy_bind > $remote_addr transparent to my configs below can someone help and let me > know what I am missing. Thx My guess: compare the content at http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_bind with the output of your "nginx -v" f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jan 17 23:29:34 2017 From: nginx-forum at forum.nginx.org (vegetax) Date: Tue, 17 Jan 2017 18:29:34 -0500 Subject: proxy_bind In-Reply-To: <20170117225818.GS2958@daoine.org> References: <20170117225818.GS2958@daoine.org> Message-ID: <93c9f8612c4652b36c63ccf74a0386be.NginxMailingListEnglish@forum.nginx.org> so following the link you have posted my version does not support proxy_bind? # My Version nginx version: nginx/1.10.2 syntax: proxy_bind address [transparent] | off; Default: ? Context: stream, server This directive appeared in version 1.9.2. Makes outgoing connections to a proxied server originate from the specified local IP address. Parameter value can contain variables (1.11.2). 
The special value off cancels the effect of the proxy_bind directive inherited from the previous configuration level, which allows the system to auto-assign the local IP address. The transparent parameter (1.11.0) allows outgoing connections to a proxied server originate from a non-local IP address, for example, from a real IP address of a client: proxy_bind $remote_addr transparent; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272031,272033#msg-272033 From francis at daoine.org Tue Jan 17 23:55:43 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 17 Jan 2017 23:55:43 +0000 Subject: proxy_bind In-Reply-To: <93c9f8612c4652b36c63ccf74a0386be.NginxMailingListEnglish@forum.nginx.org> References: <20170117225818.GS2958@daoine.org> <93c9f8612c4652b36c63ccf74a0386be.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170117235543.GT2958@daoine.org> On Tue, Jan 17, 2017 at 06:29:34PM -0500, vegetax wrote: Hi there, > so following the link you have posted my version does not support > proxy_bind? Incorrect. The documentation says """ > This directive appeared in version 1.9.2. """ It also says """ > The transparent parameter (1.11.0) """ so you'll want at least that version if you want to use "transparent". And there is another version requirement if you want to use a $variable in the parameter. > # My Version > nginx version: nginx/1.10.2 Your version does not support using "transparent", or using "$remote_addr", but it does support proxy_bind. If you want to use the config you showed, you will need an updated nginx version. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Jan 18 07:21:12 2017 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 18 Jan 2017 02:21:12 -0500 Subject: cache manager of nginx not clearing cache as expected Message-ID: <82983d22aa0407fc072b29796b7f8cc4.NginxMailingListEnglish@forum.nginx.org> Hi, we are running nginx with version 1.10 on redhat 7.2 OS. We have set max_seize limit for cache of particular application. But we have observed that cache manager is clearing the space from cache but not at desired speed . below are few details : Filesystem Type Size Used Avail Use% Mounted on /dev/mapper/vg_cache-lv_cache ext4 1.5T 1.4T 6.9M 100% /cache ------------------------------------ du -sh /cache/12007 243G 12007 -------------------- nginx conf corresponding line : proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2 max_size=200g inactive=10d; ---------------- In error.log we are getting continuous no space left on the device errors. Please advise what needs to be done to avoid these types of scenario. We are getting very frequent error of such types in out environment. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272036,272036#msg-272036 From iippolitov at nginx.com Wed Jan 18 09:06:29 2017 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Wed, 18 Jan 2017 12:06:29 +0300 Subject: cache manager of nginx not clearing cache as expected In-Reply-To: <82983d22aa0407fc072b29796b7f8cc4.NginxMailingListEnglish@forum.nginx.org> References: <82983d22aa0407fc072b29796b7f8cc4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9d44d59b-d7d8-62b7-192d-b14ed747ebbb@nginx.com> Hello, omkar_jadhav_20 It might occur that nginx keeps files opened while serving them to slow clients. You can try looking for these files with 'lsof -n /cache | grep deleted'. In a case like this you might want to set a timeout for clients or use smaller files/chunks. 
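If you go the timeout route, the relevant client-side directive is
send_timeout; a sketch (the value is an arbitrary example, not a
recommendation):

    server {
        # close the connection when a client stalls for more than 30s
        # between two successive writes of the response
        send_timeout 30s;
    }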
It looks like nginx cache manager maintains cache size around 200g as requested (243G in your case) You have to ensure this is exactly cache directory which occupy whole FS. On 18.01.2017 10:21, omkar_jadhav_20 wrote: > Hi, > > we are running nginx with version 1.10 on redhat 7.2 OS. We have set > max_seize limit for cache of particular application. But we have observed > that cache manager is clearing the space from cache but not at desired speed > . below are few details : > Filesystem Type Size Used Avail Use% Mounted on > > /dev/mapper/vg_cache-lv_cache ext4 1.5T 1.4T 6.9M 100% /cache > > ------------------------------------ > du -sh /cache/12007 > 243G 12007 > -------------------- > nginx conf corresponding line : > proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2 > max_size=200g inactive=10d; > > ---------------- > > In error.log we are getting continuous no space left on the device errors. > Please advise what needs to be done to avoid these types of scenario. We are > getting very frequent error of such types in out environment. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272036,272036#msg-272036 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Jan 18 10:44:14 2017 From: nginx-forum at forum.nginx.org (omkar_jadhav_20) Date: Wed, 18 Jan 2017 05:44:14 -0500 Subject: cache manager of nginx not clearing cache as expected In-Reply-To: <9d44d59b-d7d8-62b7-192d-b14ed747ebbb@nginx.com> References: <9d44d59b-d7d8-62b7-192d-b14ed747ebbb@nginx.com> Message-ID: I can not see any open files once i fire the command on affected servers : lsof -n /cache | grep deleted Also my question is why nginx is not limiting use of cache directory to its max_size , like in this case I have mentioned max_size for cache directory as 200G but in actual particular cache directory is crossing 200G of mark i.e going up by 243 G. Could you please exaplain why this happens and what needs to be done to avoid such FS full scenarios which causes all requests to go upstream due to lack of free space in FS. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272036,272038#msg-272038 From mdounin at mdounin.ru Wed Jan 18 12:32:04 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 Jan 2017 15:32:04 +0300 Subject: $request_method variable shows wrong value In-Reply-To: <9062569e2e5ee415880540de10ce3a30.NginxMailingListEnglish@forum.nginx.org> References: <9062569e2e5ee415880540de10ce3a30.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170118123204.GG45866@mdounin.ru> Hello! On Tue, Jan 17, 2017 at 11:48:51AM -0500, mastercan wrote: > I have a setup where I'm using an Accel-Redirect header in php like this: > header("X-Accel-Redirect: /testxxx.php?".$_SERVER['QUERY_STRING']); > > Furthermore I'm using HTTP/2.0 and SSL, running on nginx 1.11.8. > > The problem is: When doing a POST request on my upload.php (which then does > an x-accel-redirect to testxxx.php) the $request_method has the value > "GET". That's correct. After a redirection via X-Accel-Redirect to an URI, the request method is changed to GET, much like it happens after error_page redirections (see http://nginx.org/r/error_page). If you want to preserve the original request method, use a redirection to a named location. > I have a section in my nginx php config where I use "add_header" to output > the value of $request_method. 
So that's how I know this value is set to > "GET". > > Some weeks ago this was working correct. All I changed since then was > switching to HTTPS. This might work previously with old nginx versions, as previously there was a bug which preserved original request method string representation after X-Accel-Redirect. The bug was fixed in nginx 1.9.10: *) Bugfix: proxying used the HTTP method of the original request after an "X-Accel-Redirect" redirection. This also affects the $request_method variable value you use. -- Maxim Dounin http://nginx.org/ From iippolitov at nginx.com Wed Jan 18 19:17:31 2017 From: iippolitov at nginx.com (Igor A. Ippolitov) Date: Wed, 18 Jan 2017 22:17:31 +0300 Subject: cache manager of nginx not clearing cache as expected In-Reply-To: References: <9d44d59b-d7d8-62b7-192d-b14ed747ebbb@nginx.com> Message-ID: <292c93c0-a449-af98-f465-df2f148b08a9@nginx.com> max_size is not a strict limit. It's just another watermark for cache manager to start deleting files. Also, there might be a difference between du and cache manager used space estimation (https://github.com/nginx/nginx/blob/f8a9d528df92c7634088e575e5c3d63a1d4ab8ea/src/os/unix/ngx_files.h#L188) If your cache gains 25% excess in between of cache manager invocations, it looks like there is too much data to cache. You might want to split cache across several nginx instances with an additional nginx balancer in front of them. You may also try upgrading nginx to the latest version where cache manager parameters were added (http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_path , manager_files, manager_threshold, and manager_sleep). Using those parameters and knowing your traffic pattern you can make a better estimation of required cache size. You can get newest nginx version from nginx repository: http://nginx.org/en/linux_packages.html On 18.01.2017 13:44, omkar_jadhav_20 wrote: > I can not see any open files once i fire the command on affected servers : > lsof -n /cache | grep deleted > Also my question is why nginx is not limiting use of cache directory to its > max_size , like in this case I have mentioned max_size for cache directory > as 200G but in actual particular cache directory is crossing 200G of mark > i.e going up by 243 G. > Could you please exaplain why this happens and what needs to be done to > avoid such FS full scenarios which causes all requests to go upstream due to > lack of free space in FS. 
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272036,272038#msg-272038 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Wed Jan 18 19:18:12 2017 From: nginx-forum at forum.nginx.org (vegetax) Date: Wed, 18 Jan 2017 14:18:12 -0500 Subject: proxy_bind In-Reply-To: <20170117235543.GT2958@daoine.org> References: <20170117235543.GT2958@daoine.org> Message-ID: <02b9608a923a23b9e312226df4adcc30.NginxMailingListEnglish@forum.nginx.org> I am running centos 6.8 all the repo's I am trying from centos to epel just have version 1.10.2 for my OS I think I would need to use Centos 7 to use 1.11.0 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272031,272042#msg-272042 From nginx-forum at forum.nginx.org Wed Jan 18 19:44:50 2017 From: nginx-forum at forum.nginx.org (Jim Ohlstein) Date: Wed, 18 Jan 2017 14:44:50 -0500 Subject: proxy_bind In-Reply-To: <02b9608a923a23b9e312226df4adcc30.NginxMailingListEnglish@forum.nginx.org> References: <20170117235543.GT2958@daoine.org> <02b9608a923a23b9e312226df4adcc30.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3da485ef2e0a8a13b7f675d18a651717.NginxMailingListEnglish@forum.nginx.org> http://nginx.org/en/linux_packages.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272031,272043#msg-272043 From daniel at linux-nerd.de Wed Jan 18 20:52:01 2017 From: daniel at linux-nerd.de (Daniel) Date: Wed, 18 Jan 2017 21:52:01 +0100 Subject: Wildcard docroot? Message-ID: <012E9AEA-7591-4197-875F-7BC32873E66A@linux-nerd.de> Hi there, i wanted to try something like a Wildcard DocRoot: server { listen 80; root /var/www/branches/*/current/web/; server_name auto.deploy.fcse.int; The Setup looks like this: /var/www/branches/develop/current/web/ /var/www/branches/master/current/web/ /var/www/branches/feature1/current/web/ /var/www/branches/feature2/current/web/ I wanted now to open the URL like this: auto.deploy.fcse.int/master/ or /develop and so on. The Problem is that all ?projects? are Symfony projects so current/web must always be set :-/ I hope you guys understand what mean ;) Cheers Daniel -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 18 22:42:04 2017 From: nginx-forum at forum.nginx.org (powellchristoph) Date: Wed, 18 Jan 2017 17:42:04 -0500 Subject: Rate limiting and SMTP proxy Message-ID: <8821a3f50d8333efb2295fc2cb4dd131.NginxMailingListEnglish@forum.nginx.org> Hello, I was wondering if someone could clarify if the 'ngx_http_limit_req_module' would rate limit an smtp proxy. The rate-limiting module says that you can use the 'limit_req' within a server context. Would I be able to use it within the server context of a mail block? But the 'limit_req_zone' can only be declared within an http context. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272046,272046#msg-272046 From nginx-forum at forum.nginx.org Wed Jan 18 23:42:52 2017 From: nginx-forum at forum.nginx.org (tuachotu) Date: Wed, 18 Jan 2017 18:42:52 -0500 Subject: Worker process monitoring Message-ID: <64e296d48e86e1e31b71a506c018b161.NginxMailingListEnglish@forum.nginx.org> Hello, I need some help on monitoring worker process. What is the recommended way of monitoring.. 1. Worker process restarts (to build an alert for frequent restart) 2. 
Count of worker process (nginx is started with "auto" as worker process count) TIA, Vikrant Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272047,272047#msg-272047 From tjlp at sina.com Thu Jan 19 01:02:34 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Thu, 19 Jan 2017 09:02:34 +0800 Subject: why "connection: close" header is added when the request is passed to upstream server? Message-ID: <20170119010234.EADC510200E3@webmail.sinamail.sina.com.cn> Hi, Nginx guy, I use Nginx in the Kubernetes. With the upstream server log, I find that the header "connection: close" is added when the request is passed to upstream server. Why? What I hope is the original header relating to connection status should be passed to upstream server without any change. That means: if the original request header has no connection header, "connection" header should not be added. The final expected behavior is: if no connection header in the request header, the session should be keep-alive, if there is "connection: close" header, the session should be terminated after the response is returned. Thanks Liu Peng -------------- next part -------------- An HTML attachment was scrubbed... URL: From aapo.talvensaari at gmail.com Thu Jan 19 01:11:21 2017 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Thu, 19 Jan 2017 03:11:21 +0200 Subject: Wildcard docroot? In-Reply-To: <012E9AEA-7591-4197-875F-7BC32873E66A@linux-nerd.de> References: <012E9AEA-7591-4197-875F-7BC32873E66A@linux-nerd.de> Message-ID: On Wed, Jan 18, 2017 at 10:52 PM, Daniel wrote: > Hi there, > > i wanted to try something like a Wildcard DocRoot: > > > server { > listen 80; > root /var/www/branches/*/current/web/; > server_name auto.deploy.fcse.int; > I use this: server { listen [::1]:80 default_server deferred reuseport so_keepalive=on ipv6only=on; listen 127.0.0.1:80 default_server deferred reuseport so_keepalive=on; server_name ~(?[^\.]+)\.dev$; root /Users/bungle/Sites/$site/html; index index.php; # ... } I'm sure you can figure out from there. Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Thu Jan 19 09:04:46 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 19 Jan 2017 10:04:46 +0100 Subject: ssl_protocols & SNI Message-ID: Hello, I tried to overload the value of my default ssl_protocols (http block level) in a server block. It did not seem to apply the other value in this virtuel server only. Since I use SNI on my OpenSSL implementation, which perfectly works to support multiple virtual servers, I wonder why this SNI capability isn't leveraged to apply different TLS environment depending on the SNI value and the TLS directives configured for the virtual server of the asked domain. Can SNI be used for other TLS configuration directives other than certificates? More generally, is it normal you cannot overload directives such as ssl_protocols or ssl_ciphers in a specific virtual server, using the same socket as others? If positive, would it be possible to use SNI to tweak TLS connections envrionment depending on domain? --- *B. R.* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Thu Jan 19 10:33:21 2017 From: nginx-forum at forum.nginx.org (forumacct) Date: Thu, 19 Jan 2017 05:33:21 -0500 Subject: Use proxy_pass to forward traffic to owncloud server Message-ID: <288531e1387566dce9642f061ef0ef5e.NginxMailingListEnglish@forum.nginx.org> Hello All, I have 2 Raspberry Pi both with nginx. RPI#1 is plain website (using http) (listening on port 8000) (local IP 192.168.11.170)(nginx : 1.2.1-2.2 ) RPI#2 is an owncloud server (using https) (local IP 192.168.11.176)(1.6.2-5+deb8u4) My dyndns domain name gets routed to RPI#1. nginx on RPI#1 uses following server & proxy_pass stanza: server { listen 8000; ## listen for ipv4; this line is default and implied listen 80; listen 443; #listen [::]:80 default_server ipv6only=on; ## listen for ipv6 root /media/usbstick/nginx/www; index index.php index.html index.htm; # Make site accessible from http://localhost/ server_name rpi1; ... other location stanzas ... location /owncloud { proxy_pass http://192.168.11.176:80; } If I put a minimum http config file on RPI#2 it works. upstream php-handler { server 127.0.0.1:9000; } server { listen 80; root /media/usbstick/nginx/www/owncloud; server_name rpi3; location / { try_files $uri $uri/ /index.html; ssi on; } } With following index.html file. Under Construction
This page is under construction. Please come back soon!
When entering 192.168.11.170/owncloud I get the 'under construction' message. That is confirmed from local LAN or from WAN when using my dyndns domain. When I'm using the nginx config provided by owncloud tutorial: https://normally.online/2016/04/29/owncloud-9-0-1-on-raspberry-pi-3-step-by-step/ This one puts all traffic on https. ... upstream php-handler { server 127.0.0.1:9000; #server unix:/var/run/php5-fpm.sock; } server { listen 80; server_name xxxxxxx.dyndns.ws; return 301 https://$server_name$request_uri; # enforce https } server { listen 443 ssl; server_name xxxxxxx.dyndns.ws; add_header Strict-Transport-Security "max-age=31536000; includeSubDomains"; ... more stuff ... I get this error (firefox). An error occurred during a connection to xxxxxx.dyndns.ws. SSL received a record that exceeded the maximum permissible length. (Error code: ssl_error_rx_record_too_long) What's the proper setup of getting the https traffic to work? Thanks, Gert Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272050,272050#msg-272050 From mdounin at mdounin.ru Thu Jan 19 13:12:37 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jan 2017 16:12:37 +0300 Subject: Rate limiting and SMTP proxy In-Reply-To: <8821a3f50d8333efb2295fc2cb4dd131.NginxMailingListEnglish@forum.nginx.org> References: <8821a3f50d8333efb2295fc2cb4dd131.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170119131237.GI45866@mdounin.ru> Hello! On Wed, Jan 18, 2017 at 05:42:04PM -0500, powellchristoph wrote: > Hello, > > I was wondering if someone could clarify if the 'ngx_http_limit_req_module' > would rate limit an smtp proxy. The rate-limiting module says that you can > use the 'limit_req' within a server context. Would I be able to use it > within the server context of a mail block? But the 'limit_req_zone' can only > be declared within an http context. The ngx_http_limit_req_module is a HTTP module, you can't use it in the mail module. On the other hand, you can use it to limit requests to auth_http script, providing essentially the same functionality. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 19 13:23:23 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jan 2017 16:23:23 +0300 Subject: why "connection: close" header is added when the request is passed to upstream server? In-Reply-To: <20170119010234.EADC510200E3@webmail.sinamail.sina.com.cn> References: <20170119010234.EADC510200E3@webmail.sinamail.sina.com.cn> Message-ID: <20170119132323.GJ45866@mdounin.ru> Hello! On Thu, Jan 19, 2017 at 09:02:34AM +0800, tjlp at sina.com wrote: > I use Nginx in the Kubernetes. With the upstream server log, I > find that the header "connection: close" is added when the > request is passed to upstream server. Why? Because the connection between nginx and the upstream server is a separate connection, and by default nginx isn't going to keep it alive. Keepalive connections to upstream servers can be configured as documented here: http://nginx.org/r/keepalive > What I hope is the original header relating to connection status > should be passed to upstream server without any change. That > means: if the original request header has no connection header, > "connection" header should not be added. 
The connection between nginx and the backend server is a completely separate connection, and it is expected to have it's own hop-by-hop headers, see RFC 2616 here: https://tools.ietf.org/html/rfc2616#section-13.5.1 -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu Jan 19 13:36:55 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 19 Jan 2017 16:36:55 +0300 Subject: ssl_protocols & SNI In-Reply-To: References: Message-ID: <20170119133655.GK45866@mdounin.ru> Hello! On Thu, Jan 19, 2017 at 10:04:46AM +0100, B.R. via nginx wrote: > Hello, > > I tried to overload the value of my default ssl_protocols (http block > level) in a server block. > It did not seem to apply the other value in this virtuel server only. > > Since I use SNI on my OpenSSL implementation, which perfectly works to > support multiple virtual servers, I wonder why this SNI capability isn't > leveraged to apply different TLS environment depending on the SNI value and > the TLS directives configured for the virtual server of the asked domain. > Can SNI be used for other TLS configuration directives other than > certificates? > > More generally, is it normal you cannot overload directives such as > ssl_protocols or ssl_ciphers in a specific virtual server, using the same > socket as others? > If positive, would it be possible to use SNI to tweak TLS connections > envrionment depending on domain? You can overload ssl_ciphers. You can't overload ssl_protocols because OpenSSL works this way: it selects the protocol used before SNI callback (and this behaviour looks more or less natural beacause the existance of SNI depends on the protocol used, and, for example, you can't enable SSLv3 in a SNI-based virtual host). In general, whether or not some SSL feature can be tweaked for SNI-based virtual hosts depends on two factors: - if it's at all possible; - how OpenSSL handles it. In some cases nginx also tries to provide per-virtualhost support even for things OpenSSL doesn't handle natively, e.g., ssl_verify, ssl_verify_depth, ssl_prefer_server_ciphers. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Thu Jan 19 14:28:11 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 19 Jan 2017 15:28:11 +0100 Subject: ssl_protocols & SNI In-Reply-To: <20170119133655.GK45866@mdounin.ru> References: <20170119133655.GK45866@mdounin.ru> Message-ID: I acknowledge how that works, although OpenSSL providing more flexibility over SNI for protocols supporting it would have been appreciated. Too bad. Thanks Maxim for you always concise and straightforward discerning answers! --- *B. R.* On Thu, Jan 19, 2017 at 2:36 PM, Maxim Dounin wrote: > Hello! > > On Thu, Jan 19, 2017 at 10:04:46AM +0100, B.R. via nginx wrote: > > > Hello, > > > > I tried to overload the value of my default ssl_protocols (http block > > level) in a server block. > > It did not seem to apply the other value in this virtuel server only. > > > > Since I use SNI on my OpenSSL implementation, which perfectly works to > > support multiple virtual servers, I wonder why this SNI capability isn't > > leveraged to apply different TLS environment depending on the SNI value > and > > the TLS directives configured for the virtual server of the asked domain. > > Can SNI be used for other TLS configuration directives other than > > certificates? > > > > More generally, is it normal you cannot overload directives such as > > ssl_protocols or ssl_ciphers in a specific virtual server, using the same > > socket as others? 
> > If positive, would it be possible to use SNI to tweak TLS connections > > envrionment depending on domain? > > You can overload ssl_ciphers. You can't overload ssl_protocols > because OpenSSL works this way: it selects the protocol used > before SNI callback (and this behaviour looks more or less natural > beacause the existance of SNI depends on the protocol used, and, > for example, you can't enable SSLv3 in a SNI-based virtual host). > > In general, whether or not some SSL feature can be tweaked for > SNI-based virtual hosts depends on two factors: > > - if it's at all possible; > - how OpenSSL handles it. > > In some cases nginx also tries to provide per-virtualhost support > even for things OpenSSL doesn't handle natively, e.g., ssl_verify, > ssl_verify_depth, ssl_prefer_server_ciphers. > > -- > Maxim Dounin > http://nginx.org/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 19 14:53:46 2017 From: nginx-forum at forum.nginx.org (powellchristoph) Date: Thu, 19 Jan 2017 09:53:46 -0500 Subject: Rate limiting and SMTP proxy In-Reply-To: <20170119131237.GI45866@mdounin.ru> References: <20170119131237.GI45866@mdounin.ru> Message-ID: That makes perfect sense. Thank you for the help. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272046,272062#msg-272062 From nmilas at noa.gr Thu Jan 19 16:49:01 2017 From: nmilas at noa.gr (Nikolaos Milas) Date: Thu, 19 Jan 2017 18:49:01 +0200 Subject: How to log internal location evaluation Message-ID: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> Hello, I am running nginx 1.10.2 on CentOS 6. I am trying to configure a new (virtual) website and I am having problems. I would like to be able to log details of the evaluation of URIs in location blocks by nginx. For example, I would like to see in a log: * which location block (actually which part of the configuration) was used to evaluate a URI * if an alias is used in a location block, what was the path that was calculated using the alias * in rewrite scenarios, how was a URI rewritten The above info would help me understand where I am in error. Can I get such info by using some config settings? Please let me know! Thanks in advance, Nick From reallfqq-nginx at yahoo.fr Thu Jan 19 18:07:02 2017 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Thu, 19 Jan 2017 19:07:02 +0100 Subject: ssl_protocols & SNI In-Reply-To: References: <20170119133655.GK45866@mdounin.ru> Message-ID: There is something strange, though. I configured cipher suites with ssl_ciphers with suites from TLSv1.0 & TLSv1.2 (TLSv1.1 having no specific cipher suites but merely relying on thos from TLSv1.0). Those 3 protocols can be tested successfully when ssl_protocols is at its default value (TLSv1 TLSv1.1 TLSv1.2 since nginx v1.9.1). However, trying to remove TLSv1 (thus using TLSv1.1 TLSv1.2 for those who are following ^^), I cannot connect using neither TLSv1.0 nor TLSv1.1, only with TLSv1.2 a connection can be established. I am probably overlooking something... What is it? --- *B. R.* On Thu, Jan 19, 2017 at 3:28 PM, B.R. wrote: > I acknowledge how that works, although OpenSSL providing more flexibility > over SNI for protocols supporting it would have been appreciated. Too bad. > Thanks Maxim for you always concise and straightforward discerning answers! > --- > *B. R.* > > On Thu, Jan 19, 2017 at 2:36 PM, Maxim Dounin wrote: > >> Hello! >> >> On Thu, Jan 19, 2017 at 10:04:46AM +0100, B.R. 
via nginx wrote: >> >> > Hello, >> > >> > I tried to overload the value of my default ssl_protocols (http block >> > level) in a server block. >> > It did not seem to apply the other value in this virtuel server only. >> > >> > Since I use SNI on my OpenSSL implementation, which perfectly works to >> > support multiple virtual servers, I wonder why this SNI capability isn't >> > leveraged to apply different TLS environment depending on the SNI value >> and >> > the TLS directives configured for the virtual server of the asked >> domain. >> > Can SNI be used for other TLS configuration directives other than >> > certificates? >> > >> > More generally, is it normal you cannot overload directives such as >> > ssl_protocols or ssl_ciphers in a specific virtual server, using the >> same >> > socket as others? >> > If positive, would it be possible to use SNI to tweak TLS connections >> > envrionment depending on domain? >> >> You can overload ssl_ciphers. You can't overload ssl_protocols >> because OpenSSL works this way: it selects the protocol used >> before SNI callback (and this behaviour looks more or less natural >> beacause the existance of SNI depends on the protocol used, and, >> for example, you can't enable SSLv3 in a SNI-based virtual host). >> >> In general, whether or not some SSL feature can be tweaked for >> SNI-based virtual hosts depends on two factors: >> >> - if it's at all possible; >> - how OpenSSL handles it. >> >> In some cases nginx also tries to provide per-virtualhost support >> even for things OpenSSL doesn't handle natively, e.g., ssl_verify, >> ssl_verify_depth, ssl_prefer_server_ciphers. >> >> -- >> Maxim Dounin >> http://nginx.org/ >> > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Thu Jan 19 20:27:49 2017 From: peter_booth at me.com (Peter Booth) Date: Thu, 19 Jan 2017 15:27:49 -0500 Subject: How to log internal location evaluation In-Reply-To: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> References: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> Message-ID: You can get all that and a lot, lot more if you build a debug enabled version of nginx Sent from my iPhone > On Jan 19, 2017, at 11:49 AM, Nikolaos Milas wrote: > > Hello, > > I am running nginx 1.10.2 on CentOS 6. > > I am trying to configure a new (virtual) website and I am having problems. I would like to be able to log details of the evaluation of URIs in location blocks by nginx. > > For example, I would like to see in a log: > > * which location block (actually which part of the configuration) was > used to evaluate a URI > * if an alias is used in a location block, what was the path that was > calculated using the alias > * in rewrite scenarios, how was a URI rewritten > > The above info would help me understand where I am in error. > > Can I get such info by using some config settings? Please let me know! > > Thanks in advance, > Nick > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From tjlp at sina.com Fri Jan 20 01:00:57 2017 From: tjlp at sina.com (tjlp at sina.com) Date: Fri, 20 Jan 2017 09:00:57 +0800 Subject: =?UTF-8?Q?=E5=9B=9E=E5=A4=8D=EF=BC=9ARe=3A_why_=22connection=3A_close=22_h?= =?UTF-8?Q?eader_is_added_when_the_request_is_passed_to_upstream_server=3F?= Message-ID: <20170120010057.A06D710200E3@webmail.sinamail.sina.com.cn> Hi, Maxim, You are right. Connection is hop-by-hop header. 
At present I add the line below into nginx.conf: proxy-set-headers Connection $http_connection That solve my issue. Thanks Liu Peng ----- ???? ----- ????Maxim Dounin ????nginx at nginx.org ???Re: why "connection: close" header is added when the request is passed to upstream server? ???2017?01?19? 21?23? Hello! On Thu, Jan 19, 2017 at 09:02:34AM +0800, tjlp at sina.com wrote: > I use Nginx in the Kubernetes. With the upstream server log, I > find that the header "connection: close" is added when the > request is passed to upstream server. Why? Because the connection between nginx and the upstream server is a separate connection, and by default nginx isn't going to keep it alive. Keepalive connections to upstream servers can be configured as documented here: http://nginx.org/r/keepalive > What I hope is the original header relating to connection status > should be passed to upstream server without any change. That > means: if the original request header has no connection header, > "connection" header should not be added. The connection between nginx and the backend server is a completely separate connection, and it is expected to have it's own hop-by-hop headers, see RFC 2616 here: https://tools.ietf.org/html/rfc2616#section-13.5.1 -- Maxim Dounin http://nginx.org/ _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From brunner at novatrend.ch Fri Jan 20 07:09:35 2017 From: brunner at novatrend.ch (Michael Brunner) Date: Fri, 20 Jan 2017 08:09:35 +0100 (CET) Subject: Real IP in header for SMTP Nginx Mail Proxy Message-ID: <841844515.26921.1484896175273@ox1.tophost.ch> Hi I used the instruction below to build a mail proxy server with nginx: https://www.nginx.com/resources/admin-guide/mail-proxy/ I configured it for IMAP, POP3 and SMTP. It's working quite well but I don't see the real IP of the sender in the mail header. When a user is sending a mail via SMTP, I see only the following in the mail header: X-Sender-Id: _forwarded-from|46.xxx.xx.xx For me it's important to see the real IP of the sender in the mail header, otherwise our outgoing spam filter will not like this solution. How can I add the real IP of the sender to the mail header? Regards Michael From nmilas at noa.gr Fri Jan 20 09:16:48 2017 From: nmilas at noa.gr (Nikolaos Milas) Date: Fri, 20 Jan 2017 11:16:48 +0200 Subject: How to log internal location evaluation In-Reply-To: References: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> Message-ID: On 19/1/2017 10:27 ??, Peter Booth wrote: > You can get all that and a lot, lot more if you build a debug enabled version of nginx Thank you Peter, As I am on CentOS 6 and I am using the nginx repo, I have installed: nginx-1.10.2-1.el6.ngx.x86_64 nginx-debuginfo-1.10.2-1.el6.ngx.x86_64 I guess I should be OK with these? Please, let me know how I can get the details I need. Which commands and/or configuration do I need? Thanks, Nick From nmilas at noa.gr Fri Jan 20 10:15:30 2017 From: nmilas at noa.gr (Nikolaos Milas) Date: Fri, 20 Jan 2017 12:15:30 +0200 Subject: Questions about root and alias directives Message-ID: <9b9fab26-b57c-312e-b853-79b11fe9b866@noa.gr> Hello, I need a config which includes multiple different physical paths. So I have: server { listen [::]:80; ... root /var/webs/wwwmain/www/; index index.php index.html index.htm; ... 
location / { try_files $uri $uri/ /index.php?$args; } location /museum/ { root /var/webs/wwwmuseum/; } ... } Now, when I request "http://www.example.com/museum/", the above config produces a request for the following path: /var/webs/wwwmuseum/museum/index.php I think this is the expected result, according to the documentation. If I change the last part to use an alias directive: location /museum/ { alias /var/webs/wwwmuseum/; } then the evaluated path becomes: /var/webs/wwwmain/www/museum/index.html The alias directive does not seem to have any effect. (Why is that so?) So, in both cases, I cannot achieve the *desired* path which is: /var/webs/wwwmuseum/index.html How should I do it? Thanks in advance, Nick From mdounin at mdounin.ru Fri Jan 20 10:59:50 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 20 Jan 2017 13:59:50 +0300 Subject: Real IP in header for SMTP Nginx Mail Proxy In-Reply-To: <841844515.26921.1484896175273@ox1.tophost.ch> References: <841844515.26921.1484896175273@ox1.tophost.ch> Message-ID: <20170120105950.GA18587@mdounin.ru> Hello! On Fri, Jan 20, 2017 at 08:09:35AM +0100, Michael Brunner wrote: > > Hi > > I used the instruction below to build a mail proxy server with nginx: > > https://www.nginx.com/resources/admin-guide/mail-proxy/ > > I configured it for IMAP, POP3 and SMTP. > > It's working quite well but I don't see the real IP of the sender in the mail header. When a user is sending a mail via SMTP, I see only the following in the mail header: > > X-Sender-Id: _forwarded-from|46.xxx.xx.xx > > For me it's important to see the real IP of the sender in the mail header, otherwise our outgoing spam filter will not like this solution. > > How can I add the real IP of the sender to the mail header? You can pass the client IP address to the SMTP backend using the "xclient" directive, see http://nginx.org/r/xclient. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri Jan 20 12:44:42 2017 From: nginx-forum at forum.nginx.org (coolmike) Date: Fri, 20 Jan 2017 07:44:42 -0500 Subject: Real IP in header for SMTP Nginx Mail Proxy In-Reply-To: <20170120105950.GA18587@mdounin.ru> References: <20170120105950.GA18587@mdounin.ru> Message-ID: <6e664412c0c49ca285ee05b91be69757.NginxMailingListEnglish@forum.nginx.org> Hi Thanks a lot for your answer. This sounds great, but when I enable it, I just get the following at my exim mailserver log: 2017-01-20 13:42:00 SMTP connection from [46.xx.xx.xx]:54087 (TCP/IP connection count = 1) 2017-01-20 13:42:00 SMTP connection from (mailproxy.xxxx.com) [46.xxx.xx.xx]:54087 lost I don't know why the connection is lost when I enable xclient. Do I need a special configuration for my exim mailserver when I use xclient? Regards Michael Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272068,272074#msg-272074 From vbart at nginx.com Fri Jan 20 16:30:05 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 20 Jan 2017 19:30:05 +0300 Subject: Questions about root and alias directives In-Reply-To: <9b9fab26-b57c-312e-b853-79b11fe9b866@noa.gr> References: <9b9fab26-b57c-312e-b853-79b11fe9b866@noa.gr> Message-ID: <1710201.M6ig41qEBa@vbart-workstation> On Friday 20 January 2017 12:15:30 Nikolaos Milas wrote: > Hello, > > I need a config which includes multiple different physical paths. > > So I have: > > server { > > listen [::]:80; > ... > root /var/webs/wwwmain/www/; > > index index.php index.html index.htm; > > ... 
> > location / { > try_files $uri $uri/ /index.php?$args; > } > > location /museum/ { > root /var/webs/wwwmuseum/; > } > > ... > > } > > Now, when I request "http://www.example.com/museum/", the above config > produces a request for the following path: > > /var/webs/wwwmuseum/museum/index.php > > I think this is the expected result, according to the documentation. > > If I change the last part to use an alias directive: > > location /museum/ { > alias /var/webs/wwwmuseum/; > } > > then the evaluated path becomes: > > /var/webs/wwwmain/www/museum/index.html > > The alias directive does not seem to have any effect. (Why is that so?) > [..] The answer to your question is likely in the parts of the configuration that you've skipped. Please note, that the "index" directive produces internal redirects. In the second case, it looks like your request is redirected to "/museum/index.html" and then handled by some other location. wbr, Valentin V. Bartenev From nmilas at noa.gr Fri Jan 20 21:07:31 2017 From: nmilas at noa.gr (Nikolaos Milas) Date: Fri, 20 Jan 2017 23:07:31 +0200 Subject: How to log internal location evaluation In-Reply-To: References: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> Message-ID: <6355f2b1-e67a-2410-28a9-4364f9115a88@noa.gr> On 20/1/2017 11:16 ??, Nikolaos Milas wrote: > As I am on CentOS 6 and I am using the nginx repo, I have installed: > > nginx-1.10.2-1.el6.ngx.x86_64 > nginx-debuginfo-1.10.2-1.el6.ngx.x86_64 > > I guess I should be OK with these? It seems I am not. :-( I tried "nginx -V" and I didn't see the required option (--with-debug). Can I find in nginx repo(s) any compiled (RPM) versions of nginx --with-debug enabled for CentOS 7? It would make an admn's life a lot easier, at this very busy point in life... Please advise! Thanks a lot, Nick From nginx-forum at forum.nginx.org Fri Jan 20 21:33:59 2017 From: nginx-forum at forum.nginx.org (spockdude) Date: Fri, 20 Jan 2017 16:33:59 -0500 Subject: Can Nginx route SMTP based on login credentials? Message-ID: I have several users using the same mail host (smtp.example.com) to send outbound email using authentication. I would like to split it up into several outbound servers and assign specific users to specific mail servers without having to change the mail host information in each user's email client. Can Nginx do this or does it always just spread that traffic between all backend servers? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272088,272088#msg-272088 From peter_booth at me.com Fri Jan 20 21:41:37 2017 From: peter_booth at me.com (Peter Booth) Date: Fri, 20 Jan 2017 16:41:37 -0500 Subject: How to log internal location evaluation In-Reply-To: <6355f2b1-e67a-2410-28a9-4364f9115a88@noa.gr> References: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> <6355f2b1-e67a-2410-28a9-4364f9115a88@noa.gr> Message-ID: I've always had to configure and build debug versions myself - and usually I want them to coexist in parallel with an existing production nginx install. But this link suggests otherwise: http://nginx.org/en/docs/debugging_log.html You'll be overwhelmed by the volume of output. It gave me a real appreciation for the subtlety, power, and necessary complexity of nginx and the technical skills of the development team. I would also strongly recommend the openresty bundle of nginx, which includes many powerful modules that turn nginx into a Swiss Army knife of http. 
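Once you have a --with-debug binary, a minimal setup looks like this
(the client address below is an example; rewrite_log covers the
URI-rewriting part of your question and logs at the "notice" level):

    # either turn on the debug output globally...
    error_log /var/log/nginx/debug.log debug;

    events {
        # ...or restrict the very verbose output to a single test client
        debug_connection 192.0.2.10;
    }

    http {
        rewrite_log on;
        ...
    }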
Sent from my iPhone

> On Jan 20, 2017, at 4:07 PM, Nikolaos Milas wrote:
>
>> On 20/1/2017 11:16, Nikolaos Milas wrote:
>>
>> As I am on CentOS 6 and I am using the nginx repo, I have installed:
>>
>> nginx-1.10.2-1.el6.ngx.x86_64
>> nginx-debuginfo-1.10.2-1.el6.ngx.x86_64
>>
>> I guess I should be OK with these?
>
> It seems I am not. :-(
>
> I tried "nginx -V" and I didn't see the required option (--with-debug).
>
> Can I find in nginx repo(s) any compiled (RPM) versions of nginx --with-debug enabled for CentOS 7?
>
> It would make an admin's life a lot easier, at this very busy point in life...
>
> Please advise!
>
> Thanks a lot,
> Nick
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nmilas at noa.gr Fri Jan 20 22:37:36 2017
From: nmilas at noa.gr (Nikolaos Milas)
Date: Sat, 21 Jan 2017 00:37:36 +0200
Subject: How to log internal location evaluation
In-Reply-To:
References: <898f4bf8-f157-a5e4-8977-30f75293ead5@noa.gr> <6355f2b1-e67a-2410-28a9-4364f9115a88@noa.gr>
Message-ID: <12c1b94e-132c-f9e6-9952-14ecf6a70910@noa.gr>

On 20/1/2017 11:41, Peter Booth wrote:

> But this link suggests otherwise: http://nginx.org/en/docs/debugging_log.html

Wow! Didn't know about that! Indeed, my installation includes nginx-debug!!

I've tried it and it works fine! My digging (in debugging) starts now....

Wish me a good understanding...

Thank you very much!
Nick

From nginx-forum at forum.nginx.org Sun Jan 22 10:34:06 2017
From: nginx-forum at forum.nginx.org (xstation)
Date: Sun, 22 Jan 2017 05:34:06 -0500
Subject: internal error 500
Message-ID: <70e3748452cc4a79a40bbc9f5d52cbe3.NginxMailingListEnglish@forum.nginx.org>

I'm aware of the locations of index HTML files on my server, but the problem is where Nginx looks by default on Debian 8. The thing is, I have something like this:

I create this link as root:

ln -s /home/echolot/echolot/results /var/www/echolot

and this link as the echolot user:

ln -sf echolot.html index.html

Now the problem is that when I go to mydomain/echolot, I get an internal error page.

What am I doing wrong, and how can I resolve it?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272101,272101#msg-272101

From nginx-forum at forum.nginx.org Sun Jan 22 21:36:57 2017
From: nginx-forum at forum.nginx.org (Hronom)
Date: Sun, 22 Jan 2017 16:36:57 -0500
Subject: Nginx reverse proxy crash when dns unavailable
In-Reply-To:
References: <932ea6c90910221835s133b7c97nce9c9d488ff364d7@mail.gmail.com>
Message-ID: <18edc31dbca7d6b2467e8d2662bf622e.NginxMailingListEnglish@forum.nginx.org>

Any progress on this?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,15995,272106#msg-272106

From nginx-forum at forum.nginx.org Mon Jan 23 09:15:05 2017
From: nginx-forum at forum.nginx.org (tokers)
Date: Mon, 23 Jan 2017 04:15:05 -0500
Subject: Beginner question: Nginx request_uri meaning?
In-Reply-To:
References:
Message-ID:

$request_uri is a built-in variable of Nginx; it holds the raw URI from the HTTP request line, without any processing.

For instance, if the HTTP request line is "GET /path/to//a.jpg HTTP/1.1", then $request_uri is "/path/to//a.jpg", but $uri will be "/path/to/a.jpg" if merge_slashes is enabled.

If the HTTP request line is "GET /path%2Fto/a.jpg HTTP/1.1", then $request_uri is "/path%2Fto/a.jpg", but $uri will be "/path/to/a.jpg".
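A quick way to see the two variables side by side is to log them both. A minimal sketch (the port and log path are placeholders):

    # in the http{} context
    log_format uri_demo '$request_uri -> $uri';

    server {
        listen 8080;
        access_log /var/log/nginx/uri_demo.log uri_demo;
        return 200 "ok";
    }

A request for /path/to//a.jpg would then be logged as "/path/to//a.jpg -> /path/to/a.jpg".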
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271937,272108#msg-272108

From reallfqq-nginx at yahoo.fr Mon Jan 23 19:50:35 2017
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 23 Jan 2017 20:50:35 +0100
Subject: ssl_protocols & SNI
In-Reply-To:
References: <20170119133655.GK45866@mdounin.ru>
Message-ID:

Any help?
---
*B. R.*

On Thu, Jan 19, 2017 at 7:07 PM, B.R. wrote:

> There is something strange, though.
>
> I configured cipher suites with ssl_ciphers with suites from TLSv1.0 &
> TLSv1.2 (TLSv1.1 having no specific cipher suites but merely relying on
> those from TLSv1.0).
> Those 3 protocols can be tested successfully when ssl_protocols is at its
> default value (TLSv1 TLSv1.1 TLSv1.2 since nginx v1.9.1).
> However, trying to remove TLSv1 (thus using TLSv1.1 TLSv1.2 for those who
> are following ^^), I cannot connect using either TLSv1.0 or TLSv1.1; only
> with TLSv1.2 can a connection be established.
>
> I am probably overlooking something... What is it?
> ---
> *B. R.*
>
> On Thu, Jan 19, 2017 at 3:28 PM, B.R. wrote:
>
>> I acknowledge how that works, although OpenSSL providing more flexibility
>> over SNI for protocols supporting it would have been appreciated. Too bad.
>> Thanks Maxim for your always concise, straightforward and discerning
>> answers!
>> ---
>> *B. R.*
>>
>> On Thu, Jan 19, 2017 at 2:36 PM, Maxim Dounin wrote:
>>
>>> Hello!
>>>
>>> On Thu, Jan 19, 2017 at 10:04:46AM +0100, B.R. via nginx wrote:
>>>
>>> > Hello,
>>> >
>>> > I tried to overload the value of my default ssl_protocols (http block
>>> > level) in a server block.
>>> > It did not seem to apply the other value in this virtual server only.
>>> >
>>> > Since I use SNI on my OpenSSL implementation, which perfectly works to
>>> > support multiple virtual servers, I wonder why this SNI capability isn't
>>> > leveraged to apply a different TLS environment depending on the SNI value
>>> > and the TLS directives configured for the virtual server of the asked domain.
>>> > Can SNI be used for other TLS configuration directives other than
>>> > certificates?
>>> >
>>> > More generally, is it normal you cannot overload directives such as
>>> > ssl_protocols or ssl_ciphers in a specific virtual server, using the same
>>> > socket as others?
>>> > If so, would it be possible to use SNI to tweak the TLS connection
>>> > environment depending on domain?
>>>
>>> You can overload ssl_ciphers. You can't overload ssl_protocols
>>> because OpenSSL works this way: it selects the protocol used
>>> before the SNI callback (and this behaviour looks more or less natural
>>> because the existence of SNI depends on the protocol used, and,
>>> for example, you can't enable SSLv3 in a SNI-based virtual host).
>>>
>>> In general, whether or not some SSL feature can be tweaked for
>>> SNI-based virtual hosts depends on two factors:
>>>
>>> - if it's at all possible;
>>> - how OpenSSL handles it.
>>>
>>> In some cases nginx also tries to provide per-virtualhost support
>>> even for things OpenSSL doesn't handle natively, e.g., ssl_verify,
>>> ssl_verify_depth, ssl_prefer_server_ciphers.
>>>
>>> --
>>> Maxim Dounin
>>> http://nginx.org/
>>>
>>
>

From ryan at bbnx.net Tue Jan 24 02:14:37 2017
From: ryan at bbnx.net (Ryan A. Krenzischek)
Date: Mon, 23 Jan 2017 21:14:37 -0500
Subject: nginx.org IPv6 Server 2001:1af8:4060:a004:21::e3 returning empty response
Message-ID:

Sirs:

I'm not sure who is responsible for the web server on 2001:1af8:4060:a004:21::e3, but it's not serving up HTTP properly for nginx.org over IPv6. The server on 2606:7100:1:69::3f is working just fine, but for some reason Chrome is preferring the 2001:1af8:4060:a004:21::e3 address first. If this email needs to be addressed elsewhere, please let me know.

> telnet 2001:1af8:4060:a004:21::e3 80
Trying 2001:1af8:4060:a004:21::e3...
Connected to 2001:1af8:4060:a004:21::e3.
Escape character is '^]'.
GET / HTTP/1.1
Host: nginx.org
Connection: Close

Connection closed by foreign host.

Whereas:

> telnet 2606:7100:1:69::3f 80
Trying 2606:7100:1:69::3f...
Connected to 2606:7100:1:69::3f.
Escape character is '^]'.
GET / HTTP/1.1
Host: nginx.org
Connection: Close

HTTP/1.1 200 OK
Server: nginx/1.11.7
Date: Tue, 24 Jan 2017 02:12:45 GMT
Content-Type: text/html; charset=utf-8
Content-Length: 8101
Last-Modified: Thu, 12 Jan 2017 18:52:13 GMT
Connection: close
ETag: "5877d05d-1fa5"
Accept-Ranges: bytes
[..cut..]

Thanks,

Ryan

From nginx-forum at forum.nginx.org Tue Jan 24 08:05:19 2017
From: nginx-forum at forum.nginx.org (forumacct)
Date: Tue, 24 Jan 2017 03:05:19 -0500
Subject: Proper coding of location for php and proxy_pass
Message-ID: <45c639683cdfd55e8b1d6454480cb54d.NginxMailingListEnglish@forum.nginx.org>

Hello All,

What is the correct coding of the location section(s) for two Raspberry Pis running nginx?

RPI1 serves the main webpage and is also going to host ownCloud (which uses https and php).
RPI2 is a weather station using plain http, html and some php.

To catch requests for RPI2 I can use a text pattern in the URL: 'rpi'.

Now my problem is that RPI2 has some php, and RPI1 with ownCloud has php too. I have tried two location segments, but it seems the php location always wins. What is the correct way to code this?

Here's a trimmed-down version of the default config that shows the problem. For now I have commented out the php. This way ownCloud is disabled, but all URLs with 'rpi' (including those with php) are sent to RPI2.

server {
    listen 8000; ## listen for ipv4; this line is default and implied
    #listen [::]:80 default_server ipv6only=on; ## listen for ipv6

    root /media/usbstick/nginx/www;
    index index.php index.html index.htm;

    # Make site accessible from http://localhost/
    server_name rpi3;

    location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        try_files $uri $uri/ /index.html;
        # Uncomment to enable naxsi on this location
        # include /etc/nginx/naxsi.rules
        ssi on;
    }

    # Pass /rpi weather station requests to RPI2
    location /rpi {
        proxy_pass http://192.168.11.170:80;
    }

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #fastcgi_split_path_info ^(.+\.php)(/.+)$;
    ##fastcgi_pass unix:/var/run/php5-fpm.sock;
    #fastcgi_pass 127.0.0.1:9000;
    #fastcgi_index index.php;
    #include fastcgi_params;
    #}
}

Here is a version of the full config, which does not work for the weather station.
upstream php-handler {
    server 127.0.0.1:9000;
    #server unix:/var/run/php5-fpm.sock;
}

server {
    listen 8000;
    #server_name localhost;
    server_name xxxxxxx.dyndns.ws;
    return 301 https://$server_name$request_uri; # enforce https
}

server {
    listen 443 ssl;
    #server_name localhost;
    server_name xxxxxxx.dyndns.ws;

    add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
    ssl_certificate /etc/letsencrypt/live/xxxxxxx.dyndns.ws/cert.pem;
    ssl_certificate_key /etc/letsencrypt/live/xxxxxxx.dyndns.ws/privkey.pem;

    # Path to the root of your installation
    root /media/usbstick/nginx/www/owncloud;

    client_max_body_size 2000M; # set max upload size
    fastcgi_buffers 64 4K;

    rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
    rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
    rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

    index index.php;
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(?:\.htaccess|data|config|db_structure\.xml|README) {
        deny all;
    }

    location / {
        # The following 2 rules are only needed with webfinger
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;
        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;
        rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;
        try_files $uri $uri/ index.php;
    }

    location ~ \.php(?:$|/) {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
        fastcgi_param HTTPS on;
        fastcgi_pass php-handler;
    }

    # Optional: set long EXPIRES header on static assets
    location ~* \.(?:jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        # Optional: Don't log access to assets
        access_log off;
    }

    # Pass /rpi weather station requests to RPI2
    location /rpi {
        proxy_pass http://192.168.11.170:80;
    }
}

Thanks for helping,
Gert

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272114,272114#msg-272114

From mostolog at gmail.com Tue Jan 24 09:20:37 2017
From: mostolog at gmail.com (mostolog at gmail.com)
Date: Tue, 24 Jan 2017 10:20:37 +0100
Subject: Using variables on configuration (map?) for regex
In-Reply-To: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com>
References: <0cf784ba-b922-97b5-ba17-063e1ebf5014@gmail.com>
Message-ID: <4580981c-ba7a-ffde-40e3-5eb0a8d08984@gmail.com>

Hi,

Months ago I was trying to set up an nginx-CAS environment and found this issue (I'll explain below). Now I have hit it again, but this time it seems not so easy to work around.

Scenario: a client connects to Apache, which forwards to an Apereo CAS server and authenticates the user. Once authenticated, Apache reverse-proxies to NGINX with an HTTP header in the request which contains the list of groups the user is a member of.

To sum up: nginx knows the user, has a comma-separated list of groups, and the location the user requested to browse.

Back then, I had ~200 group/URL pairs I wanted to protect, and tried:

map $request_method:$http_groups:$request_uri $denied {
    default 1;
    ~^GET:group$group:/$group 0;
}

Sadly, map does not expand variables on the left side of the statement, so I couldn't do that and ended up doing:

map $request_method:$http_groups:$request_uri $denied {
    default 1;
    ~^GET:group1:/group1 0;
    ~^GET:group2:/group2 0;
    ... 200 lines ...
}

As previously said, today I'm having the same issue, but this time the predefined group list is not known. Actually, a user creates a "chat room" and only users from a specified group list can join.

As I could send this "new list" as a header to nginx: is it possible to compare two nginx variables to check that "$a does not contain $b"?

Currently I'm using regex backreferences to solve it, e.g.: $tmp="$var1:$var2" and $tmp ~ "(.*):\1"

Regards.

From nginx-forum at forum.nginx.org Tue Jan 24 10:31:55 2017
From: nginx-forum at forum.nginx.org (iHoody)
Date: Tue, 24 Jan 2017 05:31:55 -0500
Subject: Files over ~5.5MB not uploaded only over SSL
Message-ID: <8fb285a3feed90b3e46ecccd92264811.NginxMailingListEnglish@forum.nginx.org>

Hello,

I had a server set up on http; however, I wanted to secure it, so I changed this to https and enabled ssl on port 443. However, this has caused an issue whereby I can no longer upload files larger than ~5.5MB if SSL is enabled.

If I remove nginx and let Sails do the upload over SSL, it works fine. If I enable nginx but disable SSL, the upload works fine.

I have used the directive client_max_body_size 0; throughout http, server, and location, however this has made no impact. Are there any other things that I should check?

Links to my nginx.conf and sites-available\default file below:

http://www.filedropper.com/default_11
http://www.filedropper.com/nginx_1

Any help would be much appreciated. Thanks in advance.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272118,272118#msg-272118

From sb at nginx.com Tue Jan 24 11:02:41 2017
From: sb at nginx.com (Sergey Budnevitch)
Date: Tue, 24 Jan 2017 14:02:41 +0300
Subject: nginx.org IPv6 Server 2001:1af8:4060:a004:21::e3 returning empty response
In-Reply-To:
References:
Message-ID: <009A4169-2DAC-4E57-B6C1-FE2CF2174A6C@nginx.com>

> On 24 Jan 2017, at 05:14, Ryan A. Krenzischek wrote:
>
> Sirs:
>
> I'm not sure who is responsible for the web server on 2001:1af8:4060:a004:21::e3, but it's not serving up HTTP properly for nginx.org over IPv6. The server on 2606:7100:1:69::3f is working just fine, but for some reason Chrome is preferring the 2001:1af8:4060:a004:21::e3 address first. If this email needs to be addressed elsewhere, please let me know.

Well, I checked from two different locations and it works, and I see ipv6 traffic on this server. Could you please send me off list:

a) your ipv6 src ip address, so I can find it in logs
b) traceroute6 2001:1af8:4060:a004:21::e3 and traceroute6 2606:7100:1:69::3f

>
> > telnet 2001:1af8:4060:a004:21::e3 80
> Trying 2001:1af8:4060:a004:21::e3...
> Connected to 2001:1af8:4060:a004:21::e3.
> Escape character is '^]'.
> GET / HTTP/1.1
> Host: nginx.org
> Connection: Close
>
> Connection closed by foreign host.
>
> Whereas:
>
> > telnet 2606:7100:1:69::3f 80
> Trying 2606:7100:1:69::3f...
> Connected to 2606:7100:1:69::3f.
> Escape character is '^]'.
> GET / HTTP/1.1
> Host: nginx.org
> Connection: Close
>
> HTTP/1.1 200 OK
> Server: nginx/1.11.7
> Date: Tue, 24 Jan 2017 02:12:45 GMT
> Content-Type: text/html; charset=utf-8
> Content-Length: 8101
> Last-Modified: Thu, 12 Jan 2017 18:52:13 GMT
> Connection: close
> ETag: "5877d05d-1fa5"
> Accept-Ranges: bytes
> [..cut..]
>
> Thanks,
>
> Ryan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From mdounin at mdounin.ru Tue Jan 24 14:21:02 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 24 Jan 2017 17:21:02 +0300
Subject: nginx-1.11.9
Message-ID: <20170124142102.GC24349@mdounin.ru>

Changes with nginx 1.11.9                                        24 Jan 2017

    *) Bugfix: nginx might hog CPU when using the stream module; the bug
       had appeared in 1.11.5.

    *) Bugfix: EXTERNAL authentication mechanism in mail proxy was accepted
       even if it was not enabled in the configuration.

    *) Bugfix: a segmentation fault might occur in a worker process if the
       "ssl_verify_client" directive of the stream module was used.

    *) Bugfix: the "ssl_verify_client" directive of the stream module might
       not work.

    *) Bugfix: closing keepalive connections due to no free worker
       connections might be too aggressive.
       Thanks to Joel Cunningham.

    *) Bugfix: an incorrect response might be returned when using the
       "sendfile" directive on FreeBSD and macOS; the bug had appeared in
       1.7.8.

    *) Bugfix: a truncated response might be stored in cache when using the
       "aio_write" directive.

    *) Bugfix: a socket leak might occur when using the "aio_write"
       directive.

-- 
Maxim Dounin
http://nginx.org/

From thangamani.rect at gmail.com Wed Jan 25 00:01:18 2017
From: thangamani.rect at gmail.com (Thangamani J)
Date: Tue, 24 Jan 2017 16:01:18 -0800
Subject: Rate limiting by percentage
Message-ID:

Hi team,

I'm in the process of implementing the following use case; please help me with your inputs.

Let's say I have an endpoint /rate-limit:
75% of the time, it should return "SUCCESS" as text;
25% of the time, it should return "FAILED" as text.

I did try limit_req_zone, which goes by requests per second and, when the limit is exceeded, responds with status code 503. But in this case I want an actual response, and the FAILED/SUCCESS percentage to be configurable.

Any pointers for the above use case would be helpful.

Thanks,
Thanga.

From nginx-forum at forum.nginx.org Wed Jan 25 07:05:32 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Wed, 25 Jan 2017 02:05:32 -0500
Subject: nginx cache mounted on tmpfs getting full
In-Reply-To:
References:
Message-ID: <657889ac1444c370f24523f7505996d5.NginxMailingListEnglish@forum.nginx.org>

Hi,

We have used tmpfs to mount the cache of a frequently used media application. We have also mounted the caches of the remaining applications on disk, where we are facing similar issues of max_size being breached. Please find the details below for your reference:

# du -sh /cache/*
245G /cache/12007
161G /cache/12152

# grep a12007 /etc/nginx/nginx.conf
proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2 max_size=200g inactive=10d;
# grep a12152 /etc/nginx/nginx.conf
proxy_cache_path /cache/12152 keys_zone=a12152:200m levels=1:2 max_size=100g inactive=10d;

# df -Th
Filesystem                    Type      Size  Used Avail Use% Mounted on
/dev/mapper/vg_os-lv_root     ext4       99G  1.5G   92G   2% /
devtmpfs                      devtmpfs  756G     0  756G   0% /dev
tmpfs                         tmpfs     756G     0  756G   0% /dev/shm
tmpfs                         tmpfs     756G   50M  756G   1% /run
tmpfs                         tmpfs     756G     0  756G   0% /sys/fs/cgroup
/dev/mapper/vg_os-lv_usr      ext4       99G  1.4G   92G   2% /usr
/dev/sda1                     ext4      477M  106M  343M  24% /boot
/dev/mapper/vg_cache-lv_cache ext4      1.5T  1.4T     0 100% /cache

Please let us know what went wrong here and how this can be corrected.
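One mechanism that may account for part of the overshoot: max_size is not a hard cap enforced at write time; a separate cache manager process periodically removes least-recently-used entries once the limit is exceeded, so on-disk usage can exceed max_size between its iterations, especially under heavy write load. Since nginx 1.11.5 the manager's pace can be tuned with extra proxy_cache_path parameters. A sketch, with illustrative (not recommended) values:

    proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2
                     max_size=200g inactive=10d
                     manager_files=10000 manager_threshold=500ms manager_sleep=20ms;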
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,272136#msg-272136

From nginx-forum at forum.nginx.org Wed Jan 25 07:17:28 2017
From: nginx-forum at forum.nginx.org (omkar_jadhav_20)
Date: Wed, 25 Jan 2017 02:17:28 -0500
Subject: nginx cache mounted on tmpfs getting full
In-Reply-To: <657889ac1444c370f24523f7505996d5.NginxMailingListEnglish@forum.nginx.org>
References: <657889ac1444c370f24523f7505996d5.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

omkar_jadhav_20 Wrote:
-------------------------------------------------------
> Hi,
>
> We have used tmpfs to mount the cache of a frequently used media
> application. We have also mounted the caches of the remaining
> applications on disk, where we are facing similar issues of max_size
> being breached. Please find the details below for your reference:
>
> # du -sh /cache/*
> 245G /cache/12007
> 161G /cache/12152
>
> # grep a12007 /etc/nginx/nginx.conf
> proxy_cache_path /cache/12007 keys_zone=a12007:200m levels=1:2
> max_size=200g inactive=10d;
> # grep a12152 /etc/nginx/nginx.conf
> proxy_cache_path /cache/12152 keys_zone=a12152:200m levels=1:2
> max_size=100g inactive=10d;
>
> # df -Th
> Filesystem                    Type      Size  Used Avail Use% Mounted on
> /dev/mapper/vg_os-lv_root     ext4       99G  1.5G   92G   2% /
> devtmpfs                      devtmpfs  756G     0  756G   0% /dev
> tmpfs                         tmpfs     756G     0  756G   0% /dev/shm
> tmpfs                         tmpfs     756G   50M  756G   1% /run
> tmpfs                         tmpfs     756G     0  756G   0% /sys/fs/cgroup
> /dev/mapper/vg_os-lv_usr      ext4       99G  1.4G   92G   2% /usr
> /dev/sda1                     ext4      477M  106M  343M  24% /boot
> /dev/mapper/vg_cache-lv_cache ext4      1.5T  1.4T     0 100% /cache
>
> Please let us know what went wrong here and how this can be corrected.

In addition, we are getting a 90% cache hit ratio. As suggested by Steve, will just making the mounting option changes below in fstab resolve this issue, or am I missing/miscalculating anything?

nodev,noexec,nosuid,noatime,async,size=xxxM,mode=0755,uid=xx,gid=xx

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,271842,272137#msg-272137

From kworthington at gmail.com Wed Jan 25 15:25:32 2017
From: kworthington at gmail.com (Kevin Worthington)
Date: Wed, 25 Jan 2017 10:25:32 -0500
Subject: [nginx-announce] nginx-1.11.9
In-Reply-To: <20170124142110.GD24349@mdounin.ru>
References: <20170124142110.GD24349@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.11.9 for Windows https://kevinworthington.com/nginxwin1119 (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org.

Announcements are also available here:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
-- 
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Jan 24, 2017 at 9:21 AM, Maxim Dounin wrote:

> Changes with nginx 1.11.9                                        24 Jan 2017
>
> *) Bugfix: nginx might hog CPU when using the stream module; the bug
>    had appeared in 1.11.5.
>
> *) Bugfix: EXTERNAL authentication mechanism in mail proxy was accepted
>    even if it was not enabled in the configuration.
>
> *) Bugfix: a segmentation fault might occur in a worker process if the
>    "ssl_verify_client" directive of the stream module was used.
>
> *) Bugfix: the "ssl_verify_client" directive of the stream module might
>    not work.
>
> *) Bugfix: closing keepalive connections due to no free worker
>    connections might be too aggressive.
>    Thanks to Joel Cunningham.
>
> *) Bugfix: an incorrect response might be returned when using the
>    "sendfile" directive on FreeBSD and macOS; the bug had appeared in
>    1.7.8.
>
> *) Bugfix: a truncated response might be stored in cache when using the
>    "aio_write" directive.
>
> *) Bugfix: a socket leak might occur when using the "aio_write"
>    directive.
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce

From nginx-forum at forum.nginx.org Wed Jan 25 16:13:16 2017
From: nginx-forum at forum.nginx.org (spockdude)
Date: Wed, 25 Jan 2017 11:13:16 -0500
Subject: Can Nginx route SMTP based on login credentials?
In-Reply-To:
References:
Message-ID: <9a99d1326194a9faf994e383618c9940.NginxMailingListEnglish@forum.nginx.org>

spockdude Wrote:
-------------------------------------------------------
> I have several users using the same mail host (smtp.example.com) to
> send outbound email using authentication. I would like to split it up
> into several outbound servers and assign specific users to specific
> mail servers without having to change the mail host information in
> each user's email client. Can Nginx do this or does it always just
> spread that traffic between all backend servers?

It seems this list is not very active, but I'll go ahead and answer my own question. It *does* appear that the above functionality is supported according to this page:

https://www.nginx.com/resources/admin-guide/mail-proxy/

It says "If authentication is successful, the authentication server will choose an upstream server and redirect the request."

If I'm understanding this correctly, I just need to create an authentication server that instructs Nginx which upstream server to send the request to based on the user's credentials.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272088,272141#msg-272141

From francis at daoine.org Wed Jan 25 19:03:38 2017
From: francis at daoine.org (Francis Daly)
Date: Wed, 25 Jan 2017 19:03:38 +0000
Subject: Can Nginx route SMTP based on login credentials?
In-Reply-To: <9a99d1326194a9faf994e383618c9940.NginxMailingListEnglish@forum.nginx.org>
References: <9a99d1326194a9faf994e383618c9940.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170125190338.GV2958@daoine.org>

On Wed, Jan 25, 2017 at 11:13:16AM -0500, spockdude wrote:
> spockdude Wrote:

Hi there,

> > Can Nginx do this or does it always just
> > spread that traffic between all backend servers?

(There are no backend servers configured in nginx, for mail.)

> https://www.nginx.com/resources/admin-guide/mail-proxy/
>
> It says "If authentication is successful, the authentication server will
> choose an upstream server and redirect the request."
>
> If I'm understanding this correctly, I just need to create an authentication
> server that instructs Nginx which upstream server to send the request to
> based on the user's credentials.

Correct.

auth_http is documented at http://nginx.org/r/auth_http, which also shows the authentication protocol.

nginx does not care how the authentication is done, or how the ip:port to next connect to is chosen; it just connects to what it is told to by the response of the auth_http request that it makes.
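To make the shape of that exchange concrete, a minimal sketch with placeholder addresses (the Auth-* header names are the ones from the protocol description at nginx.org/r/auth_http):

    mail {
        auth_http http://127.0.0.1:9000/auth;

        server {
            listen   25;
            protocol smtp;
        }
    }

The authentication server then routes a given user by answering, for example:

    Auth-Status: OK
    Auth-Server: 192.0.2.10
    Auth-Port: 25

where Auth-Server and Auth-Port name the upstream that this particular user should be connected to.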
Cheers,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Wed Jan 25 19:09:37 2017
From: francis at daoine.org (Francis Daly)
Date: Wed, 25 Jan 2017 19:09:37 +0000
Subject: Rate limiting by percentage
In-Reply-To:
References:
Message-ID: <20170125190937.GW2958@daoine.org>

On Tue, Jan 24, 2017 at 04:01:18PM -0800, Thangamani J wrote:

Hi there,

> I'm in the process of implementing the following use case; please help me
> with your inputs.
>
> Let's say I have an endpoint /rate-limit:
> 75% of the time, it should return "SUCCESS" as text;
> 25% of the time, it should return "FAILED" as text.

That doesn't sound like anything to do with rate limiting to me.

It sounds more like load balancing, where you want one case to receive three times the load of the other.

On that basis, if it must be done purely in nginx.conf, I'd probably use "upstream" with different "server" directives with different "weight" arguments; and have the individual servers just "return 200 SUCCESS\n" or "return 200 FAILURE\n".

The main directives there are documented from http://nginx.org/r/upstream

(Although really, I probably wouldn't do it purely in nginx.conf.)

f
-- 
Francis Daly        francis at daoine.org

From iippolitov at nginx.com Thu Jan 26 07:32:18 2017
From: iippolitov at nginx.com (Igor A. Ippolitov)
Date: Thu, 26 Jan 2017 10:32:18 +0300
Subject: Rate limiting by percentage
In-Reply-To:
References:
Message-ID:

Hello,

Have a look at split_clients, which does something similar: http://nginx.org/en/docs/http/ngx_http_split_clients_module.html

You can use the resulting variable to select an upstream, a response, or whatever you need.

On 25.01.2017 03:01, Thangamani J wrote:
> Hi team,
>
> I'm in the process of implementing the following use case; please help me
> with your inputs.
>
> Let's say I have an endpoint /rate-limit:
> 75% of the time, it should return "SUCCESS" as text;
> 25% of the time, it should return "FAILED" as text.
>
> I did try limit_req_zone, which goes by requests per second and, when the
> limit is exceeded, responds with status code 503. But in this case I want
> an actual response, and the FAILED/SUCCESS percentage to be configurable.
>
> Any pointers for the above use case would be helpful.
>
> Thanks,
> Thanga.
>
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
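Putting the two suggestions together, a minimal sketch of the 75/25 split (the ports and names are placeholders):

    upstream verdicts {
        server 127.0.0.1:8081 weight=3;   # 3 of every 4 requests
        server 127.0.0.1:8082;
    }

    server { listen 8081; return 200 "SUCCESS"; }
    server { listen 8082; return 200 "FAILED"; }

    server {
        listen 80;
        location = /rate-limit {
            proxy_pass http://verdicts;
        }
    }

or, with split_clients, keyed on something that varies per request:

    split_clients "$remote_addr$request_id" $verdict {
        75% "SUCCESS";
        *   "FAILED";
    }

    server {
        listen 80;
        location = /rate-limit {
            return 200 $verdict;
        }
    }

($request_id needs nginx 1.11.0 or newer; any key that varies per request would do.)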
From nginx-forum at forum.nginx.org Thu Jan 26 07:41:29 2017
From: nginx-forum at forum.nginx.org (coolmike)
Date: Thu, 26 Jan 2017 02:41:29 -0500
Subject: Mail Proxy with xclient - connection lost
Message-ID: <9a2ab82ee38be5a10196a54401e4a02b.NginxMailingListEnglish@forum.nginx.org>

Hi

I am trying to build a mail proxy with nginx. It works quite well already, but I need the real IP addresses in the header. Therefore I tried to use xclient. But when I enable it, I just get the following in my exim mailserver log:

2017-01-20 13:42:00 SMTP connection from [46.xx.xx.xx]:54087 (TCP/IP connection count = 1)
2017-01-20 13:42:00 SMTP connection from (mailproxy.xxxx.com) [46.xxx.xx.xx]:54087 lost

I don't know why the connection is lost when I enable xclient. Do I need a special configuration for my exim mailserver when I use xclient?

Regards
Michael

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272146,272146#msg-272146

From nginx-forum at forum.nginx.org Thu Jan 26 17:45:18 2017
From: nginx-forum at forum.nginx.org (dstromberg)
Date: Thu, 26 Jan 2017 12:45:18 -0500
Subject: Don't want http 504's for idempotent requests
Message-ID: <5d4a2dc3dd0fad5518428ef7f0b9875e.NginxMailingListEnglish@forum.nginx.org>

Hi folks.

I have a REST API behind nginx. Sometimes we get back an http 504 despite our software being up, and we don't want that, at least not for idempotent requests. We don't want 504's for idempotent requests even if it means waiting a while for a response.

I'd look at the application to see what's taking too long, but it's actually happening for a very simple health check of the API sometimes - all the health check does is return a 200, nothing more.

We tried:

proxy_connect_timeout 600;
proxy_send_timeout 600;
proxy_read_timeout 600;
send_timeout 600;
proxy_next_upstream error;
http {
    keepalive_timeout 65;
    # We have 8 upstream servers defined.
}

...to no avail.

Also, I tried upgrading our nginx to 1.10.2, but that didn't appear to help - I'm still getting infrequent 504's.

I'm convinced we're getting 504's in less than 10 minutes, suggesting those 600 second timeouts aren't working as intended.

How can I eliminate http 504's for idempotent requests like GET?

Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272154,272154#msg-272154

From cj.wijtmans at gmail.com Thu Jan 26 21:05:19 2017
From: cj.wijtmans at gmail.com (Chris Wijtmans)
Date: Thu, 26 Jan 2017 22:05:19 +0100
Subject: FastCGI sent in stderr: "Primary script unknown"
Message-ID:

I wish to call a php script (one that is "gated" from the rest) when a file is not found. Meaning I don't want any PHP files executing in the public domain: no user should be able to trigger any PHP file directly, only when a requested file is not found. What am I doing wrong? I tried searching the net a bit but could not find anything useful.
root /home/blah/public;
index index.html;

location /
{
    error_page 403 404 = @http_request;
    try_files $uri $uri.html $uri/ =404;
}

location @http_request
{
    root /home/blah;
    include fastcgi.conf;
    fastcgi_pass unix:/var/run/php-fpm/blah.sock;
    fastcgi_param SCRIPT_FILENAME /home/blah/http_request.php;
    fastcgi_param SCRIPT_NAME /http_request.php;
}

2017/01/26 21:56:42 [error] 19384#19384: *1480 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: blah, server: www.blah.*, request: "GET /test HTTP/2.0", upstream: "fastcgi://unix:/var/run/php-fpm/blah.sock:", host: "www.blah.nl"

From cj.wijtmans at gmail.com Thu Jan 26 21:12:32 2017
From: cj.wijtmans at gmail.com (Chris Wijtmans)
Date: Thu, 26 Jan 2017 22:12:32 +0100
Subject: FastCGI sent in stderr: "Primary script unknown"
In-Reply-To:
References:
Message-ID:

False alarm, typical. Being stuck on something for a long time and then solving it yourself the second after posting the problem.

Live long and prosper,

Christ-Jan Wijtmans
https://github.com/cjwijtmans
http://facebook.com/cj.wijtmans
http://twitter.com/cjwijtmans

On Thu, Jan 26, 2017 at 10:05 PM, Chris Wijtmans wrote:
> I wish to call a php script (one that is "gated" from the rest) when a
> file is not found. Meaning I don't want any PHP files executing in the
> public domain: no user should be able to trigger any PHP file directly,
> only when a requested file is not found. What am I doing wrong? I tried
> searching the net a bit but could not find anything useful.
>
> root /home/blah/public;
> index index.html;
>
> location /
> {
>     error_page 403 404 = @http_request;
>     try_files $uri $uri.html $uri/ =404;
> }
>
> location @http_request
> {
>     root /home/blah;
>     include fastcgi.conf;
>     fastcgi_pass unix:/var/run/php-fpm/blah.sock;
>     fastcgi_param SCRIPT_FILENAME /home/blah/http_request.php;
>     fastcgi_param SCRIPT_NAME /http_request.php;
> }
>
> 2017/01/26 21:56:42 [error] 19384#19384: *1480 FastCGI sent in stderr:
> "Primary script unknown" while reading response header from upstream,
> client: blah, server: www.blah.*, request: "GET /test HTTP/2.0",
> upstream: "fastcgi://unix:/var/run/php-fpm/blah.sock:", host: "www.blah.nl"

From nginx-forum at forum.nginx.org Sat Jan 28 04:02:44 2017
From: nginx-forum at forum.nginx.org (meteor8488)
Date: Fri, 27 Jan 2017 23:02:44 -0500
Subject: enable reuseport then only one worker is working?
In-Reply-To: <56D6A6C5.1010306@nginx.com>
References: <56D6A6C5.1010306@nginx.com>
Message-ID:

It's been almost a year since I posted the question; is there any update yet for nginx to enable reuseport on FreeBSD?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,272170#msg-272170

From maxim at nginx.com Sat Jan 28 10:31:16 2017
From: maxim at nginx.com (Maxim Konovalov)
Date: Sat, 28 Jan 2017 13:31:16 +0300
Subject: enable reuseport then only one worker is working?
In-Reply-To:
References: <56D6A6C5.1010306@nginx.com>
Message-ID: <012ea5fb-8aaf-bbc8-2431-22260292c1a8@nginx.com>

You should approach the FreeBSD folks. It still doesn't offer this functionality.

On 1/28/17 7:02 AM, meteor8488 wrote:
> It's been almost a year since I posted the question; is there any update
> yet for nginx to enable reuseport on FreeBSD?
>
> Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,272170#msg-272170
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,264913,272170#msg-272170 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From nginx-forum at forum.nginx.org Sun Jan 29 04:21:19 2017 From: nginx-forum at forum.nginx.org (blason) Date: Sat, 28 Jan 2017 23:21:19 -0500 Subject: Nginx reverse proxy issue -- Plz help Message-ID: <5c9df99764e8a72fe9a5898dd3fd6f17.NginxMailingListEnglish@forum.nginx.org> Hello Guys, I have nginx running as a reverse proxy and this is been running find for other 10 sites however this one site is causing me an issue. I have URL like this. http://abc.xyz.com/EasyPAY/view/LoginMain.aspx And here is the my directive ########### server { listen 80 ; server_name abc.xyz.com ; # index LoginMain.aspx; access_log /var/log/nginx/abc/access.log; error_log /var/log/nginx/abc/error.log; location / { client_max_body_size 10m; client_body_buffer_size 128k; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 128k; proxy_buffers 4 256k; proxy_busy_buffers_size 256k; proxy_temp_file_write_size 256k; proxy_connect_timeout 30s; proxy_pass http://abc.xyz.com/EasyPAY/view/LoginMain.aspx; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; } } But from the log it seems there are other directives as well which are not loading properly. Can some one pls help here? ************************************* Access.log file xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET / HTTP/1.1" 200 3747 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/System.js?11 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /App_Themes/TemplateMonster/Custom/TabStrip.Custom.css HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /App_Themes/TemplateMonster/Master.css?12 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /App_Themes/TemplateMonster/TemplateMonster.css?16 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/jquery.js?11 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/jqHelper.js?14 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/PageLoader.js?4 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /App_Themes/TemplateMonster/Custom/TabStrip.Custom.css HTTP/1.1" 404 1245 
"http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /App_Themes/TemplateMonster/Master.css?12 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/img/signInButton.png HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/img/signInButton2.png HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:33 +0530] "GET /EasyPAY/view/img/top.png HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:33 +0530] "GET /EasyPAY/view/img/signInButton.png HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:33 +0530] "GET /EasyPAY/view/img/signInButton2.png HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:33 +0530] "GET /EasyPAY/view/img/LoginHeadBg.png HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" xx.xx.xx.xx - - [29/Jan/2017:09:48:33 +0530] "GET /favicon.ico?2 HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 Safari/537.36" Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272174,272174#msg-272174 From francis at daoine.org Sun Jan 29 18:32:46 2017 From: francis at daoine.org (Francis Daly) Date: Sun, 29 Jan 2017 18:32:46 +0000 Subject: Nginx reverse proxy issue -- Plz help In-Reply-To: <5c9df99764e8a72fe9a5898dd3fd6f17.NginxMailingListEnglish@forum.nginx.org> References: <5c9df99764e8a72fe9a5898dd3fd6f17.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170129183246.GX2958@daoine.org> On Sat, Jan 28, 2017 at 11:21:19PM -0500, blason wrote: Hi there, > I have nginx running as a reverse proxy and this is been running find for > other 10 sites however this one site is causing me an issue. What is different about the configuration of this site, compared to the other 10? That might hint at where to look. > http://abc.xyz.com/EasyPAY/view/LoginMain.aspx My guess is that where you currently have > location / { ... > proxy_pass http://abc.xyz.com/EasyPAY/view/LoginMain.aspx; > } you possibly want instead location / { ... proxy_pass http://abc.xyz.com; } with maybe an extra location = / { return 301 /EasyPAY/view/LoginMain.aspx; } > xx.xx.xx.xx - - [29/Jan/2017:09:48:32 +0530] "GET /EasyPAY/view/System.js?11 > HTTP/1.1" 404 1245 "http://abc.xyz.com/" "Mozilla/5.0 (Windows NT 6.1; > WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.87 > Safari/537.36" That looks like the browser asked for /EasyPAY/view/System.js?11, and nginx said 404. 
If you check the upstream server logs for what request got to it, you may see evidence of the problematic config. nginx might be asking upstream for something like /EasyPAY/view/LoginMain.aspxEasyPAY/view/System.js?11, which probably causes it to send 404 to nginx.

Cheers,

f
-- 
Francis Daly        francis at daoine.org

From francis at daoine.org Sun Jan 29 21:05:24 2017
From: francis at daoine.org (Francis Daly)
Date: Sun, 29 Jan 2017 21:05:24 +0000
Subject: Mail Proxy with xclient - connection lost
In-Reply-To: <9a2ab82ee38be5a10196a54401e4a02b.NginxMailingListEnglish@forum.nginx.org>
References: <9a2ab82ee38be5a10196a54401e4a02b.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170129210524.GY2958@daoine.org>

On Thu, Jan 26, 2017 at 02:41:29AM -0500, coolmike wrote:

Hi there,

> Do I need a special configuration for my exim mailserver when I use xclient?

Probably, yes.

"xclient" changes the SMTP protocol spoken between the client (nginx) and the server (exim). Unless both ends speak the protocol, confusion will happen.

Searching the web for "exim xclient" indicates that local patches for it existed two years ago, and it is not immediately clear to me if they were accepted into mainline. You may have to consult your exim documentation to see if your version can handle it, and if so, how.

Possibly the simplest way to check is for you to "telnet" or "netcat" to your exim server:port, and type the EHLO/XCLIENT/HELO commands that nginx would send, and see what happens. (This will make most sense to you if you already know what the SMTP conversation is supposed to look like.)

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org Mon Jan 30 07:41:06 2017
From: nginx-forum at forum.nginx.org (plrunner)
Date: Mon, 30 Jan 2017 02:41:06 -0500
Subject: fail_timeout in upstream not respected?
Message-ID: <54b175f1295d739b47074d0eccdd5b72.NginxMailingListEnglish@forum.nginx.org>

Hi everybody,

I am running nginx v1.11 and I noticed something pretty weird in my error.log.

I have fail_timeout=1800s along with max_fails=1 in my upstream, and proxy_next_upstream is set to "error timeout", so I expect an upstream host to be taken off the list for 30 minutes just after the first failed connection.
Here is what I unexpectedly get in the error.log:

2017/01/23 09:49:48 [error] 30676#30676: *2202666 connect() failed (111: Connection refused) while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
2017/01/23 09:49:48 [warn] 30676#30676: *2202666 upstream server temporarily disabled while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
2017/01/23 09:57:53 [error] 30695#30695: *2205681 connect() failed (111: Connection refused) while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
2017/01/23 09:57:53 [warn] 30695#30695: *2205681 upstream server temporarily disabled while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"

The host is reused after just 8 minutes, instead of 30 minutes.

Is there anything wrong in my conf, or something I forgot to take into account?

Thanks for any help here.
Paolo

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272189,272189#msg-272189

From ru at nginx.com Mon Jan 30 08:32:52 2017
From: ru at nginx.com (Ruslan Ermilov)
Date: Mon, 30 Jan 2017 11:32:52 +0300
Subject: fail_timeout in upstream not respected?
In-Reply-To: <54b175f1295d739b47074d0eccdd5b72.NginxMailingListEnglish@forum.nginx.org>
References: <54b175f1295d739b47074d0eccdd5b72.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170130083252.GA32970@lo0.su>

On Mon, Jan 30, 2017 at 02:41:06AM -0500, plrunner wrote:
> Hi everybody,
>
> I am running nginx v1.11 and I noticed something pretty weird in my
> error.log.
>
> I have fail_timeout=1800s along with max_fails=1 in my upstream, and
> proxy_next_upstream is set to "error timeout", so I expect an upstream host
> to be taken off the list for 30 minutes just after the first failed
> connection.
>
> Here is what I unexpectedly get in the error.log:
>
> 2017/01/23 09:49:48 [error] 30676#30676: *2202666 connect() failed (111: Connection refused) while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
> 2017/01/23 09:49:48 [warn] 30676#30676: *2202666 upstream server temporarily disabled while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
> 2017/01/23 09:57:53 [error] 30695#30695: *2205681 connect() failed (111: Connection refused) while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
> 2017/01/23 09:57:53 [warn] 30695#30695: *2205681 upstream server temporarily disabled while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
>
> The host is reused after just 8 minutes, instead of 30 minutes.
>
> Is there anything wrong in my conf, or something I forgot to take into
> account?

Without the "zone" directive in the "upstream" block, each worker process has its own view on the state of upstream servers, including "max_fails" and "fail_timeout".

From nginx-forum at forum.nginx.org Mon Jan 30 08:41:24 2017
From: nginx-forum at forum.nginx.org (plrunner)
Date: Mon, 30 Jan 2017 03:41:24 -0500
Subject: fail_timeout in upstream not respected?
In-Reply-To: <20170130083252.GA32970@lo0.su>
References: <20170130083252.GA32970@lo0.su>
Message-ID: <3feb6c4fb7abf2d59cd98026c87d748e.NginxMailingListEnglish@forum.nginx.org>

Thank you very much for the quick reply.

OK, it's pretty clear now.

I've read that the "zone" directive has been available since nginx v1.9. For the sake of other readers as well: does this mean that what I am describing here cannot be solved in previous versions?

Paolo

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,272189,272191#msg-272191

From mdounin at mdounin.ru Mon Jan 30 12:45:59 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 30 Jan 2017 15:45:59 +0300
Subject: fail_timeout in upstream not respected?
In-Reply-To: <54b175f1295d739b47074d0eccdd5b72.NginxMailingListEnglish@forum.nginx.org>
References: <54b175f1295d739b47074d0eccdd5b72.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170130124559.GB46625@mdounin.ru>

Hello!

On Mon, Jan 30, 2017 at 02:41:06AM -0500, plrunner wrote:
> Hi everybody,
>
> I am running nginx v1.11 and I noticed something pretty weird in my
> error.log.
>
> I have fail_timeout=1800s along with max_fails=1 in my upstream, and
> proxy_next_upstream is set to "error timeout", so I expect an upstream host
> to be taken off the list for 30 minutes just after the first failed
> connection.
>
> Here is what I unexpectedly get in the error.log:
>
> 2017/01/23 09:49:48 [error] 30676#30676: *2202666 connect() failed (111: Connection refused) while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
> 2017/01/23 09:49:48 [warn] 30676#30676: *2202666 upstream server temporarily disabled while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
> 2017/01/23 09:57:53 [error] 30695#30695: *2205681 connect() failed (111: Connection refused) while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
> 2017/01/23 09:57:53 [warn] 30695#30695: *2205681 upstream server temporarily disabled while connecting to upstream, client: 93.XX.YYY.228, server: *.foobar.com, request: "GET /generic/api/v1/tag/1006 HTTP/2.0", upstream: "http://[beaf:beaf:1001:a001::003D:4]:8080/generic/api/v1/tag/1006", host: "cy1.foobar.com", referrer: "https://web.foobar.com/"
>
> The host is reused after just 8 minutes, instead of 30 minutes.
>
> Is there anything wrong in my conf, or something I forgot to take into
> account?

As can be seen from "30676#" and "30695#", these messages are from different worker processes. By default each worker process uses its own run-time state for the upstream servers.

If you want worker processes to use shared state, you can configure this using the "zone" directive in the "upstream" block, see details here:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#zone

-- 
Maxim Dounin
http://nginx.org/

From ru at nginx.com Mon Jan 30 12:55:45 2017
From: ru at nginx.com (Ruslan Ermilov)
Date: Mon, 30 Jan 2017 15:55:45 +0300
Subject: fail_timeout in upstream not respected?
In-Reply-To: <3feb6c4fb7abf2d59cd98026c87d748e.NginxMailingListEnglish@forum.nginx.org>
References: <20170130083252.GA32970@lo0.su> <3feb6c4fb7abf2d59cd98026c87d748e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20170130125545.GA74401@lo0.su>

On Mon, Jan 30, 2017 at 03:41:24AM -0500, plrunner wrote:
> Thank you very much for the quick reply.
>
> OK, it's pretty clear now.
>
> I've read that the "zone" directive has been available since nginx v1.9.
> For the sake of other readers as well: does this mean that what I am
> describing here cannot be solved in previous versions?

True.

From nmilas at noa.gr Tue Jan 31 06:43:23 2017
From: nmilas at noa.gr (Nikolaos Milas)
Date: Tue, 31 Jan 2017 08:43:23 +0200
Subject: php not working from aliased subdir
In-Reply-To: <20160119205814.GT19381@daoine.org>
References: <569E002A.5090909@noa.gr> <20160119205814.GT19381@daoine.org>
Message-ID:

On 19/1/2016 10:58, Francis Daly wrote:
> Good luck with it,

Thank you Francis and everyone for your feedback.

I have tried various things with the aliased directories, but I am still having the problem.
I hope someone can help me with my current config, which I attach, together with the error_log (debugging level) which contains the output of the request:

http://www-xxx.noa.gr/museum/library/

This request outputs a "File not found." message to the browser.

My logic is to use a scheme like (see full details in the attached config):

server {
    ...
    location ^~ / {
        allow all;
        location ~ \.php$ { ... }
    }
    location ^~ /administrator/ {
        allow 10.201.0.0/16;
        deny all;
        location ~ \.php$ { ... }
    }
    location ^~ /museum/library/opac/ {
        alias /var/webs/wwwopenbib/opac/;
        allow all;
        location ~ \.php$ { ... }
    }
    location ^~ /museum/library/ {
        alias /var/webs/wwwopenbib/;
        allow 10.201.0.0/16;
        deny all;
        location ~ \.php$ { ... }
    }
    location ^~ /museum/ {
        alias /var/webs/wwwmuseum/;
        allow all;
    }
}

But it doesn't work for the aliased dirs.

The request in question (http://www-xxx.noa.gr/museum/library/) should access the file:

http://www-xxx.noa.gr/museum/library/index.php (i.e. file: /var/webs/wwwopenbib/index.php)

whose content is a redirect that should lead the request to:

http://www-xxx.noa.gr/museum/library/home/index.php

But it doesn't happen that way.

Please advise on how to resolve the situation!

(Any additional advice on my current config will be greatly appreciated!)

Thanks a lot,
Nick

-------------- next part --------------
server {
    listen [::]:80;
    server_name www-xxx.noa.gr;

    access_log /var/webs/wwwnoa32/log/access_log main;
    error_log /var/webs/wwwnoa32/log/error_log debug;

    root /var/webs/wwwnoa32/www/;
    index index.php index.html index.htm index.cgi default.html default.htm default.php;

    location ^~ / {
        try_files $uri $uri/ /index.php?$args;
        allow all;

        location ~ \.php$ {
            # Setup var defaults
            set $no_cache "";
            # If non GET/HEAD, don't cache & mark user as uncacheable for 1 second via cookie
            if ($request_method !~ ^(GET|HEAD)$) {
                set $no_cache "1";
            }
            # Drop no cache cookie if need be
            # (for some reason, add_header fails if included in prior if-block)
            if ($no_cache = "1") {
                add_header Set-Cookie "_mcnc=1; Max-Age=2; Path=/";
                add_header X-Microcachable "0";
            }
            # Bypass cache if no-cache cookie is set
            if ($http_cookie ~* "_mcnc") {
                set $no_cache "1";
            }
            # Bypass cache if flag is set
            fastcgi_no_cache $no_cache;
            fastcgi_cache_bypass $no_cache;
            fastcgi_cache microcache;
            fastcgi_cache_key $scheme$host$request_uri$request_method;
            fastcgi_cache_valid 200 301 302 303 502 5s;
            fastcgi_cache_use_stale updating error timeout invalid_header http_500;
            fastcgi_pass_header Set-Cookie;
            fastcgi_pass_header Cookie;
            fastcgi_ignore_headers Cache-Control Expires Set-Cookie;
            try_files $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_intercept_errors on;
            fastcgi_buffer_size 384k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 384k;
            fastcgi_temp_file_write_size 384k;
            fastcgi_read_timeout 240;
            fastcgi_pass unix:/tmp/php-fpm.sock;
            fastcgi_index index.php;
            include /etc/nginx/fastcgi_params;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        }
    }

    location ~* /(images|cache|media|logs|tmp)/.*\.(php|php3|php4|php5|php6|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
    }

    location ~ /\.ht {
        deny all;
    }

    location ^~ /administrator/ {
        allow 10.201.0.0/16;
        deny all;

        location ~ \.php$ {
            fastcgi_cache off;
            try_files $uri =404;
            include /etc/nginx/fastcgi_params;
            fastcgi_param PATH_INFO $fastcgi_script_name;
            fastcgi_intercept_errors on;
            fastcgi_buffer_size 128k;
            fastcgi_buffers 256 16k;
            fastcgi_busy_buffers_size 256k;
            fastcgi_temp_file_write_size 256k;
            fastcgi_read_timeout 240;
unix:/tmp/php-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } location ^~ /errlog/ { alias /var/webs/wwwnoa32/log/; autoindex on; allow 10.201.0.0/16; deny all; } location ^~ /museum/library/opac/ { alias /var/webs/wwwopenbib/opac/; allow all; location ~ \.php$ { fastcgi_param SCRIPT_FILENAME /var/webs/wwwopenbib/opac/$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } location ^~ /museum/library/images/ { alias /var/webs/wwwopenbib/images/; allow all; } location ^~ /museum/library/shared/ { alias /var/webs/wwwopenbib/shared/; allow all; } location ^~ /museum/library/ { alias /var/webs/wwwopenbib/; allow 10.201.0.0/16; deny all; location ~ \.php$ { fastcgi_param SCRIPT_FILENAME /var/webs/wwwopenbib$fastcgi_script_name; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_pass unix:/tmp/php-fpm.sock; fastcgi_index index.php; include /etc/nginx/fastcgi_params; } } location ^~ /museum/ { alias /var/webs/wwwmuseum/; allow all; } } -------------- next part -------------- 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Host: www-xxx.noa.gr" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "User-Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:51.0) Gecko/20100101 Firefox/51.0" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Accept-Language: en-US,en;q=0.5" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Accept-Encoding: gzip, deflate" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Cookie: _ga=GA1.2.257792966.1478507802; 73da122b55189f0c507421af4a9ba97a=el-GR" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "DNT: 1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Connection: keep-alive" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Upgrade-Insecure-Requests: 1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header done 2017/01/31 08:17:24 [debug] 15750#15750: *149615 event timer del: 14: 1485843504368 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 rewrite phase: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "errlog/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "museum/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "library/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: ~ "\.php$" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 using configuration "/museum/library/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http cl:-1 max:1048576 2017/01/31 08:17:24 [debug] 15750#15750: *149615 rewrite phase: 3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post rewrite phase: 4 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 5 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 6 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 7 
2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 8 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access: A4CAFBC3 FFFFFFFF 0100007F 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access: A4CAFBC3 00FFFFFF 00CAFBC3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 9 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 10 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post access phase: 11 2017/01/31 08:17:24 [debug] 15750#15750: *149615 try files phase: 12 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 13 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 14 2017/01/31 08:17:24 [debug] 15750#15750: *149615 open index "/var/webs/wwwopenbib/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 internal redirect: "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 rewrite phase: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "errlog/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "museum/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "library/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "opac/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "images/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: ~ "\.php$" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 using configuration "\.php$" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http cl:-1 max:1048576 2017/01/31 08:17:24 [debug] 15750#15750: *149615 rewrite phase: 3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post rewrite phase: 4 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 5 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 6 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 7 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 8 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access: A4CAFBC3 FFFFFFFF 0100007F 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access: A4CAFBC3 00FFFFFF 00CAFBC3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 9 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 10 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post access phase: 11 2017/01/31 08:17:24 [debug] 15750#15750: *149615 try files phase: 12 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http init upstream, client timer: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 epoll add event: fd:14 op:3 ev:80002005 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SCRIPT_FILENAME" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "/var/webs/wwwopenbib" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SCRIPT_FILENAME: /var/webs/wwwopenbib/museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "QUERY_STRING" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "QUERY_STRING: " 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REQUEST_METHOD" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "GET" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REQUEST_METHOD: GET" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "CONTENT_TYPE" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "CONTENT_TYPE: " 2017/01/31 08:17:24 
[debug] 15750#15750: *149615 http script copy: "CONTENT_LENGTH" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "CONTENT_LENGTH: " 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "QUERY_STRING" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "QUERY_STRING: " 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REQUEST_METHOD" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "GET" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REQUEST_METHOD: GET" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "CONTENT_TYPE" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "CONTENT_TYPE: " 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "CONTENT_LENGTH" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "CONTENT_LENGTH: " 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SCRIPT_NAME" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SCRIPT_NAME: /museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REQUEST_URI" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/museum/library/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REQUEST_URI: /museum/library/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "DOCUMENT_URI" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "DOCUMENT_URI: /museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "DOCUMENT_ROOT" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/var/webs/wwwopenbib/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "DOCUMENT_ROOT: /var/webs/wwwopenbib/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SERVER_PROTOCOL" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "HTTP/1.1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SERVER_PROTOCOL: HTTP/1.1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REQUEST_SCHEME" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "http" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REQUEST_SCHEME: http" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "GATEWAY_INTERFACE" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "CGI/1.1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "GATEWAY_INTERFACE: CGI/1.1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SERVER_SOFTWARE" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "nginx/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "1.10.2" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SERVER_SOFTWARE: nginx/1.10.2" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REMOTE_ADDR" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "::ffff:195.251.202.164" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REMOTE_ADDR: ::ffff:195.251.202.164" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REMOTE_PORT" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script 
var: "56856" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REMOTE_PORT: 56856" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SERVER_ADDR" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "::ffff:83.212.5.29" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SERVER_ADDR: ::ffff:83.212.5.29" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SERVER_PORT" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "80" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SERVER_PORT: 80" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SERVER_NAME" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "www-xxx.noa.gr" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SERVER_NAME: www-xxx.noa.gr" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "REDIRECT_STATUS" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "200" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "REDIRECT_STATUS: 200" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_HOST: www-xxx.noa.gr" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_USER_AGENT: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:51.0) Gecko/20100101 Firefox/51.0" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_ACCEPT: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_ACCEPT_LANGUAGE: en-US,en;q=0.5" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_ACCEPT_ENCODING: gzip, deflate" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_COOKIE: _ga=GA1.2.257792966.1478507802; 73da122b55189f0c507421af4a9ba97a=el-GR" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_DNT: 1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_CONNECTION: keep-alive" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "HTTP_UPGRADE_INSECURE_REQUESTS: 1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http cleanup add: 00000000016C61D8 2017/01/31 08:17:24 [debug] 15750#15750: *149615 get rr peer, try: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 stream socket 34 2017/01/31 08:17:24 [debug] 15750#15750: *149615 epoll add connection: fd:34 ev:80002005 2017/01/31 08:17:24 [debug] 15750#15750: *149615 connect to unix:/tmp/php-fpm.sock, fd:34 #149616 2017/01/31 08:17:24 [debug] 15750#15750: *149615 connected 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream connect: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 posix_memalign: 000000000165E9E0:128 @16 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream send request 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream send request body 2017/01/31 08:17:24 [debug] 15750#15750: *149615 chain writer buf fl:0 s:1040 2017/01/31 08:17:24 [debug] 15750#15750: *149615 chain writer in: 00000000016C6210 2017/01/31 08:17:24 [debug] 15750#15750: *149615 writev: 1040 of 1040 2017/01/31 08:17:24 [debug] 15750#15750: *149615 chain writer out: 0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 event timer add: 34: 60000:1485843504375 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http finalize request: -4, "/museum/library/index.php?" 
a:1, c:3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http request count:3 blk:0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http finalize request: -4, "/museum/library/index.php?" a:1, c:2 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http request count:2 blk:0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACC991F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACD5A970 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACC99970 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACC991F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http run request: "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream check client, write event:1, "/museum/library/index.php" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream recv(): -1 (11: Resource temporarily unavailable) 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACD5A970 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream request: "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream process header 2017/01/31 08:17:24 [debug] 15750#15750: *149615 malloc: 0000000001683B10:4096 2017/01/31 08:17:24 [debug] 15750#15750: *149615 recv: fd:34 168 of 4096 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 07 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 17 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record length: 23 2017/01/31 08:17:24 [error] 15750#15750: *149615 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ::ffff:195.251.202.164, server: www-xxx.noa.gr, request: "GET /museum/library/ HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "www-xxx.noa.gr" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 06 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 6B 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 05 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record length: 107 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi parser: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header: "Status: 404 Not Found" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi parser: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header: "X-Powered-By: PHP/5.6.22" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi parser: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header: "Content-type: 
text/html; charset=UTF-8" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi parser: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header done 2017/01/31 08:17:24 [debug] 15750#15750: *149615 posix_memalign: 0000000001684B20:4096 @16 2017/01/31 08:17:24 [debug] 15750#15750: *149615 HTTP/1.1 404 Not Found Server: nginx Date: Tue, 31 Jan 2017 06:17:24 GMT Content-Type: text/html; charset=UTF-8 Transfer-Encoding: chunked Connection: keep-alive Vary: Accept-Encoding X-Powered-By: PHP/5.6.22 Content-Encoding: gzip 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:1 f:0 0000000001684C48, pos 0000000001684C48, size: 243 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter: l:0 f:0 s:243 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http cacheable: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream process upstream 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe read upstream: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe preread: 37 2017/01/31 08:17:24 [debug] 15750#15750: *149615 readv: 1, last:3928 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe recv chain: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe buf free s:0 t:1 f:0 0000000001683B10, pos 0000000001683B93, size: 37 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe length: -1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 input buf #0 0000000001683B93 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 03 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 01 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 08 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record byte: 00 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi record length: 8 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi sent end request 2017/01/31 08:17:24 [debug] 15750#15750: *149615 input buf 0000000001683B93 16 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe write downstream: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe write downstream flush in 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http output filter "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http copy filter: "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http postpone filter "/museum/library/index.php?" 
00000000016C64A0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http gzip filter 2017/01/31 08:17:24 [debug] 15750#15750: *149615 malloc: 00000000017609E0:270336 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip alloc: n:1 s:5928 a:8192 p:00000000017609E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip alloc: n:32768 s:2 a:65536 p:00000000017629E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip alloc: n:32768 s:2 a:65536 p:00000000017729E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip alloc: n:32768 s:2 a:65536 p:00000000017829E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip alloc: n:16384 s:4 a:65536 p:00000000017929E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in: 00000000016C64C0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in_buf:0000000001684E58 ni:0000000001683B93 ai:16 2017/01/31 08:17:24 [debug] 15750#15750: *149615 malloc: 0000000001713740:8192 2017/01/31 08:17:24 [debug] 15750#15750: *149615 deflate in: ni:0000000001683B93 no:0000000001713740 ai:16 ao:8192 fl:0 redo:0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 deflate out: ni:0000000001683BA3 no:0000000001713740 ai:0 ao:8192 rc:0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in_buf:0000000001684E58 pos:0000000001683B93 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in: 0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http copy filter: 0 "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 pipe write downstream done 2017/01/31 08:17:24 [debug] 15750#15750: *149615 event timer: 34, old: 1485843504375, new: 1485843504377 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream exit: 0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 finalize http upstream request: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 finalize http fastcgi request 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free rr peer 1 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 close http upstream connection: 34 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 000000000165E9E0, unused: 48 2017/01/31 08:17:24 [debug] 15750#15750: *149615 event timer del: 34: 1485843504375 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACC99970 2017/01/31 08:17:24 [debug] 15750#15750: *149615 reusable connection: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http upstream temp fd: -1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http output filter "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http copy filter: "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http postpone filter "/museum/library/index.php?" 
00007FFC5631E660 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http gzip filter 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in: 00000000016C64E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in_buf:0000000001684F78 ni:0000000000000000 ai:0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 deflate in: ni:0000000000000000 no:0000000001713740 ai:0 ao:8192 fl:4 redo:0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 deflate out: ni:0000000000000000 no:0000000001713752 ai:0 ao:8174 rc:1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 gzip in_buf:0000000001684F78 pos:0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000017609E0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http chunk: 10 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http chunk: 26 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write old buf t:1 f:0 0000000001684C48, pos 0000000001684C48, size: 243 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:1 f:0 00000000016850B8, pos 00000000016850B8, size: 4 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:0 f:0 0000000000000000, pos 0000000000704868, size: 10 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:1 f:0 0000000001713740, pos 0000000001713740, size: 26 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:0 f:0 0000000000000000, pos 00000000004D5259, size: 7 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter: l:1 f:1 s:290 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter limit 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 writev: 290 of 290 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter 0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http copy filter: 0 "/museum/library/index.php?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http finalize request: 0, "/museum/library/index.php?" 
a:1, c:1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 set http keepalive handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http close request 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http log handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 0000000001713740 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 0000000001683B10 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C44E0, unused: 3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C54F0, unused: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 0000000001684B20, unused: 2262 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C6B20 2017/01/31 08:17:24 [debug] 15750#15750: *149615 hc free: 0000000000000000 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 hc busy: 0000000000000000 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 tcp_nodelay 2017/01/31 08:17:24 [debug] 15750#15750: *149615 reusable connection: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 event timer add: 14: 20000:1485843464377 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACD5A1F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACD5A1F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http keepalive handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 malloc: 00000000016C6B20:1024 2017/01/31 08:17:24 [debug] 15750#15750: *149615 recv: fd:14 -1 of 1024 2017/01/31 08:17:24 [debug] 15750#15750: *149615 recv() not ready (11: Resource temporarily unavailable) 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C6B20 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACD5A1F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACC991F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACD5A1F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http keepalive handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 malloc: 00000000016C6B20:1024 2017/01/31 08:17:24 [debug] 15750#15750: *149615 recv: fd:14 332 of 1024 2017/01/31 08:17:24 [debug] 15750#15750: *149615 reusable connection: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 posix_memalign: 00000000016C44E0:4096 @16 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Host: www-xxx.noa.gr" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "User-Agent: Mozilla/5.0 (Windows NT 6.3; Win64; x64; rv:51.0) Gecko/20100101 Firefox/51.0" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Accept: */*" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Accept-Language: en-US,en;q=0.5" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Accept-Encoding: gzip, deflate" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Cookie: _ga=GA1.2.257792966.1478507802; 73da122b55189f0c507421af4a9ba97a=el-GR" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "DNT: 1" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header: "Connection: keep-alive" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http header done 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 rewrite phase: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "errlog/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: "museum/" 
2017/01/31 08:17:24 [debug] 15750#15750: *149615 test location: ~ "\.php$" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 using configuration "/" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http cl:-1 max:1048576 2017/01/31 08:17:24 [debug] 15750#15750: *149615 rewrite phase: 3 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post rewrite phase: 4 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 5 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 6 2017/01/31 08:17:24 [debug] 15750#15750: *149615 generic phase: 7 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 8 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access: A4CAFBC3 00000000 00000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 9 2017/01/31 08:17:24 [debug] 15750#15750: *149615 access phase: 10 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post access phase: 11 2017/01/31 08:17:24 [debug] 15750#15750: *149615 try files phase: 12 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/favicon.ico" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 trying to use file: "/favicon.ico" "/var/webs/wwwnoa32/www/favicon.ico" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 try file uri: "/favicon.ico" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 13 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 14 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 15 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 16 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 17 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http filename: "/var/webs/wwwnoa32/www/favicon.ico.gz" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 add cleanup: 00000000016C52F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 content phase: 18 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http filename: "/var/webs/wwwnoa32/www/favicon.ico" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 add cleanup: 00000000016C5348 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http static fd: 34 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http set discard body 2017/01/31 08:17:24 [debug] 15750#15750: *149615 HTTP/1.1 200 OK Server: nginx Date: Tue, 31 Jan 2017 06:17:24 GMT Content-Type: image/x-icon Content-Length: 2238 Last-Modified: Tue, 24 Sep 2013 12:24:17 GMT Connection: keep-alive ETag: "52418471-8be" Accept-Ranges: bytes 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:1 f:0 00000000016C5930, pos 00000000016C5930, size: 235 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter: l:0 f:0 s:235 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http output filter "/favicon.ico?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http copy filter: "/favicon.ico?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http postpone filter "/favicon.ico?" 
00007FFC5631E670 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write old buf t:1 f:0 00000000016C5930, pos 00000000016C5930, size: 235 file: 0, size: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 write new buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 0, size: 2238 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter: l:1 f:0 s:2473 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter limit 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 no tcp_nodelay 2017/01/31 08:17:24 [debug] 15750#15750: *149615 tcp_nopush 2017/01/31 08:17:24 [debug] 15750#15750: *149615 writev: 235 of 235 2017/01/31 08:17:24 [debug] 15750#15750: *149615 sendfile: @0 2238 2017/01/31 08:17:24 [debug] 15750#15750: *149615 sendfile: 2238 of 2238 @0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http write filter 0000000000000000 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http copy filter: 0 "/favicon.ico?" 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http finalize request: 0, "/favicon.ico?" a:1, c:1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 set http keepalive handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http close request 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http log handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 run cleanup: 00000000016C5348 2017/01/31 08:17:24 [debug] 15750#15750: *149615 file cleanup: fd:34 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C44E0, unused: 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C54F0, unused: 2378 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C6B20 2017/01/31 08:17:24 [debug] 15750#15750: *149615 hc free: 0000000000000000 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 hc busy: 0000000000000000 0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 reusable connection: 1 2017/01/31 08:17:24 [debug] 15750#15750: *149615 event timer add: 14: 20000:1485843464493 2017/01/31 08:17:24 [debug] 15750#15750: *149615 post event 00007F1AACD5A1F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACC991F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http empty handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 delete posted event 00007F1AACD5A1F0 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http keepalive handler 2017/01/31 08:17:24 [debug] 15750#15750: *149615 malloc: 00000000016C6B20:1024 2017/01/31 08:17:24 [debug] 15750#15750: *149615 recv: fd:14 -1 of 1024 2017/01/31 08:17:24 [debug] 15750#15750: *149615 recv() not ready (11: Resource temporarily unavailable) 2017/01/31 08:17:24 [debug] 15750#15750: *149615 free: 00000000016C6B20 2017/01/31 08:17:44 [debug] 15750#15750: *149615 event timer del: 14: 1485843464493 2017/01/31 08:17:44 [debug] 15750#15750: *149615 http keepalive handler 2017/01/31 08:17:44 [debug] 15750#15750: *149615 close http connection: 14 2017/01/31 08:17:44 [debug] 15750#15750: *149615 reusable connection: 0 2017/01/31 08:17:44 [debug] 15750#15750: *149615 free: 0000000000000000 2017/01/31 08:17:44 [debug] 15750#15750: *149615 free: 00000000016A9B70, unused: 52 From mdounin at mdounin.ru Tue Jan 31 15:12:44 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 31 Jan 2017 18:12:44 +0300 Subject: nginx-1.10.3 Message-ID: <20170131151244.GF46625@mdounin.ru> Changes with nginx 1.10.3 31 Jan 2017 *) Bugfix: in the "add_after_body" directive when used with the "sub_filter" directive. 
    *) Bugfix: unix domain listen sockets might not be inherited during
       binary upgrade on Linux.

    *) Bugfix: graceful shutdown of old worker processes might require
       infinite time when using HTTP/2.

    *) Bugfix: when using HTTP/2 and the "limit_req" or "auth_request"
       directives, client request body might be corrupted; the bug had
       appeared in 1.10.2.

    *) Bugfix: a segmentation fault might occur in a worker process when
       using HTTP/2; the bug had appeared in 1.10.2.

    *) Bugfix: an incorrect response might be returned when using the
       "sendfile" directive on FreeBSD and macOS; the bug had appeared
       in 1.7.8.

    *) Bugfix: a truncated response might be stored in cache when using
       the "aio_write" directive.

    *) Bugfix: a socket leak might occur when using the "aio_write"
       directive.

-- 
Maxim Dounin
http://nginx.org/

From r1ch+nginx at teamliquid.net  Tue Jan 31 15:19:59 2017
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Tue, 31 Jan 2017 16:19:59 +0100
Subject: HTTP downloads randomly get stuck until client timeout
Message-ID: 

Hi all,

I'm experiencing odd behavior with some larger HTTP file downloads from my
site. The files will download for a seemingly random number of bytes, then
the connection freezes until "send_timeout" expires, at which point the
error log shows "client timed out (110: Connection timed out) while sending
response to client".

A tcpdump shows both ends successfully passing packets with no packet loss.
nginx is pretty stock from the nginx.org repository, no 3rd party modules
or complex options, mostly static files and fastcgi / PHP. The behavior is
very intermittent, but happens regardless of client browser / IP /
requested file.

I was able to capture a debug log when this happened; it is available at
https://hastebin.com/tevusuhobe.m (some rewrite details and variables have
been omitted)

# nginx -V
nginx version: nginx/1.10.2
built by gcc 4.9.2 (Debian 4.9.2-10)
built with OpenSSL 1.0.1t  3 May 2016
TLS SNI support enabled
configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx
--modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid
--lock-path=/var/run/nginx.lock
--http-client-body-temp-path=/var/cache/nginx/client_temp
--http-proxy-temp-path=/var/cache/nginx/proxy_temp
--http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp
--http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp
--http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx
--with-file-aio --with-threads --with-ipv6 --with-http_addition_module
--with-http_auth_request_module --with-http_dav_module
--with-http_flv_module --with-http_gunzip_module
--with-http_gzip_static_module --with-http_mp4_module
--with-http_random_index_module --with-http_realip_module
--with-http_secure_link_module --with-http_slice_module
--with-http_ssl_module --with-http_stub_status_module
--with-http_sub_module --with-http_v2_module --with-mail
--with-mail_ssl_module --with-stream --with-stream_ssl_module
--with-cc-opt='-g -O2 -fstack-protector-strong -Wformat
-Werror=format-security -Wp,-D_FORTIFY_SOURCE=2'
--with-ld-opt='-Wl,-z,relro -Wl,-z,now -Wl,--as-needed'

sendfile, tcp_nopush, tcp_nodelay are enabled.
accept_mutex, aio is disabled.

Linux karak 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)
x86_64 GNU/Linux

I've also tried mainline nginx/1.11.9 from the nginx.org repository and the
problem persisted.
Any advice on what I should be looking at to resolve this would be very
welcome!

Regards,
Richard.

From mdounin at mdounin.ru  Tue Jan 31 15:32:18 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 31 Jan 2017 18:32:18 +0300
Subject: HTTP downloads randomly get stuck until client timeout
In-Reply-To: 
References: 
Message-ID: <20170131153218.GJ46625@mdounin.ru>

Hello!

On Tue, Jan 31, 2017 at 04:19:59PM +0100, Richard Stanway wrote:

> Hi all,
> I'm experiencing odd behavior with some larger HTTP file downloads from my
> site. The files will download for a seemingly random number of bytes, then
> the connection freezes until "send_timeout" expires, at which point the
> error log shows "client timed out (110: Connection timed out) while sending
> response to client".
> 
> A tcpdump shows both ends successfully passing packets with no packet loss.
> nginx is pretty stock from the nginx.org repository, no 3rd party modules
> or complex options, mostly static files and fastcgi / PHP. The behavior is
> very intermittent, but happens regardless of client browser / IP /
> requested file.
> 
> I was able to capture a debug log when this happened; it is available at
> https://hastebin.com/tevusuhobe.m (some rewrite details and variables have
> been omitted)

[...]

> sendfile, tcp_nopush, tcp_nodelay are enabled.
> accept_mutex, aio is disabled.
> 
> Linux karak 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)
> x86_64 GNU/Linux
> 
> I've also tried mainline nginx/1.11.9 from the nginx.org repository and the
> problem persisted. Any advice on what I should be looking at to resolve
> this would be very welcome!

This sounds similar to https://trac.nginx.org/nginx/ticket/1174
(the kernel looks old enough, but there may be backports).
Are you using timer_resolution?

-- 
Maxim Dounin
http://nginx.org/
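For reference, the directive Maxim asks about sits in the main (top-level)
context of nginx.conf; the 100ms interval below is an example value, not
one taken from the thread:

# main context of nginx.conf, outside the http {} block
timer_resolution 100ms;

With this set, nginx updates its cached timestamp only once per interval
instead of after every kernel event; removing the line, as Richard does
below, restores the default behavior.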
From r1ch+nginx at teamliquid.net  Tue Jan 31 15:42:13 2017
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Tue, 31 Jan 2017 16:42:13 +0100
Subject: HTTP downloads randomly get stuck until client timeout
In-Reply-To: <20170131153218.GJ46625@mdounin.ru>
References: <20170131153218.GJ46625@mdounin.ru>
Message-ID: 

That ticket does look like the same issue I'm having, as I was indeed using
timer_resolution. The problem seems to have coincided with a recent Debian
kernel update, so it's possible they backported those changes. I have
removed timer_resolution from my config and so far have been unable to
reproduce the issue.

Thanks for the quick reply!

Regards,
Richard.

On Tue, Jan 31, 2017 at 4:32 PM, Maxim Dounin wrote:

> Hello!
>
> On Tue, Jan 31, 2017 at 04:19:59PM +0100, Richard Stanway wrote:
>
> > Hi all,
> > I'm experiencing odd behavior with some larger HTTP file downloads from
> > my site. The files will download for a seemingly random number of bytes,
> > then the connection freezes until "send_timeout" expires, at which point
> > the error log shows "client timed out (110: Connection timed out) while
> > sending response to client".
> >
> > A tcpdump shows both ends successfully passing packets with no packet
> > loss. nginx is pretty stock from the nginx.org repository, no 3rd party
> > modules or complex options, mostly static files and fastcgi / PHP. The
> > behavior is very intermittent, but happens regardless of client browser /
> > IP / requested file.
> >
> > I was able to capture a debug log when this happened; it is available at
> > https://hastebin.com/tevusuhobe.m (some rewrite details and variables
> > have been omitted)
>
> [...]
>
> > sendfile, tcp_nopush, tcp_nodelay are enabled.
> > accept_mutex, aio is disabled.
> >
> > Linux karak 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19)
> > x86_64 GNU/Linux
> >
> > I've also tried mainline nginx/1.11.9 from the nginx.org repository and
> > the problem persisted. Any advice on what I should be looking at to
> > resolve this would be very welcome!
>
> This sounds similar to https://trac.nginx.org/nginx/ticket/1174
> (the kernel looks old enough, but there may be backports).
> Are you using timer_resolution?
>
> --
> Maxim Dounin
> http://nginx.org/

From francis at daoine.org  Tue Jan 31 20:13:35 2017
From: francis at daoine.org (Francis Daly)
Date: Tue, 31 Jan 2017 20:13:35 +0000
Subject: php not working from aliased subdir
In-Reply-To: 
References: <569E002A.5090909@noa.gr> <20160119205814.GT19381@daoine.org>
Message-ID: <20170131201335.GZ2958@daoine.org>

On Tue, Jan 31, 2017 at 08:43:23AM +0200, Nikolaos Milas wrote:
> On 19/1/2016 10:58, Francis Daly wrote:

Hi there,

> I have tried various things with the aliased directories, but I am
> still having the problem.

> My logic is to use a scheme like the following (see full details in the
> attached config):

The logic skeleton looks right to me, for what that's worth.

> But it doesn't work for the aliased dirs.
>
> The request in question (http://www-xxx.noa.gr/museum/library/) should
> access the file:
>
> http://www-xxx.noa.gr/museum/library/index.php
>
> (i.e. file: /var/webs/wwwopenbib/index.php), whose content is:

> location ^~ /museum/library/ {
>    alias /var/webs/wwwopenbib/;
>    allow 10.201.0.0/16;
>    deny all;
>
>    location ~ \.php$ {
>
>       fastcgi_param SCRIPT_FILENAME /var/webs/wwwopenbib$fastcgi_script_name;

I suspect that changing that line will make everything work.

$fastcgi_script_name is something like /museum/library/index.php, so
SCRIPT_FILENAME ends up with a value that is not exactly the name of the
file that you want.

Replace the line with

  fastcgi_param SCRIPT_FILENAME $request_filename;

and reload.

The debug log does show what is going on, if you are able to ignore the
extra lines.

First, for the request /museum/library/ :

> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 using configuration "/museum/library/"

it chooses the location you want...

> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 internal redirect: "/museum/library/index.php?"

and issues the (internal) subrequest. This request also chooses the
location you want:

> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 using configuration "\.php$"

(although that is not immediately obvious, unless you know what to
expect).
And then it prepares the parameters to send to the fastcgi server:

> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "SCRIPT_FILENAME"
> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script copy: "/var/webs/wwwopenbib"
> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http script var: "/museum/library/index.php"
> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 fastcgi param: "SCRIPT_FILENAME: /var/webs/wwwopenbib/museum/library/index.php"

That is the filename that your fastcgi server is going to try (and fail)
to open.

> 2017/01/31 08:17:24 [error] 15750#15750: *149615 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ::ffff:195.251.202.164, server: www-xxx.noa.gr, request: "GET /museum/library/ HTTP/1.1", upstream: "fastcgi://unix:/tmp/php-fpm.sock:", host: "www-xxx.noa.gr"

And there it is reporting failure.

> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header: "Status: 404 Not Found"
> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header: "X-Powered-By: PHP/5.6.22"
> 2017/01/31 08:17:24 [debug] 15750#15750: *149615 http fastcgi header: "Content-type: text/html; charset=UTF-8"

And that is what it sent back to nginx as the response.

Compare http://nginx.org/r/$fastcgi_script_name with
http://nginx.org/r/$request_filename to see why you probably want the
latter in each case where you use "alias".

Cheers,

f
-- 
Francis Daly        francis at daoine.org
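Putting Francis's suggestion back into the attached config, the relevant
block would look something like the sketch below (untested; every directive
other than the SCRIPT_FILENAME line is copied from the config above):

location ^~ /museum/library/ {
    alias /var/webs/wwwopenbib/;
    allow 10.201.0.0/16;
    deny all;

    location ~ \.php$ {
        # With "alias", $request_filename already maps the request URI
        # onto the aliased path: /museum/library/index.php becomes
        # /var/webs/wwwopenbib/index.php, which is the file PHP-FPM
        # should actually open.
        fastcgi_param SCRIPT_FILENAME $request_filename;
        fastcgi_param QUERY_STRING $query_string;
        fastcgi_param REQUEST_METHOD $request_method;
        fastcgi_param CONTENT_TYPE $content_type;
        fastcgi_param CONTENT_LENGTH $content_length;
        fastcgi_pass unix:/tmp/php-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }
}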