From nginx-forum at forum.nginx.org Mon May 1 02:12:56 2017 From: nginx-forum at forum.nginx.org (t.nishiyori) Date: Sun, 30 Apr 2017 22:12:56 -0400 Subject: How can I set a maximum limit for gzip module? In-Reply-To: References: Message-ID: <9d50f84449a199ec08b829cd8c3170a7.NginxMailingListEnglish@forum.nginx.org> Hi tokers, I understand that if the gzip module's buffers reach their limit, the module waits until some buffers are free. My remaining concern is that resources (such as CPU and memory) could be exhausted when the upstream content is large and traffic is heavy. I cannot limit the size of the upstream content, because my nginx proxy handles a lot of traffic from a third party. My nginx server ran fine for almost two years with the gzip module turned off, so I think there is no problem, but I am a little worried. If I do run into this problem, I may ask again why the gzip module has no max_length directive. Thank you so much. tokers Wrote: ------------------------------------------------------- > Hi t.nishiyori, > > Don't worry. Just like Ermilov said, when you enable the gzip module and > nginx gets a part of the response body, the gzip module will try to compress the > data, and if the buffers reach the limit, the gzip module just sends the already > compressed data first (ngx_http_gzip_body_filter); when processed by the > chunked filter module, it becomes a "chunk". After some buffers are free, gzip will > continue the work. > > > On 28 April 2017 at 16:56:06, t.nishiyori > (nginx-forum at forum.nginx.org) > wrote: > > Hi tokers, > > > nginx compresses the body by "chunk" > Thanks for the explanation of the gzip module. > So there were no errors when the upstream response size exceeded the > gzip_buffers size. > But I think that some day a lot of large responses will exceed the prepared > buffers. > Is there any possibility of such a problem? > > Thank you. 
> > > tokers Wrote: > ------------------------------------------------------- > > There is no directive like "gzip_max_length" so far. > > By the way, nginx compresses the body by "chunk", so each "chunk" is an > > independent piece of compressed data. > > > > > > On 27 April 2017 at 13:27:42, t.nishiyori > > (nginx-forum at forum.nginx.org) > > wrote: > > > > Hello, > > > > I'm using nginx-1.11.2 for a proxy server with the gzip module. > > > > I would like something like a "gzip_max_length" directive in > > ngx_http_gzip_module, > > because some upstream response sizes exceeded the gzip_buffers settings. > > (But there were no errors... which is strange to me...) > > > > I can change gzip_buffers to a size large enough for the upstream, but there > > is no > > limit. > > > > Can I set a limit on the maximum content size for the gzip module? > > > > Thank you. > > > > Posted at Nginx Forum: > > https://forum.nginx.org/read.php?2,273899,273899#msg-273899 > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,273899,273920#msg-273920 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273899,274000#msg-274000 From francis at daoine.org Mon May 1 11:18:49 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 1 May 2017 12:18:49 +0100 Subject: Serve index.html file if exists try_files + proxy_pass? 
In-Reply-To: <8B8660DA-E717-46D8-B7EC-016800715842@lucasrolff.com> References: <8B8660DA-E717-46D8-B7EC-016800715842@lucasrolff.com> Message-ID: <20170501111849.GS10157@daoine.org> On Sun, Apr 30, 2017 at 10:44:21AM +0000, Lucas Rolff wrote: Hi there, > I have a small scenario where I have a backend (S3-compatible storage), which by default generates a directory listing overview of the files stored. > I want to be able to serve an "index.html" file if the file exists, else just proxy_pass as normal. I think that it will be very useful to be utterly clear on the distinction between a file and a url here. If you can describe what you want to happen in exact terms, there is a much better chance that the configuration you want will be clear. A file is a thing available on the local nginx filesystem. Its full name will be something like /usr/local/nginx/html/one/two. A url is a thing available by proxy_pass:ing to an http server. (That's good enough for these purposes.) Its full name will be something like http://upstream/one/two. (The http server on upstream may have a direct mapping between urls it receives and files it knows about; that's because those files are on upstream's local filesystem. Similarly, nginx receives requests which are urls, and it may map them to files or to other urls. This can get confusing. That's why it is useful to be explicit.) > https://gist.github.com/lucasRolff/c7ea13305e9bff40eb6729246cd7eb39 > > My nginx config for some reason doesn't work, or maybe it's because I misunderstand how try_files actually works. try_files checks for the existence of a file. In the common case, the full name of the file that it checks is the concatenation of $document_root with the argument to try_files. 
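To illustrate the concatenation described above, here is a minimal sketch (the root path and the upstream name "backend" are hypothetical, not taken from the thread):

```nginx
# With this root, a request for /one/two makes try_files check for
# the local file /usr/local/nginx/html/one/two ($document_root$uri);
# if it does not exist, processing falls back to the named location.
root /usr/local/nginx/html;

location / {
    try_files $uri @backend;
}

location @backend {
    # "backend" is a placeholder upstream; only reached when the
    # file check above fails.
    proxy_pass http://backend;
}
```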
> So I have URLs such as: > > minio.box.com/bucket1/ > minio.box.com/bucket43253/ > > > When I request these URLs I want nginx to check if index.html exists in the directory (it's an actual file on the filesystem) - if it does, serve this one, else go to the @minio location. Can you be specific here, with a worked example? The request to nginx is for /one/two/. What do you want nginx to do? (If you mention the word "file", please use the full name of the file that you are interested in.) Then, a separate request to nginx is for /one/three. Same question. > For any other file within the directory, I will just go to the @minio location, so if I request unicorn.png it should go to the @minio location as well. > > Is there any decent (non-evil) way of doing this? > > I assume I have to define the root directive to make try_files work, but what would I actually have to define, to make nginx use try_files for index.html *within* the specific bucket? nginx does not know about buckets. It knows about incoming requests, and it knows about files and directories. I *suspect* that you can do what you want with one "location ~ /$" inside your "location /"; but I'm not fully clear on what you want. I also suspect that the correct solution is to just configure the upstream http server to serve the contents of index.html if it exists, when it gets a request ending in / -- presumably there's a reason why that isn't done instead. Good luck with it, f -- Francis Daly francis at daoine.org From lucas at lucasrolff.com Mon May 1 11:50:10 2017 From: lucas at lucasrolff.com (Lucas Rolff) Date: Mon, 1 May 2017 13:50:10 +0200 Subject: Serve index.html file if exists try_files + proxy_pass? In-Reply-To: <20170501111849.GS10157@daoine.org> References: <8B8660DA-E717-46D8-B7EC-016800715842@lucasrolff.com> <20170501111849.GS10157@daoine.org> Message-ID: <590720F2.4070708@lucasrolff.com> Hi Francis, Thanks for your reply. A little about what I'm doing/trying to make work. 
I use Minio (https://www.minio.io/) - which is an S3-compatible object storage server - it's a simple Go binary that you pass a single argument: the directory in which you want to store your buckets and files. In my case, I've created a user called minio, with a homedir of /home/minio. I've configured nginx to run under the user "minio" as well, to ensure correct permissions. Minio by default listens on 0.0.0.0:9000, and to simplify working with SSL certificates, I ended up putting nginx on the same machine, and all it does is basically a proxy_pass to localhost:9000. When I access https://minio.box.com/<bucket>/ Minio will generate an XML document containing a list of objects within a specific bucket (as per S3 API standards). Example: https://gist.github.com/lucasRolff/7a0afb95103f6c93d8bc448f5c1c35f4 Since I do not want to expose this bucket object list, I want nginx to serve the file "index.html" if the bucket has it, instead of showing the bucket object list. Minio runs from /home/minio, with the directory /home/minio being the storage directory. This means that when I create a bucket called "storage", the directory /home/minio/storage will be created; within the storage directory, objects are stored as normal files, so if I decide to upload index.html, I will be able to find the exact file, with that name, at the path /home/minio/storage/index.html. Now on nginx, if I have the domain https://minio.box.com/storage/ - what I want to load is /home/minio/storage/index.html if the file exists, else load the bucket object list. If I access https://minio.box.com/images/ - it should look for the file /home/minio/images/index.html and serve it if it exists, else load the bucket object list (basically, just proxy_pass as normal). 
Any other request I do, such as https://minio.box.com/images/nginx-rocks.png, should go to my upstream server (localhost:9000) > I also suspect that the correct solution is to just configure the upstream http server to serve the contents of index.html if it exists If I could, I would have done that, but it returns a bucket object list as defined in the S3 API standard. nginx itself can have root /home/minio; defined - and the 'bucket' is just an actual folder on the file-system, with normal files. The only problem I have is serving index.html from within the current 'bucket', so /images/ would load /home/minio/images/index.html If I do try_files index.html @upstream; Then try_files will base it on the root directive defined; in this case it would try to look for /home/minio/index.html if I set the root directive to "/home/minio", correct? I guess I could use try_files "${uri}index.html" @upstream; which would produce something like /home/minio/storage/index.html if you have /storage/ as the URI, but if the URI is /storage/image1.png it would try to look for "/home/minio/storage/image1.pngindex.html", and to me that doesn't seem very efficient, since it would have to stat a file on the file system for every request before actually going to my upstream. I could maybe do: location / { location ~ /$ { try_files "${uri}index.html" @upstream; } # continue normal config here } location @upstream { proxy_pass http://127.0.0.1:9000; } I'm not sure if the above made it more clear. Best Regards, Lucas R Francis Daly wrote: > On Sun, Apr 30, 2017 at 10:44:21AM +0000, Lucas Rolff wrote: > > Hi there, >> I have a small scenario where I have a backend (S3-compatible storage), which by default generates a directory listing overview of the files stored. >> I want to be able to serve an "index.html" file if the file exists, else just proxy_pass as normal. > > I think that it will be very useful to be utterly clear on the distinction > between a file and a url here. 
If you can describe what you want to happen > in exact terms, there is a much better chance that the configuration > you want will be clear. > > A file is a thing available on the local nginx filesystem. Its full name > will be something like /usr/local/nginx/html/one/two. > > A url is a thing available by proxy_pass:ing to an http server. (That's > good enough for these purposes.) Its full name will be something like > http://upstream/one/two. > > (The http server on upstream may have a direct mapping between urls it > receives and files it knows about; that's because those files are on > upstream's local filesystem. Similarly, nginx receives requests which > are urls, and it may map them to files or to other urls. This can get > confusing. That's why it is useful to be explicit.) > >> https://gist.github.com/lucasRolff/c7ea13305e9bff40eb6729246cd7eb39 >> >> My nginx config for some reason doesn't work, or maybe it's because I misunderstand how try_files actually works. > > try_files checks for the existence of a file. In the common case, the full > name of the file that it checks is the concatenation of $document_root > with the argument to try_files. > >> So I have URLs such as: >> >> minio.box.com/bucket1/ >> minio.box.com/bucket43253/ >> >> >> When I request these URLs I want nginx to check if index.html exists in the directory (it's an actual file on the filesystem) - if it does, serve this one, else go to the @minio location. > > Can you be specific here, with a worked example? > > The request to nginx is for /one/two/. What do you want nginx to do? (If > you mention the word "file", please use the full name of the file that > you are interested in.) > > Then, a separate request to nginx is for /one/three. Same question. > >> For any other file within the directory, I will just go to the @minio location, so if I request unicorn.png it should go to the @minio location as well. >> >> Is there any decent (non-evil) way of doing this? 
>> >> I assume I have to define the root directive to make try_files work, but what would I actually have to define, to make nginx use try_files for index.html *within* the specific bucket? > > nginx does not know about buckets. It knows about incoming requests, > and it knows about files and directories. > > > I *suspect* that you can do what you want with one "location ~ /$" > inside your "location /"; but I'm not fully clear on what you want. > > I also suspect that the correct solution is to just configure the > upstream http server to serve the contents of index.html if it exists, > when it gets a request ending in / -- presumably there's a reason why > that isn't done instead. > > Good luck with it, > > f -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 1 12:08:11 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 1 May 2017 13:08:11 +0100 Subject: Serve index.html file if exists try_files + proxy_pass? In-Reply-To: <590720F2.4070708@lucasrolff.com> References: <8B8660DA-E717-46D8-B7EC-016800715842@lucasrolff.com> <20170501111849.GS10157@daoine.org> <590720F2.4070708@lucasrolff.com> Message-ID: <20170501120811.GT10157@daoine.org> On Mon, May 01, 2017 at 01:50:10PM +0200, Lucas Rolff wrote: Hi there, Thanks for the extra explanation. It is clear to me now. > When I access https://minio.box.com/<bucket>/ Minio will generate an > XML document containing a list of objects within a specific bucket (as per S3 > API standards). > > Example: https://gist.github.com/lucasRolff/7a0afb95103f6c93d8bc448f5c1c35f4 > > Since I do not want to expose this bucket object list, I want nginx > to serve the file "index.html" if the bucket has it, > instead of showing the bucket object list. Ok. For info: that *will* expose the bucket object list if there is no index.html. You may prefer a static fallback page, or an error indication instead. 
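One way to implement that suggestion, as a sketch (the fallback file /no-listing.html is hypothetical, not part of the thread's actual setup):

```nginx
location ~ /$ {
    # Serve the bucket's own index.html if it exists under the root;
    # otherwise fall back to a static placeholder page instead of
    # proxying to the upstream and exposing the bucket object list.
    try_files "${uri}index.html" /no-listing.html;
}
```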
> If I access https://minio.box.com/images/ - it should look for the > file /home/minio/images/index.html and serve it if it exists, else load > the bucket object list (basically, just proxy_pass as normal). > > Any other request I do, such as > https://minio.box.com/images/nginx-rocks.png, should go to my > upstream server (localhost:9000) > If I do try_files index.html @upstream; > > Then try_files will base it on the root directive defined; in this > case it would try to look for /home/minio/index.html if I set the root > directive to "/home/minio", correct? Correct. > I guess I could use try_files "${uri}index.html" @upstream; which > would produce something like /home/minio/storage/index.html if you > have /storage/ as the URI, but if the URI is /storage/image1.png it > would try to look for "/home/minio/storage/image1.pngindex.html", and > to me that doesn't seem very efficient, since it would have to stat > a file on the file system for every request before actually > going to my upstream. Also correct. > I could maybe do: > > location / { > location ~ /$ { > try_files "${uri}index.html" @upstream; > > } > > # continue normal config here > } > > location @upstream { > proxy_pass http://127.0.0.1:9000; > } That is what I would suggest. Where "# continue normal config here" is "proxy_pass http://127.0.0.1:9000;". And "root /home/minio;" is set somewhere so that it applies where try_files is. Good luck with it, f -- Francis Daly francis at daoine.org From chadhansen at google.com Mon May 1 20:05:50 2017 From: chadhansen at google.com (Chad Hansen) Date: Mon, 01 May 2017 20:05:50 +0000 Subject: unable to log upstream_bytes_sent In-Reply-To: References: Message-ID: Bumping; does anyone have experience using upstream_bytes_sent or upstream_bytes_received? On Tue, Apr 25, 2017 at 2:53 PM Chad Hansen wrote: > I'm running nginx 1.11.9, and I get an error any time I try to use > bytes_sent, upstream_bytes_sent, or upstream_bytes_received. 
I've tried > logging them directly in a log format, or using them in a map: > > > > map $request_method $chad_sent { > default $upstream_bytes_sent; > } > > map $request_method $chad_received { > default $upstream_bytes_received; > } > > This results in: > > me at here:~/Downloads$ sudo service nginx restart > * Restarting nginx nginx > > nginx: [emerg] unknown "upstream_bytes_sent" variable > nginx: configuration file /etc/nginx/nginx.conf test failed > > It's unclear to me if there's something wrong with my build, or if I have > to use a stream context, or some other issue. > -- Chad Hansen Senior Software Engineer google.com/jigsaw -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4851 bytes Desc: S/MIME Cryptographic Signature URL: From igal at lucee.org Mon May 1 20:50:20 2017 From: igal at lucee.org (Igal @ Lucee.org) Date: Mon, 1 May 2017 13:50:20 -0700 Subject: unable to log upstream_bytes_sent In-Reply-To: References: Message-ID: <8f8b9224-8db4-6af4-b5e6-e10537e6c52f@lucee.org> Chad, On 5/1/2017 1:05 PM, Chad Hansen via nginx wrote: > Bumping; does anyone have experience using upstream_bytes_sent or > upstream_bytes_received? > > > nginx: [emerg] unknown "upstream_bytes_sent" variable > nginx: configuration file /etc/nginx/nginx.conf test failed > > It's unclear to me if there's something wrong with my build, or if > I have to use a stream context, or some other issue. > I have never tried to use these variables before, but I plugged them into my log_format directive now to test, and I can confirm that I get the same error on the Windows standard build of nginx 1.11.12, so I don't think that it's your specific build: nginx: [emerg] unknown "upstream_bytes_sent" variable Igal Sapir Lucee Core Developer Lucee.org -------------- next part -------------- An HTML attachment was scrubbed... 
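For context: in the 1.11.x series these $upstream_bytes_* variables were provided by the stream modules rather than by http, which is why the http-context config above fails. A minimal stream-context sketch (the backend address and log path are hypothetical):

```nginx
# Must be at the top level of nginx.conf, alongside (not inside) http {}.
stream {
    log_format basic '$remote_addr '
                     'sent=$upstream_bytes_sent '
                     'received=$upstream_bytes_received';

    server {
        listen 12345;
        proxy_pass 127.0.0.1:9000;  # placeholder TCP backend
        access_log /var/log/nginx/stream-access.log basic;
    }
}
```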
URL: From nginx-forum at forum.nginx.org Mon May 1 21:17:08 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Mon, 01 May 2017 17:17:08 -0400 Subject: unable to log upstream_bytes_sent In-Reply-To: <8f8b9224-8db4-6af4-b5e6-e10537e6c52f@lucee.org> References: <8f8b9224-8db4-6af4-b5e6-e10537e6c52f@lucee.org> Message-ID: It's part of stream {} http://mailman.nginx.org/pipermail/nginx-devel/2016-September/008752.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273859,274018#msg-274018 From steven.hartland at multiplay.co.uk Mon May 1 22:40:07 2017 From: steven.hartland at multiplay.co.uk (Steven Hartland) Date: Mon, 1 May 2017 23:40:07 +0100 Subject: Trailing Slash Redirect Loop Help In-Reply-To: References: Message-ID: My guess would be that your app is redirecting back to the slash URLs. You could test this with a directory on the webserver that has a matching index file. Alternatively, point a browser at the upstream and check for redirects directly. On 28/04/2017 17:52, Alex Med wrote: > Steven - > > I implemented your suggestion and I still get the same problem with the > directories ... for anything else it works. But when I try to access a > directory, I can see on the browser address bar the / appearing and > disappearing, until finally the browser gives this error: > > "Too many redirects occurred trying to open https://xxxx.com/xxx". This > might occur if you open a page that is redirected to open another page which > then is redirected to open the original page. > > I checked, and no, my configuration does not redirect anything to a directory > in a way that would give that error. > > Thanks for your help! > > Alex > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273964,273971#msg-273971 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ib at baab.de Tue May 2 12:03:25 2017 From: ib at baab.de (Ingo Baab) Date: Tue, 2 May 2017 14:03:25 +0200 Subject: performance using variables? Message-ID: <2b9e6300-a7ad-0a57-8c1a-c69d7c51066a@baab.de> Hello List! I have a question regarding the performance of my nginx configuration files when using variables. Will there be any slow Lua-like runtime parsing if I set a variable $phpuser inside my server block and then use it in several common config files? Q: Is this a performance disadvantage? upstream php7_wpexpress_de { server unix:/var/run/php7.0-fpm-wpexpress_de.sock; } server { set $phpuser "wpexpress_de"; server_name wpexpress.de www.wpexpress.de; access_log /var/log/nginx/wpexpress.de.access.log rt_cache; error_log /var/log/nginx/wpexpress.de.error.log; root /var/www/wpexpress.de/htdocs; index index.php index.html index.htm; include common/redis-php7.conf; include common/wpcommon-php7.conf; include common/locations-php7.conf; } In the three included config files at the bottom I subsequently use fastcgi_pass php7_$phpuser; because they do not differ per virtual host except for the PHP upstream. Is this a good approach to separating PHP processors for each virtual host, or would it be better to make all the configuration static? Thank you in advance for any helpful information, Ingo Baab, https://baab.de ______ I did read: http://nginx.org/en/docs/faq/variables_in_config.html and also found guys suggesting a global var with nginx-config utilizing map: http://stackoverflow.com/questions/14433309/how-to-define-a-global-variable-in-nginx-conf-file -------------- next part -------------- An HTML attachment was scrubbed... URL: From a20132950 at pucp.pe Tue May 2 12:37:30 2017 From: a20132950 at pucp.pe (SARA QUISPE MEJIA) Date: Tue, 2 May 2017 14:37:30 +0200 Subject: Rename log file based on its content Message-ID: I want to know how I can rename a log file based on the date inside the log file. 
I know that a log file can contain different dates, but I want to keep just the first date. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nick at nginx.com Tue May 2 17:17:27 2017 From: nick at nginx.com (Nick Shadrin) Date: Tue, 2 May 2017 10:17:27 -0700 Subject: You can speak at nginx conference this September in Portland. Message-ID: NGINX conference will be held in Portland, Oregon (USA) this September 6th-8th. This is a highly technical event where you will join the most skilled professionals in today's web technology, including the founders and core engineers of NGINX. You are experienced in high performance, web tech, devops, and HTTP; now please share your thoughts among your peers at this technology event. Call for proposals is open now, and will be extended until May 25th. Here's what you need: Conference website: https://www.nginx.com/nginxconf Proposals submitted here: https://nginxconf17.busyconf.com/proposals/new Topics below are merely suggestions: Architecture & Development - Microservices-based applications - Migrating to NGINX from hardware or other software solutions - Auto-scaling systems and infrastructure - IoT and embedded systems High-Performance Web - Architecture of high performance web apps - Tuning of operating systems and network - Caching, sharding, replication - Storage and filesystems - Reducing app latency Operations & Deployment - Adopting continuous integration and deployment - Monitoring and observability of modern applications - Configuration management - Custom tooling/wiring examples built around NGINX to support CI/CD Case Studies - Insights and best practices from real-world deployments - Running hybrid cloud and on-premises systems - Organizational changes when adopting microservices - Adopting containerization -- Nick Shadrin / Product Manager / nick at nginx.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed May 3 20:29:09 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 3 May 2017 21:29:09 +0100 Subject: Rename log file based on its content In-Reply-To: References: Message-ID: <20170503202909.GU10157@daoine.org> On Tue, May 02, 2017 at 02:37:30PM +0200, SARA QUISPE MEJIA wrote: Hi there, > I want to know How can I do for to rename a log file in function to the > date of my log file ? > I known that in a log file I could find differents dates but I want to save > just the first date. It is probably easiest to do this outside of nginx. Rename ("mv") your log file to whatever name you want, then send a USR1 signal to the nginx process. http://nginx.org/en/docs/control.html f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 3 20:40:29 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 3 May 2017 21:40:29 +0100 Subject: performance using variables? In-Reply-To: <2b9e6300-a7ad-0a57-8c1a-c69d7c51066a@baab.de> References: <2b9e6300-a7ad-0a57-8c1a-c69d7c51066a@baab.de> Message-ID: <20170503204029.GV10157@daoine.org> On Tue, May 02, 2017 at 02:03:25PM +0200, Ingo Baab wrote: Hi there, > I got a question regarding performance of my nginx configuration > files using variables. The usual rule with performance questions is: if you do not measure a performance difference, then there is not an important performance difference in your use case. > Q: Is this a performance disadvantage? There will be a computer run-time performance disadvantage in using a variable like this. There may be an administrator config-write-time advantage in using a variable like this. > Is this a good approach to separate php processors for each virtual > host or should > I do all configuration better static? Static configuration will have better computer run-time performance. But unless you measure a difference, the difference is not important to you. If what you have works well-enough, you don't need to change anything. 
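To make the run-time vs. config-time tradeoff concrete, a sketch based on the configuration in the question (the upstream names follow that configuration; this is an illustration, not a measured comparison):

```nginx
# Variable form: "php7_$phpuser" is assembled per request, so nginx
# looks the upstream group up at request time.
set $phpuser "wpexpress_de";
fastcgi_pass php7_$phpuser;

# Static form: the upstream name is resolved once, when the
# configuration is parsed, so there is no per-request lookup.
fastcgi_pass php7_wpexpress_de;
```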
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu May 4 06:50:53 2017 From: nginx-forum at forum.nginx.org (Michael Corn) Date: Thu, 04 May 2017 02:50:53 -0400 Subject: Limiting total disk space used by cache + temp files Message-ID: <8223505a9f66c7297aa4867082556138.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to limit the overall disk usage for the cache + any temp files. I have specified: proxy_cache_path /tmp/cache max_size=50m use_temp_path=off; proxy_max_temp_file_size 1m; Yet, what I observe is that the temp file will grow to the full size of the file being retrieved from the upstream, ignoring the value of proxy_max_temp_file_size, and ignoring the max_size for the cache. Only after the file is brought over completely does the cache manager kick in and delete something older from the cache to bring it back in line with the 50 MB limit. For example, if I had file1 (20 MB) and file2 (20 MB) in the cache, and I receive a request for file3, which is 20 MB, the total size of the /tmp/cache directory will grow to 60 MB. Only after file3 is received in its entirety will file1 be deleted, bringing the size back down to 40 MB. I don't mind setting use_temp_path=on, but in that case as well the temp file grows beyond proxy_max_temp_file_size. Any advice? Michael Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274065,274065#msg-274065 From nginx-forum at forum.nginx.org Thu May 4 08:10:33 2017 From: nginx-forum at forum.nginx.org (beatnut) Date: Thu, 04 May 2017 04:10:33 -0400 Subject: listen fastopen Message-ID: Hello, Does anybody know what this warning, which I found in the docs (http://nginx.org/en/docs/http/ngx_http_core_module.html#listen), means in the context of fastopen? "Do not enable this feature unless the server can handle receiving the same SYN packet with data more than once." My kernel is 4.9.9 and cat /proc/sys/net/ipv4/tcp_fastopen is 3, so this feature is supported. Is this enough? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274066,274066#msg-274066 From mdounin at mdounin.ru Thu May 4 11:00:49 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 May 2017 14:00:49 +0300 Subject: Limiting total disk space used by cache + temp files In-Reply-To: <8223505a9f66c7297aa4867082556138.NginxMailingListEnglish@forum.nginx.org> References: <8223505a9f66c7297aa4867082556138.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170504110048.GR43932@mdounin.ru> Hello! On Thu, May 04, 2017 at 02:50:53AM -0400, Michael Corn wrote: > I'm trying to limit the overall disk usage for the cache + any temp files. > > I have specified: > proxy_cache_path /tmp/cache max_size=50m > use_temp_path=off; > proxy_max_temp_file_size 1m; > > Yet, what I observe is that the temp file will grow to the full size of the > file being retrieved from the upstream, ignoring the value of > proxy_max_temp_file_size, and ignoring the max_size for the cache. Only > after the file is brought over completely does the cache manager kick in and > delete something older from the cache to bring it back in line with the 50mb > limit. > > For example, if I had in the cache file1 (20 MB) and file2 (20 MB), and I > receive a request for file3 which is 20 MB, the total size of the /tmp/cache > directory will grow to 60 MB. Only after file3 is received in its entirety > will file1 be deleted bringing the size back down to 40 MB. > > I don't mind setting use_temp_path=on, but in that case as well the temp > file grows beyond the proxy_max_temp_file_size. > > Any advice? What you are trying to do is not going to work. First of all, proxy_max_temp_file_size is ignored when using cache. Quoting http://nginx.org/r/proxy_max_temp_file_size: : This restriction does not apply to responses that will be cached : or stored on disk. As for the cache max_size parameter, it is maintained by the cache manager process, and it does so after the files are already in the cache. 
Quoting http://nginx.org/r/proxy_cache_path: : The special "cache manager" process monitors the maximum cache : size set by the max_size parameter. When this size is exceeded, it : removes the least recently used data. Moreover, the cache manager may be busy with other tasks or just sleeping, so there is an inevitable difference between the configured max_size and the actual maximum size of the cache. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Thu May 4 11:09:00 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 4 May 2017 14:09:00 +0300 Subject: listen fastopen In-Reply-To: References: Message-ID: <20170504110900.GS43932@mdounin.ru> Hello! On Thu, May 04, 2017 at 04:10:33AM -0400, beatnut wrote: > Hello, > Does anybody know what this warning, which I found in the docs > (http://nginx.org/en/docs/http/ngx_http_core_module.html#listen), means in the > context of fastopen? > > "Do not enable this feature unless the server can handle receiving the same > SYN packet with data more than once." > > My kernel is 4.9.9 and cat /proc/sys/net/ipv4/tcp_fastopen is 3, so this > feature is supported. Is this enough? No, this note is not about what your kernel can handle, but rather about further processing by nginx and backend servers. The main problem with fastopen is that it allows unintentional request duplication: the same request may be received (and answered by nginx) more than once. This is usually OK for static files, but may not be OK for various dynamic scripts and so on. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu May 4 11:17:24 2017 From: nginx-forum at forum.nginx.org (beatnut) Date: Thu, 04 May 2017 07:17:24 -0400 Subject: listen fastopen In-Reply-To: <20170504110900.GS43932@mdounin.ru> References: <20170504110900.GS43932@mdounin.ru> Message-ID: Thank you, now I understand. 
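For reference, the feature discussed above is enabled per listening socket with the fastopen parameter. A sketch (the queue length 256 and the vhost details are arbitrary examples):

```nginx
server {
    # fastopen=N bounds the queue of TCP Fast Open connections that
    # have not yet completed the three-way handshake. Per the warning
    # above, enable it only where a replayed request is harmless.
    listen 80 fastopen=256;
    server_name example.com;
    root /var/www/html;  # static-only vhost: safe if a request is duplicated
}
```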
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274066,274070#msg-274070 From nginx-forum at forum.nginx.org Thu May 4 13:31:38 2017 From: nginx-forum at forum.nginx.org (nicktgr15) Date: Thu, 04 May 2017 09:31:38 -0400 Subject: Nginx access log / logrotate, tailing and td-agent Message-ID: <13a0405b0446325c708a1a720f42e915.NginxMailingListEnglish@forum.nginx.org> Hi, We are using td-agent to tail nginx logs and push them to s3 (every minute). We are rotating access logs every 10 minutes and we've noticed that when a log rotation happens log lines still go to the rotated log file for up to a minute after the rotation. Because of this we are losing some log lines when the rotation happens as td-agent follows the rotated log file for only 5 sec and then switches to the new one. We checked the contents of the rotated gzs and during that first minute after the rotation not all log lines go to the rotated log file.. some go to the newly created access log (could be different workers?). Is this the expected behaviour? In our case we'll probably configure td-agent to tail the rotated log file for 60 seconds before switching to the new one but we are wondering if nginx behaves as expected. The following postrotate script is used in logrotate config. postrotate [ ! -f /var/run/nginx.pid ] || /bin/kill -USR1 `cat /var/run/nginx.pid` endscript Regards, Nik Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274071,274071#msg-274071 From nginx-forum at forum.nginx.org Fri May 5 08:59:57 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Fri, 05 May 2017 04:59:57 -0400 Subject: Return Specific Error Page in NGinX when all the upstream servers are marked down Message-ID: <961cb559349dc5027ad6e6b0da9dfd19.NginxMailingListEnglish@forum.nginx.org> I have an upstream block as follows upstream sample{ server abc1.example.com down; server abd2.example.com down; } Currently I get a 502 error. 
In this special case where I receive a 502 and all upstream servers are down, I would like to receive a specific error page, such as temporarily unavailable. How can I achieve that? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274075,274075#msg-274075 From maxim at nginx.com Fri May 5 10:15:59 2017 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 5 May 2017 13:15:59 +0300 Subject: [ANN] OpenResty 1.11.2.3 released In-Reply-To: References: Message-ID: <273fcc74-5cb9-e7d8-c9eb-7a98d4dbc8db@nginx.com> Hi Yichun, On 22/04/2017 04:31, Yichun Zhang (agentzh) wrote: > Hi folks, > > Long time no releases. We've been very busy setting up the OpenResty > Inc. commercial company in the US. That's why we've been quiet in the > last few months. The good news is that we now have a strong full-time > engineering team that can work on both the OpenResty open source > platform and higher-level commercial products based on that. The > OpenResty web platform will always remain open source. There's no > doubt about that ;) > [...] This is great news; congratulations! We wish you and all your team success with your project. -- Maxim Konovalov From mantoak at gmail.com Fri May 5 11:20:17 2017 From: mantoak at gmail.com (Antonio González) Date: Fri, 5 May 2017 13:20:17 +0200 Subject: Nginx rewrite rules codeigniter Message-ID: I have an application in CodeIgniter. The application has common code written in CodeIgniter, distributed across subdirectories, each of which has its own configuration, but all pull from the same common code. The problem I have is that the application used to run in Apache and everything worked correctly using the .htaccess. Now I'm migrating the application to nginx, and everything works fine if we include the index.php (e.g. http://baybay.es/farmaciacm/index.php/dashboard), but if I remove it, it does not work. I have tried several configurations in nginx but none solves the problem.
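For CodeIgniter under nginx, the usual replacement for the Apache .htaccess rewrite is a try_files front controller per application directory. A sketch; the directory name, document root, and PHP-FPM socket are assumptions for illustration:

```nginx
# Sketch: route pretty URLs to the app's index.php front controller.
location /farmaciacm/ {
    # Try the literal file, then a directory, then fall back to
    # the application's front controller with the query string.
    try_files $uri $uri/ /farmaciacm/index.php?$args;
}

location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_pass unix:/run/php/php7.0-fpm.sock;  # assumed PHP-FPM socket
}
```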
I would need someone with nginx knowledge to see whether I have applied a proper configuration to be able to run the application without the index.php. My directory structure is: the project root is in /home_datos/fisiotes/domains/baybay.es/public_html/ and the subdirectories are: /quadromandos/ -> common code for all applications (CodeIgniter, but only for code, etc.) /farmaciacm/ -> an application (here there is an index.php) /farmaciacm1/ -> another application (here there is an index.php) In the root there is no code; everything is in subdirectories. Regards. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri May 5 12:27:34 2017 From: nginx-forum at forum.nginx.org (beatnut) Date: Fri, 05 May 2017 08:27:34 -0400 Subject: geoip variables evaluation vs map Message-ID: Hello! When using variables via the map directive, they are evaluated only when they are used. My question is whether variables from the geoip module, like $geoip_country_code, are evaluated only when they are used, like map, or every time? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274078,274078#msg-274078 From mdounin at mdounin.ru Sat May 6 01:34:10 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 6 May 2017 04:34:10 +0300 Subject: geoip variables evaluation vs map In-Reply-To: References: Message-ID: <20170506013409.GX43932@mdounin.ru> Hello! On Fri, May 05, 2017 at 08:27:34AM -0400, beatnut wrote: > Hello! > When using variables via map directive they are evaluated when they are > used. > My question is if variables from geoip module like $geoip_country_code are > evaluated only when they are used, like map or every time? Only when they are used.
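For illustration, both kinds of variables below are computed lazily, only at the point of reference; the database path and map values are assumptions:

```nginx
# Sketch: geoip_* variables, like map results, are evaluated lazily,
# only when a request actually references them.
geoip_country /usr/share/GeoIP/GeoIP.dat;  # assumed database path

map $geoip_country_code $near_office {
    default 0;
    US      1;
}

server {
    listen 80;
    # $geoip_country_code is looked up here, at the point of use,
    # not when the configuration is parsed.
    add_header X-Country $geoip_country_code;
}
```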
-- Maxim Dounin http://nginx.org/ From al-nginx at none.at Sat May 6 22:30:27 2017 From: al-nginx at none.at (Aleksandar Lazic) Date: Sun, 7 May 2017 00:30:27 +0200 Subject: Return Specific Error Page in NGinX when all the upstream servers are marked down In-Reply-To: <961cb559349dc5027ad6e6b0da9dfd19.NginxMailingListEnglish@forum.nginx.org> References: <961cb559349dc5027ad6e6b0da9dfd19.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170507003027.3080e077@homeal002> Hi shivramg94. shivramg94 wrote on Fri, 05 May 2017 04:59:57 -0400: > I have an upstream block as follows > > upstream sample{ > server abc1.example.com down; > server abd2.example.com down; > } > > Currently I get an error. In this special case where I receive a > 502 and all upstream servers are down, I would like to receive a > specific error page such as temporarily unavailable. > > How can I achieve that? http://nginx.org/en/docs/http/ngx_http_core_module.html#error_page error_page 502 "specific error page" Hth Aleks > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,274075,274075#msg-274075 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sun May 7 08:31:37 2017 From: nginx-forum at forum.nginx.org (Vishnu Priya Matha) Date: Sun, 07 May 2017 04:31:37 -0400 Subject: Issues with limit_req_zone. Message-ID: With limit_req_zone rate set to 100/s and burst=50, we have the observations below.

Scenario 1
==========
no. of requests made by jmeter = 170
# of requests expected to fail = 20
# of requests actually failed = 23
Question: why are 3 more requests failing, and is this much failure expected?

Scenario 2
==========
no. of requests made by jmeter = 160
# of requests expected to fail = 10
# of requests actually failed = 14
Question: why are 4 more requests failing, and is this much failure expected?

Scenario 3
==========
no. of requests made by jmeter = 145
# of requests expected to fail = 0
# of requests actually failed = 4
Question: why are 4 more requests failing when all were expected to pass, and is this much failure expected?

Why is there a variation from the actual numbers specified? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274089,274089#msg-274089 From michael.salmon at ericsson.com Sun May 7 10:16:59 2017 From: michael.salmon at ericsson.com (Michael Salmon) Date: Sun, 7 May 2017 10:16:59 +0000 Subject: Unix domain socket and restart Message-ID: <5E0320F6-988F-4350-AA06-2BFFF9760A1F@ericsson.com> I started testing using a Unix domain socket so that I could have a way to send a message when my site was down for maintenance, but I ran into a problem. When I restarted nginx it complained that the sockets already existed and wouldn't start. Is this the expected behaviour? /Michael Salmon SE KI34 06 341C (OLC: 9FFVCX34+67Q8) +46 722 184 909 From smart.imran003 at gmail.com Sun May 7 10:18:03 2017 From: smart.imran003 at gmail.com (Syed Imran) Date: Sun, 7 May 2017 15:48:03 +0530 Subject: Unix domain socket and restart In-Reply-To: <5E0320F6-988F-4350-AA06-2BFFF9760A1F@ericsson.com> References: <5E0320F6-988F-4350-AA06-2BFFF9760A1F@ericsson.com> Message-ID: Test mail, On Sun, May 7, 2017 at 3:46 PM, Michael Salmon wrote: > I started testing using a Unix domain socket so that I could have a way to send > a message when my site was down for maintenance, but I ran into a problem. > When I restarted nginx it complained that the sockets already existed and > wouldn't start. Is this the expected behaviour? > > /Michael Salmon > SE KI34 06 341C (OLC: 9FFVCX34+67Q8) > +46 722 184 909 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From smart.imran003 at gmail.com Sun May 7 10:20:07 2017 From: smart.imran003 at gmail.com (Syed Imran) Date: Sun, 7 May 2017 15:50:07 +0530 Subject: Docker client gets 405 error in nginx during docker push/pull Message-ID: HI All, Below is my issue, I have contacted artifactory support already and no much help from them, if someone can help me with this, will be of great help. Described my issue here. https://github.com/docker/distribution/issues/2266 I have already tried to increase the nginx proxy timeout from default value 90 to 4000sec. Now i have started getting the below error for that. (500 error) in Jenkins *+ docker push 10.39.228.151:9000/controller-platform:1.3.1-int.4215 * *The push refers to a repository [10.39.228.151:9000/controller-platform ]* *Error: Status 500 trying to push repository controller-platform: "\r\n500 Internal Server Error\r\n\r\n

500 Internal Server Error
nginx/1.9.15
\r\n\r\n\r\n"* Snippet from nginx error is as below, *2017/04/26 20:18:22 [error] 7#7: *34966 open() "/etc/nginx/html/v1/_ping" failed (2: No such file or directory), client: 10.40.210.70, server: 10.39.228.151, request: "GET /v1/_ping HTTP/1.1", host: "10.39.228.151:9000 "* *Thanks,* *Syed* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sun May 7 12:24:43 2017 From: nginx-forum at forum.nginx.org (S.A.N) Date: Sun, 07 May 2017 08:24:43 -0400 Subject: Unix domain socket and restart In-Reply-To: <5E0320F6-988F-4350-AA06-2BFFF9760A1F@ericsson.com> References: <5E0320F6-988F-4350-AA06-2BFFF9760A1F@ericsson.com> Message-ID: <90f2e20ff501f1a6dfd413df18d83921.NginxMailingListEnglish@forum.nginx.org> Michael Salmon Wrote: ------------------------------------------------------- > I started testing using a Unix domain socket so that I could a way to > send a message when my site was down for maintenance but I ran into a > problem. When I restarted nginx it complained that the sockets already > existed and wouldn't start. Is this the expected behaviour? No, is bug. 
Here is the fix: http://hg.nginx.org/pkg-oss/rev/fa1476eab346 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274090,274093#msg-274093 From r at roze.lv Sun May 7 12:47:52 2017 From: r at roze.lv (Reinis Rozitis) Date: Sun, 7 May 2017 15:47:52 +0300 Subject: Docker client gets 405 error in nginx during docker push/pull In-Reply-To: References: Message-ID: <000001d2c730$24213780$6c63a680$@roze.lv> > 2017/04/26 20:18:22 [error] 7#7: *34966 open() "/etc/nginx/html/v1/_ping" failed (2: No such file or directory), client: 10.40.210.70, server: 10.39.228.151, request: "GET /v1/_ping HTTP/1.1", host: "10.39.228.151:9000" I'm not familiar with the software stack you're using, but just looking at the error and your nginx configuration, there is actually nothing that handles the '/v1/_ping' request, which then returns the default nginx 404 page (which might or might not result in the situation (internal error) you are seeing). Your configuration's server blocks have: rewrite ^/(v2)/(.*) /artifactory/api/docker/docker-candidate-release/$1/$2; which could indicate a version mismatch between the backend software. You could also try adding 'v1' to the rewrite (it depends on whether the /artifactory location can process 'v1'): rewrite ^/(v1|v2)/(.*) /artifactory/api/docker/docker-candidate-release/$1/$2; But in general I would suggest the same as the people in the docker issue told you: contact/consult the Artifactory support/devs. rr
I love Nginx but for months I've been trying to tackle a very strange issue where fastcgi is caching a blank page when monitoring tools like Monitis or Uptime Robot run keyword-based uptime monitors for the root URL only. In this case the site is http://musikandfilm.com. All child pages do not experience the blank page issue. It only occurs for http://musikandfilm.com. (However, child pages do sometimes intermittently get blank pages cached if I leave off the "/" at the end of the URL field in my monitor settings, so I think this may have something to do with the issue.) When I delete the cached page and visit the homepage in my browser, I see fastcgi cache the page correctly. But when I delete the cached page and allow the monitors to hit the site first, a blank home page is cached that looks like the example below, even though I get a 200 response. I believe that the Monitors are requesting the homepage in a certain way that is causing fastcgi to cache the page incorrectly, but I don't know exactly what's happening. Here's an example of what's cached: `^C^@^@^@^@^@^@^@<88>o^OY^@^@^@^@????????xa^OY^@^@^@^@C\R^N^@^@?^@?^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@ KEY: httpGETmusikandfilm.com/ ^A^F^@^A^@*^F^@Content-type: text/html; charset=UTF-8^M ^M` And that's it, there's no body after this point as there would be if correctly cached. I've been chasing this problem for so long I will gladly pay someone to fix this. I'm not even joking! I've checked my NGINX configs a hundred times at this point and followed all the recommendations from RT, NGINX, and elsewhere for NGINX+Wordpress setups, and I have failed thus far. I've spent at least 100 hours researching this, so if anyone is willing to help it would so so appreciated. I'm running Ubuntu 16.04, by the way. 
I'll post my NGINX configs below: ============================================ ======= sites-available/default config: ======= ============================================ #move next 4 lines to /etc/nginx/nginx.conf if you want to use fastcgi_cache across many sites fastcgi_cache_path /var/www/html/nginx-cache levels=1:2 keys_zone=WORDPRESS:100m inactive=60m; #fastcgi_cache_key "$scheme$request_method$http_host$request_uri"; fastcgi_cache_use_stale error timeout invalid_header http_500; fastcgi_ignore_headers Cache-Control Expires Set-Cookie; server { listen 80; listen [::]:80; server_name musikandfilm.com www.musikandfilm.com; access_log /var/log/nginx/musikandfilm.com.access.log; error_log /var/log/nginx/musikandfilm.error.log; root /var/www/html; index index.php; set $skip_cache 0; # POST requests and urls with a query string should always go to PHP if ($request_method = POST) { set $skip_cache 1; } if ($query_string != "") { set $skip_cache 1; } # Don't cache uris containing the following segments if ($request_uri ~* "/wp-admin/|/xmlrpc.php|wp-.*.php|/feed/|index.php|sitemap(_index)?.xml") { set $skip_cache 1; } # Don't use the cache for logged in users or recent commenters if ($http_cookie ~* "comment_author|wordpress_[a-f0-9]+|wp-postpass|wordpress_no_cache|wordpress_logged_in") { set $skip_cache 1; } autoindex off; location ~ /purge(/.*) { fastcgi_cache_purge WORDPRESS "$scheme$request_method$host$1"; } location ~* ^.+\.(flv|pdf|avi|mov|mp3|wmv|m4v|webm|aac|ogg|ogv|svg|svgz|eot|otf|woff|mp4|ttf|css|rss|atom|js|jpg|jpeg|gif|png|ico|zip|tgz|gz|rar|bz2|doc|xls|exe|ppt|tar|mid|midi|wav|bmp|rtf)$ { expires max; log_not_found off; access_log off; } location / { try_files $uri $uri/ /index.php?$args; } location ~ \.php$ { try_files $uri /index.php; include fastcgi_params; include /etc/nginx/snippets/fastcgi-php.conf; fastcgi_pass unix:/run/php/php7.0-fpm.sock; if ($request_method = HEAD) { set $skip_cache 1; } fastcgi_cache_bypass $skip_cache; fastcgi_no_cache 
$skip_cache; fastcgi_cache WORDPRESS; fastcgi_cache_valid 200 60m; } location ~* ^/wp-includes/.*(? Hi As known, the keepalive directive can activate the cache of connections to upstream servers. I know the connection pool is in each worker process. But I'm confused: is the connection number per upstream server, or is it shared across all servers? It's documented at http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive Thanks Xiaofeng Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274098,274098#msg-274098 From francis at daoine.org Mon May 8 18:21:44 2017 From: francis at daoine.org (Francis Daly) Date: Mon, 8 May 2017 19:21:44 +0100 Subject: blank page cached ONLY for homepage URL on Wordpress when using keyword monitoring In-Reply-To: <8aa964ec3bea9b6a42668fed844b9be1.NginxMailingListEnglish@forum.nginx.org> References: <8aa964ec3bea9b6a42668fed844b9be1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170508182144.GX10157@daoine.org> On Mon, May 08, 2017 at 01:59:43AM -0400, seth2958 wrote: Hi there, I do not have an answer for you, but I have some suggestions which might help you identify the problem and solution. > fastcgi is caching a blank page when monitoring tools like > Monitis or Uptime Robot run keyword-based uptime monitors for the root URL > only. Can you use tcpdump or something similar to identify the exact request that is made? (Including all of the headers that are sent.) If it reliably fails, then you should be able to construct an equivalent "curl" command to make the same request, and see it fail yourself.
It might be worth considering moving those three lines outside of all "location" blocks, to match the other similar ones. Good luck with it, f -- Francis Daly francis at daoine.org From DouglasL at westmarine.com Mon May 8 22:17:53 2017 From: DouglasL at westmarine.com (Douglas Landau) Date: Mon, 8 May 2017 15:17:53 -0700 Subject: NGINX stops redirecting Message-ID: For some reason NGINX sometimes stops serving my tomcat pages and starts wanting to serve pages from .../nginx/html/. I don't get it. On Friday, at 13:39, I was happily browsing my XWiki site, as you can see from the NGINX access_log. Then from that log you see no activity until 10:31 this morning, at which time it no longer wants to redirect hits to port 8080 or to /xwiki, it just wants to serve from its own html/ subdir. When I come in this morning, When I go to hit the site NGINX shows me the placeholder page, .../nginx/html/index.html, which starts out "If you can see this page, ..." I can still hit my xwiki at :8080/xwiki. It has not been restarted, altho so what if it had? NGINX has not been restarted, and the config files (conf/nginx.conf, conf.d/tomcat.conf) have not changed. Anybody else seen this? Why did it change behavior, and doe it have anything to do with the "signal process started" messages? Thanks. Sorry if this is already discussed I just now koined the mailing list and will look thru the archives now. 
NGINX Access log, showing a few hits to xwiki at 13:39 on 05/05, then non-redirected hits at 10:31-10:41 AM on 05/08 ------------------ 10.13.107.52 - - [05/May/2017:13:39:39 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.WebHome HTTP/1.1" 200 1671 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [05/May/2017:13:39:41 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.Projects.WebHome HTTP/1.1" 200 1178 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [05/May/2017:13:39:43 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.Projects.Project+A.WebHome HTTP/1.1" 200 4330 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [05/May/2017:13:39:45 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.Projects.Project+A.Design.WebHome HTTP/1.1" 200 6935 
"http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:30:44 -0700] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:30:45 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "http://dwswiki10/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:31:27 -0700] "GET /xwiki HTTP/1.1" 404 571 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:31:44 -0700] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:31:44 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "https://dwswiki10/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:41:26 -0700] "GET / HTTP/1.1" 200 612 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" 10.13.107.52 - - [08/May/2017:10:41:26 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" 10.13.107.52 - - [08/May/2017:10:41:29 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" 10.13.107.52 - - 
[08/May/2017:10:41:29 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" NGINX error log, showing some " signal process started" messages: -------------------------------- 2017/05/05 00:27:05 [error] 14431#0: *6148 open() "/data/nginx/html/servlet/admin" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /servlet/admin?category=server&method=listAll&Authorization=Digest+username%3D%22admin%22%2C+response%3D%22ae9f86d6beaa3f9ecb9a5b7e072a4138%22%2C+nonce%3D%222b089ba7985a883ab2eddcd3539a6c94%22%2C+realm%3D%22adminRealm%22%2C+uri%3D%22%2Fservlet%2Fadmin%22&service= HTTP/1.0" 2017/05/05 00:27:05 [error] 14431#0: *6152 open() "/data/nginx/html/servlet/admin" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /servlet/admin?category=server&method=listAll&Authorization=Digest+username%3D%22admin%22%2C+response%3D%22ae9f86d6beaa3f9ecb9a5b7e072a4138%22%2C+nonce%3D%222b089ba7985a883ab2eddcd3539a6c94%22%2C+realm%3D%22adminRealm%22%2C+uri%3D%22%2Fservlet%2Fadmin%22&service= HTTP/1.0" 2017/05/05 00:27:05 [error] 14431#0: *6156 "/data/nginx/html/HTTP1.0/index.html" is not found (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /HTTP1.0/" 2017/05/05 00:27:12 [error] 14431#0: *6443 open() "/data/nginx/html/home.htm" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /home.htm HTTP/1.1", host: "10.13.4.247" 2017/05/05 00:27:16 [error] 14431#0: *6550 open() "/data/nginx/html/spipe" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST /spipe?Source=nessus HTTP/1.0" 2017/05/05 00:27:16 [error] 14431#0: *6552 open() "/data/nginx/html/spipe" failed (2: No such file or directory), client: 10.13.122.42, 
server: localhost, request: "POST /spipe?Source=nessus HTTP/1.0" 2017/05/05 00:27:20 [error] 14431#0: *6661 open() "/data/nginx/html/logout" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /logout HTTP/1.0" 2017/05/05 00:27:20 [error] 14431#0: *6668 open() "/data/nginx/html/logout" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /logout HTTP/1.0" 2017/05/05 00:27:22 [error] 14431#0: *6752 open() "/data/nginx/html/yYWiy2DH.asp" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET //yYWiy2DH.asp HTTP/1.0" 2017/05/05 00:27:22 [error] 14431#0: *6757 open() "/data/nginx/html/tnq0ObbP.asp" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET //tnq0ObbP.asp HTTP/1.0" 2017/05/05 00:27:25 [error] 14431#0: *6882 open() "/data/nginx/html/content/YmEIoHwZQ6.mp3" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /content/YmEIoHwZQ6.mp3 HTTP/1.0" 2017/05/05 00:27:25 [error] 14431#0: *6885 open() "/data/nginx/html/content/YmEIoHwZQ6.mp3" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /content/YmEIoHwZQ6.mp3 HTTP/1.0" 2017/05/05 00:27:46 [error] 14431#0: *7313 open() "/data/nginx/html/iControl/iControlPortal.cgi" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST http://127.0.0.1/iControl/iControlPortal.cgi HTTP/1.1", host: "dwswiki10.westmarine.net" 2017/05/05 00:27:46 [error] 14431#0: *7315 open() "/data/nginx/html/iControl/iControlPortal.cgi" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST http://127.0.0.1/iControl/iControlPortal.cgi HTTP/1.1", host: "dwswiki10.westmarine.net" 2017/05/05 00:27:47 [error] 14431#0: *7342 open() "/data/nginx/html/.anydomain.test" failed (2: No such file or directory), client: 10.13.122.42, server: 
From DouglasL at westmarine.com Mon May 8 22:25:57 2017 From: DouglasL at westmarine.com (Douglas Landau) Date: Mon, 8 May 2017 15:25:57 -0700 Subject: NGINX stops redirecting Message-ID: This is CentOS 7. My /etc/hosts file: [root at dwswiki10 nginx]# cat /etc/hosts 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 #::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 #127.0.1.1 dwswiki10.westmarine.net dwswiki10 10.13.4.247 dwswiki10.westmarine.net dwswiki10 I must confess I don't understand the 127.0.1.1 line that I commented out. Thanks Doug BTW Are my nginx/conf/nginx.conf and nginx/conf.d/tomcat.conf "server" sections both correct? One says localhost where the other has a fully qualified hostname, but also they both have two "server" sections.
Was I supposed to stop using / comment out the two server sections in conf/nginx.conf when I added conf.d/tomcat.conf? My conf/nginx.conf: [root at dwswiki10 conf]# more nginx.conf #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # server { listen 443 ssl; server_name localhost; ssl_certificate /data/nginx/keys/dwswiki10.westmarine.net.pem; ssl_certificate_key /data/nginx/keys/dwswiki10.westmarine.net.key; ssl_session_cache shared:SSL:1m; 
ssl_session_timeout 5m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { root html; index index.html index.htm; } } include ../conf.d/*.conf; } My nginx/conf.d/tomcat.conf: [root at dwswiki10 nginx]# cat conf.d/tomcat.conf server { listen 80; server_name dwswiki10.westmarine.net; # Root to the XWiki application root /data/tomcat/webapps/xwiki; location / { #All "root" requests will have /xwiki appended AND redirected to mydomain.com again rewrite ^ $scheme://$server_name/xwiki$request_uri? permanent; } location ^~ /xwiki { # If path starts with /xwiki - then redirect to backend: XWiki application in Tomcat # Read more about proxy_pass: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass proxy_pass http://localhost:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Proto $scheme; } } server { listen 443; server_name dwswiki10.westmarine.net; # Root to the XWiki application root /data/tomcat/webapps/xwiki; location / { #All "root" requests will have /xwiki appended AND redirected to mydomain.com again rewrite ^ $scheme://$server_name/xwiki$request_uri? permanent; } location ^~ /xwiki { # If path starts with /xwiki - then redirect to backend: XWiki application in Tomcat # Read more about proxy_pass: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass proxy_pass http://localhost:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Proto $scheme; } } From: Douglas Landau Sent: Monday, May 08, 2017 3:18 PM To: 'nginx at nginx.org' Subject: NGINX stops redirecting For some reason NGINX sometimes stops serving my tomcat pages and starts wanting to serve pages from .../nginx/html/. I don't get it. 
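[One plausible explanation for the fallback, suggested by the host: "dwswiki10" entries in the error log further down: requests that use the short hostname do not match server_name dwswiki10.westmarine.net, so nginx falls back to the first server block defined for that port — here the localhost block in conf/nginx.conf, which serves html/. A minimal sketch of making the match explicit; adding the short name and the default_server marker is an assumption, not part of the original configs:]

```nginx
# Hypothetical sketch, not the poster's actual tomcat.conf.
# nginx selects a server block by matching the request's Host header
# against server_name; if nothing matches, the first block listed for
# that port (or the one marked default_server) handles the request.
server {
    listen 80 default_server;
    server_name dwswiki10.westmarine.net dwswiki10;  # cover the short name too

    location / {
        rewrite ^ $scheme://$server_name/xwiki$request_uri? permanent;
    }

    location ^~ /xwiki {
        proxy_pass http://localhost:8080;
        proxy_set_header Host $http_host;
    }
}
```

[With something like this in place, a request for http://dwswiki10/ would be redirected to /xwiki rather than handled by the placeholder server.]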
On Friday, at 13:39, I was happily browsing my XWiki site, as you can see from the NGINX access_log. Then from that log you see no activity until 10:31 this morning, at which time it no longer wants to redirect hits to port 8080 or to /xwiki; it just wants to serve from its own html/ subdir. When I came in this morning and went to hit the site, NGINX showed me the placeholder page, .../nginx/html/index.html, which starts out "If you can see this page, ..." I can still hit my xwiki at :8080/xwiki. It has not been restarted, although so what if it had? NGINX has not been restarted, and the config files (conf/nginx.conf, conf.d/tomcat.conf) have not changed. Anybody else seen this? Why did it change behavior, and does it have anything to do with the "signal process started" messages? Thanks. Sorry if this has already been discussed; I just now joined the mailing list and will look through the archives now. NGINX Access log, showing a few hits to xwiki at 13:39 on 05/05, then non-redirected hits at 10:31-10:41 AM on 05/08 ------------------ 10.13.107.52 - - [05/May/2017:13:39:39 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.WebHome HTTP/1.1" 200 1671 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [05/May/2017:13:39:41 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.Projects.WebHome HTTP/1.1" 200 1178 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko)
Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [05/May/2017:13:39:43 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.Projects.Project+A.WebHome HTTP/1.1" 200 4330 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [05/May/2017:13:39:45 -0700] "GET /xwiki/bin/get/Portfolio%20Management/Project%20Budget/WebHome?outputSyntax=plain&sheet=XWiki.DocumentTree&showAttachments=false&showTranslations=false&&data=children&id=document%3Axwiki%3APortfolio+Management.Projects.Project+A.Design.WebHome HTTP/1.1" 200 6935 "http://dwswiki10.westmarine.net/xwiki/bin/view/Portfolio%20Management/Project%20Budget/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:30:44 -0700] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:30:45 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "http://dwswiki10/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:31:27 -0700] "GET /xwiki HTTP/1.1" 404 571 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:31:44 -0700] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:31:44 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "https://dwswiki10/" "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 
(KHTML, like Gecko) Chrome/57.0.2987.133 Safari/537.36" 10.13.107.52 - - [08/May/2017:10:41:26 -0700] "GET / HTTP/1.1" 200 612 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" 10.13.107.52 - - [08/May/2017:10:41:26 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" 10.13.107.52 - - [08/May/2017:10:41:29 -0700] "GET / HTTP/1.1" 304 0 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" 10.13.107.52 - - [08/May/2017:10:41:29 -0700] "GET /favicon.ico HTTP/1.1" 404 571 "-" "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)" NGINX error log, showing some " signal process started" messages: -------------------------------- 2017/05/05 00:27:05 [error] 14431#0: *6148 open() "/data/nginx/html/servlet/admin" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /servlet/admin?category=server&method=listAll&Authorization=Digest+username%3D%22admin%22%2C+response%3D%22ae9f86d6beaa3f9ecb9a5b7e072a4138%22%2C+nonce%3D%222b089ba7985a883ab2eddcd3539a6c94%22%2C+realm%3D%22adminRealm%22%2C+uri%3D%22%2Fservlet%2Fadmin%22&service= HTTP/1.0" 2017/05/05 00:27:05 [error] 14431#0: *6152 open() "/data/nginx/html/servlet/admin" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET 
/servlet/admin?category=server&method=listAll&Authorization=Digest+username%3D%22admin%22%2C+response%3D%22ae9f86d6beaa3f9ecb9a5b7e072a4138%22%2C+nonce%3D%222b089ba7985a883ab2eddcd3539a6c94%22%2C+realm%3D%22adminRealm%22%2C+uri%3D%22%2Fservlet%2Fadmin%22&service= HTTP/1.0" 2017/05/05 00:27:05 [error] 14431#0: *6156 "/data/nginx/html/HTTP1.0/index.html" is not found (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /HTTP1.0/" 2017/05/05 00:27:12 [error] 14431#0: *6443 open() "/data/nginx/html/home.htm" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /home.htm HTTP/1.1", host: "10.13.4.247" 2017/05/05 00:27:16 [error] 14431#0: *6550 open() "/data/nginx/html/spipe" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST /spipe?Source=nessus HTTP/1.0" 2017/05/05 00:27:16 [error] 14431#0: *6552 open() "/data/nginx/html/spipe" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST /spipe?Source=nessus HTTP/1.0" 2017/05/05 00:27:20 [error] 14431#0: *6661 open() "/data/nginx/html/logout" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /logout HTTP/1.0" 2017/05/05 00:27:20 [error] 14431#0: *6668 open() "/data/nginx/html/logout" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /logout HTTP/1.0" 2017/05/05 00:27:22 [error] 14431#0: *6752 open() "/data/nginx/html/yYWiy2DH.asp" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET //yYWiy2DH.asp HTTP/1.0" 2017/05/05 00:27:22 [error] 14431#0: *6757 open() "/data/nginx/html/tnq0ObbP.asp" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET //tnq0ObbP.asp HTTP/1.0" 2017/05/05 00:27:25 [error] 14431#0: *6882 open() "/data/nginx/html/content/YmEIoHwZQ6.mp3" failed (2: No such file or directory), client: 
10.13.122.42, server: localhost, request: "GET /content/YmEIoHwZQ6.mp3 HTTP/1.0" 2017/05/05 00:27:25 [error] 14431#0: *6885 open() "/data/nginx/html/content/YmEIoHwZQ6.mp3" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /content/YmEIoHwZQ6.mp3 HTTP/1.0" 2017/05/05 00:27:46 [error] 14431#0: *7313 open() "/data/nginx/html/iControl/iControlPortal.cgi" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST http://127.0.0.1/iControl/iControlPortal.cgi HTTP/1.1", host: "dwswiki10.westmarine.net" 2017/05/05 00:27:46 [error] 14431#0: *7315 open() "/data/nginx/html/iControl/iControlPortal.cgi" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "POST http://127.0.0.1/iControl/iControlPortal.cgi HTTP/1.1", host: "dwswiki10.westmarine.net" 2017/05/05 00:27:47 [error] 14431#0: *7342 open() "/data/nginx/html/.anydomain.test" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /.anydomain.test HTTP/1.0" 2017/05/05 00:27:47 [error] 14431#0: *7343 open() "/data/nginx/html/.anydomain.test" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /.anydomain.test HTTP/1.0" 2017/05/05 00:27:47 [error] 14431#0: *7345 open() "/data/nginx/html/index.jsp" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /index.jsp HTTP/1.1", host: "sjfklsjfkldfjklsdfjdlksjfdsljk.foo." 2017/05/05 00:27:48 [error] 14431#0: *7351 open() "/data/nginx/html/index.jsp" failed (2: No such file or directory), client: 10.13.122.42, server: localhost, request: "GET /index.jsp HTTP/1.1", host: "sjfklsjfkldfjklsdfjdlksjfdsljk.foo." 
2017/05/05 00:27:50 [crit] 14431#0: *7396 SSL_do_handshake() failed (SSL: error:14094085:SSL routines:SSL3_READ_BYTES:ccs received early) while SSL handshaking, client: 10.13.122.42, server: 0.0.0.0:443 2017/05/05 00:27:50 [crit] 14431#0: *7397 SSL_do_handshake() failed (SSL: error:05066066:Diffie-Hellman routines:COMPUTE_KEY:invalid public key error:1408B005:SSL routines:SSL3_GET_CLIENT_KEY_EXCHANGE:DH lib) while SSL handshaking, client: 10.13.122.42, server: 0.0.0.0:443 2017/05/05 10:40:33 [notice] 20325#0: signal process started 2017/05/05 10:41:12 [notice] 20357#0: signal process started 2017/05/05 10:43:18 [notice] 20416#0: signal process started 2017/05/05 12:59:34 [notice] 23321#0: signal process started 2017/05/08 10:30:45 [error] 23325#0: *132 open() "/data/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.13.107.52, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "dwswiki10", referrer: "http://dwswiki10/" 2017/05/08 10:31:27 [error] 23325#0: *132 open() "/data/nginx/html/xwiki" failed (2: No such file or directory), client: 10.13.107.52, server: localhost, request: "GET /xwiki HTTP/1.1", host: "dwswiki10" 2017/05/08 10:31:44 [error] 23325#0: *140 open() "/data/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.13.107.52, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "dwswiki10", referrer: "https://dwswiki10/" 2017/05/08 10:40:43 [notice] 16545#0: signal process started 2017/05/08 10:41:26 [error] 16566#0: *1 open() "/data/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.13.107.52, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "dwswiki10" 2017/05/08 10:41:29 [error] 16566#0: *1 open() "/data/nginx/html/favicon.ico" failed (2: No such file or directory), client: 10.13.107.52, server: localhost, request: "GET /favicon.ico HTTP/1.1", host: "dwswiki10" 2017/05/08 10:42:26 [notice] 16651#0: signal process started My conf/nginx.conf: [root at 
dwswiki10 conf]# more nginx.conf #user nobody; worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; #log_format main '$remote_addr - $remote_user [$time_local] "$request" ' # '$status $body_bytes_sent "$http_referer" ' # '"$http_user_agent" "$http_x_forwarded_for"'; #access_log logs/access.log main; sendfile on; #tcp_nopush on; #keepalive_timeout 0; keepalive_timeout 65; #gzip on; server { listen 80; server_name localhost; #charset koi8-r; #access_log logs/host.access.log main; location / { root html; index index.html index.htm; } #error_page 404 /404.html; # redirect server error pages to the static page /50x.html # error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } # proxy the PHP scripts to Apache listening on 127.0.0.1:80 # #location ~ \.php$ { # proxy_pass http://127.0.0.1; #} # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000 # #location ~ \.php$ { # root html; # fastcgi_pass 127.0.0.1:9000; # fastcgi_index index.php; # fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; # include fastcgi_params; #} # deny access to .htaccess files, if Apache's document root # concurs with nginx's one # #location ~ /\.ht { # deny all; #} } # another virtual host using mix of IP-, name-, and port-based configuration # #server { # listen 8000; # listen somename:8080; # server_name somename alias another.alias; # location / { # root html; # index index.html index.htm; # } #} # HTTPS server # server { listen 443 ssl; server_name localhost; ssl_certificate /data/nginx/keys/dwswiki10.westmarine.net.pem; ssl_certificate_key /data/nginx/keys/dwswiki10.westmarine.net.key; ssl_session_cache shared:SSL:1m; ssl_session_timeout 5m; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; location / { root html; index index.html index.htm; } } include 
../conf.d/*.conf; } My nginx/conf.d/tomcat.conf: [root at dwswiki10 nginx]# cat conf.d/tomcat.conf server { listen 80; server_name dwswiki10.westmarine.net; # Root to the XWiki application root /data/tomcat/webapps/xwiki; location / { #All "root" requests will have /xwiki appended AND redirected to mydomain.com again rewrite ^ $scheme://$server_name/xwiki$request_uri? permanent; } location ^~ /xwiki { # If path starts with /xwiki - then redirect to backend: XWiki application in Tomcat # Read more about proxy_pass: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass proxy_pass http://localhost:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Proto $scheme; } } server { listen 443; server_name dwswiki10.westmarine.net; # Root to the XWiki application root /data/tomcat/webapps/xwiki; location / { #All "root" requests will have /xwiki appended AND redirected to mydomain.com again rewrite ^ $scheme://$server_name/xwiki$request_uri? permanent; } location ^~ /xwiki { # If path starts with /xwiki - then redirect to backend: XWiki application in Tomcat # Read more about proxy_pass: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass proxy_pass http://localhost:8080; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header Host $http_host; proxy_set_header X-Forwarded-Proto $scheme; } } The information contained in this transmission may contain West Marine proprietary, confidential and/or privileged information. It is intended only for the use of the person(s) named above. If you are not the intended recipient, you are hereby notified that any review, dissemination, distribution or duplication of this communication is strictly prohibited. 
If you are not the intended recipient, please contact the sender by reply email and destroy all copies of the original message. To reply to our email administrator directly, please send an email to netadmin at westmarine.com. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-users-list at whyaskwhy.org Mon May 8 23:17:02 2017 From: nginx-users-list at whyaskwhy.org (deoren) Date: Mon, 8 May 2017 18:17:02 -0500 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? Message-ID: Hi, Thanks for reading this. My apologies if this has been answered before, but after much reading (official docs, mailing list discussions, etc.) I'm still not completely clear on whether this is supported. I know it's a hangup on my part, but I've not managed to get past the stumbling point yet. #1) Have a web app answer to https://subdomain.example.com/ with app related urls like '/login', '/issues', and requests for static resources with URL paths like '/static/styles.css'. The app runs on localhost at http://127.0.0.1:3000 and is proxied by nginx with a direct mapping of http://subdomain.example.com/ to http://127.0.0.1:3000/ and it works well. As indicated, this setup does not use a sub-URI, but treats '/' as its root URL path. #2) I'd like to move the web application to https://subdomain.example.com/sub-uri/ by setting up location block like so (spacing condensed for display purposes): location /flask-demo { root /var/www/passenger-python-flask-demo; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; proxy_pass http://127.0.0.1:3000/; } The trailing slash was added in an attempt to map /flask-demo/SOMETHING to /SOMETHING in the application's point of view. 
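[For reference, the mapping that the trailing slash on proxy_pass produces can be sketched like this; the paths are illustrative:]

```nginx
# Sketch of proxy_pass URI replacement (hypothetical paths).
# Because the proxy_pass URL ends in a URI part ("/"), nginx replaces
# the matched location prefix with that URI before contacting the
# upstream; a trailing slash on the location keeps the join clean:
#   /flask-demo/login        -> http://127.0.0.1:3000/login
#   /flask-demo/static/a.css -> http://127.0.0.1:3000/static/a.css
location /flask-demo/ {
    proxy_pass http://127.0.0.1:3000/;
}
```

[With `location /flask-demo` (no trailing slash), a request for /flask-demo/login is passed upstream as //login, a double-slash path that some frameworks reject.]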
That works well for a test web app where everything is contained in a single file, but when the static assets are referenced by the HTML output the user's browser attempts to pull static content from '/' instead of '/flask-demo/'. I've found that for this and other web applications (demo and production) that I've tested thus far, you can configure the base URL in the web application itself. Does nginx support redirecting those requests for static resources to the associated sub-URI without modifying the web application? If it were only one web application I could set up location blocks for specific patterns, but if I plan on running multiple web applications on a single FQDN (perhaps even different instances of the same web app), each in a separate sub-URI, those web applications might all make requests to '/static/styles.css' based on their original configuration. I'm hoping there is a way to isolate each web application based on the initial location block match, thereby catching follow-up requests for static resources related to the first request and prepending the sub-URI. Thus a request for '/static/styles.css' becomes '/flask-demo/static/styles.css' if I visit '/flask-demo', and if I visit '/other-app' the static request becomes '/other-app/static/styles.css', all without modifying the web application to know it is being run from a sub-URI. I assume the answer is "no, this is not supported", but I wanted to ask just to make sure I'm not overlooking something. Francis Daly's remarks on the "URL-Rewriting not working" thread that I've quoted from below seem to suggest it might be possible, but probably isn't worth the headache: > Note that if you want to reverse-proxy a back-end web > service at a different part of the url hierarchy to > where it believes it is installed, in general you need > the web service to help.
> > That is, if you want the back-end / to correspond to > the front-end /x/, then if the back-end ever links to > something like /a, you will need that to become > translated to /x/a before it leaves the front-end. In > general, the front-end cannot do that translation. > > So you may find it easier to configure the back-end to > be (or to act as if it is) installed below /x/ directly. > > Otherwise things can go wrong. I found the 'proxy_redirect' directive, but it doesn't appear to do what I'm looking for. Instead, it appears to be designed specifically to do things like prevent having the client access http://127.0.0.1:3000/ instead of http://127.0.0.1:80/ (as is shown in my example). I've used nginx for years, but only in very basic configurations. This is something new to me and I'm struggling to wrap my head around it. Thank you for reading this and any advice you can offer. From nginx-forum at forum.nginx.org Mon May 8 23:17:46 2017 From: nginx-forum at forum.nginx.org (seth2958) Date: Mon, 08 May 2017 19:17:46 -0400 Subject: blank page cached ONLY for homepage URL on Wordpress when using keyword monitoring In-Reply-To: <20170508182144.GX10157@daoine.org> References: <20170508182144.GX10157@daoine.org> Message-ID: <327a9c4f83c80fc55bcc8d79b4564dd7.NginxMailingListEnglish@forum.nginx.org> Thank you for the insights, Francis!! It's too early to tell, but I think moving the "if" statement outside the location block may have done the trick. I also changed the statement so that only GET requests are cached, like so: if ($request_method != GET) { set $skip_cache 1; } I haven't pinpointed the exact cause yet, but initial test results are promising. I set up three times as many keyword monitors and have not seen an empty cached page yet after deleting the existing cached page and letting the cached page rebuild. Only the complete page is being cached.
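[For context, a $skip_cache variable like the one described above is typically wired into the FastCGI cache directives roughly as follows — a sketch only; the cache zone name, key, and PHP-FPM socket path are assumptions, not taken from the original post:]

```nginx
# Hypothetical skip-cache wiring for a WordPress/FastCGI setup.
http {
    fastcgi_cache_path /var/cache/nginx keys_zone=WORDPRESS:10m;
    fastcgi_cache_key "$scheme$request_method$host$request_uri";

    server {
        set $skip_cache 0;

        # at server level (outside any location block):
        # skip the cache for anything that is not a plain GET
        if ($request_method != GET) {
            set $skip_cache 1;
        }

        location ~ \.php$ {
            fastcgi_pass unix:/run/php-fpm.sock;
            fastcgi_cache WORDPRESS;
            fastcgi_cache_valid 200 10m;
            fastcgi_cache_bypass $skip_cache;  # serve a fresh response when set
            fastcgi_no_cache $skip_cache;      # and do not store that response
        }
    }
}
```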
Before this change it was pretty easy for me to break the cache, but now I've been unable to replicate the problem. My theory is that the monitors are making some sort of non-GET (but not HEAD) requests which NGINX is for some reason treating as GET. So hopefully moving the if statement and changing the logic will be the solution here. If anyone else is able to confirm the exact issue, I'd still really like to know. Oh and another possible effect of these changes...page load time has decreased by at least 300 milliseconds! Thanks again!! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274097,274103#msg-274103 From michael.salmon at ericsson.com Tue May 9 08:33:45 2017 From: michael.salmon at ericsson.com (Michael Salmon) Date: Tue, 9 May 2017 08:33:45 +0000 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? In-Reply-To: References: Message-ID: <1EA8114C-5A27-4FB9-AD64-1B35070143FF@ericsson.com> Yes you can but you need to rewrite the text as it passes through nginx using a sub_filter. It isn't that hard but it is tedious. /Michael Salmon > On 9 May 2017, at 01:18, deoren wrote: > > Hi, > > Thanks for reading this. > > My apologies if this has been answered before, but after much reading (official docs, mailing list discussions, etc.) I'm still not completely clear on whether this is supported. I know it's a hangup on my part, but I've not managed to get past the stumbling point yet. > > #1) Have a web app answer to https://subdomain.example.com/ with app related urls like '/login', '/issues', and requests for static resources with URL paths like '/static/styles.css'. The app runs on localhost at http://127.0.0.1:3000 and is proxied by nginx with a direct mapping of http://subdomain.example.com/ to http://127.0.0.1:3000/ and it works well. As indicated, this setup does not use a sub-URI, but treats '/' as its root URL path. 
> > #2) I'd like to move the web application to https://subdomain.example.com/sub-uri/ by setting up location block like so (spacing condensed for display purposes): > > location /flask-demo { > root /var/www/passenger-python-flask-demo; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Host $host; > proxy_pass http://127.0.0.1:3000/; > } > > The trailing slash was added in an attempt to map /flask-demo/SOMETHING to /SOMETHING in the application's point of view. > > That works well for a test web app where everything is contained in a single file, but when the static assets are referenced by the HTML output the user's browser attempts to pull static content from '/' instead of '/flask-demo/'. > > I've found that for this and other web applications (demo and production) that I've tested thus you can configure the base URL in the web application itself. > > Does nginx support redirecting those requests for static resources requests to the associated sub-URI without modifying the web application? If it was only one web application I could setup location blocks for specific patterns, but if I plan on running multiple web applications on a single FQDN (perhaps even different instances of the same web app), each in a separate sub-URI, those web applications might all make requests to '/static/styles.css' based on their original configuration. > > I'm hoping there is a way to isolate each web application based on the initial location block match, thereby catching follow-up requests for static resources related to the first request and prepend the sub-URI. Thus a request for '/static/styles.css' becomes '/flask-demo/static/styles.css' if I visit '/flask-demo' and if I visit '/other-app' the static request becomes '/other-app/static/styles.css', all without modifying the web application to know it is being run from a sub-URI. 
> > I assume the answer is "no, this is not supported", but I wanted to ask just to make sure I'm not overlooking something. Francis Daly's remarks on the "URL-Rewriting not working" thread that I've quoted from below seems to suggest it might be possible, but probably isn't worth the headache: > > > Note that if you want to reverse-proxy a back-end web > > service at a different part of the url hierarchy to > > where it believes it is installed, in general you need > > the web service to help. > > > > That is, if you want the back-end / to correspond to > > the front-end /x/, then if the back-end ever links to > > something like /a, you will need that to become > > translated to /x/a before it leaves the front-end. In > > general, the front-end cannot do that translation. > > > > So you may find it easier to configure the back-end to > > be (or to act as if it is) installed below /x/ directly. > > > > Otherwise things can go wrong. > > I found the 'proxy_redirect' directive, but it doesn't appear to do what I'm looking for. Instead, it appears to be designed specifically to do things like prevent having the client access http://127.0.0.1:3000/ instead of http://127.0.0.1:80/ (as is shown in my example). > > I've used nginx for years, but only in very basic configurations. This is something new to me and I'm struggling to wrap my head around it. Thank you for reading this and any advice you can offer. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 5315 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Tue May 9 09:33:17 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 09 May 2017 05:33:17 -0400 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? 
In-Reply-To: <1EA8114C-5A27-4FB9-AD64-1B35070143FF@ericsson.com> References: <1EA8114C-5A27-4FB9-AD64-1B35070143FF@ericsson.com> Message-ID: location /flask-demo/ { root /var/www/passenger-python-flask-demo; proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Host $host; rewrite /flask-demo/([^/]+) /$1 break; proxy_pass http://127.0.0.1:3000/; } And then add additional location blocks/rewrites to handle static content. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274104,274106#msg-274106 From francis at daoine.org Tue May 9 12:37:19 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 9 May 2017 13:37:19 +0100 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? In-Reply-To: References: Message-ID: <20170509123719.GY10157@daoine.org> On Mon, May 08, 2017 at 06:17:02PM -0500, deoren wrote: Hi there, > I'm still > not completely clear on whether this is supported. nginx does the nginx side as much as it can. Whether it all works is almost entirely down to the web app. You cannot reliably reverse-proxy an arbitrary web app to a different place in the url hierarchy than that at which it believes it is installed. You can unreliably do it, if that is good enough for you. > #2) I'd like to move the web application to > https://subdomain.example.com/sub-uri/ by setting up location block > like so (spacing condensed for display purposes): > > location /flask-demo { > root /var/www/passenger-python-flask-demo; > proxy_set_header X-Forwarded-Proto $scheme; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Host $host; > proxy_pass http://127.0.0.1:3000/; > } > > The trailing slash was added in an attempt to map > /flask-demo/SOMETHING to /SOMETHING in the application's point of > view. 
Just as a small point -- it is usually worth making sure that the numbers of / characters at the end of the "proxy_pass" argument and at the end of the "location" prefix value, are the same. > That works well for a test web app where everything is contained in > a single file, but when the static assets are referenced by the HTML > output the user's browser attempts to pull static content from '/' > instead of '/flask-demo/'. I would suggest that the HTML output of the web app is wrong. It should never link to /static/styles.css, if it wants to be reverse-proxied. It should link to static/styles.css or to ../static/styles.css or to ../../static/styles.css or to whatever value is appropriate relative to the current resource. If the web app did that, then reverse-proxying would probably Just Work. "Not wanting to be reverse-proxied" is a valid choice for a web app. And you can choose not to use ones that make that choice. > I'm hoping there is a way to isolate each web application based on > the initial location block match, thereby catching follow-up > requests for static resources related to the first request and > prepend the sub-URI. In http, every request is independent of every other request, unless you do something to try to tie them together (such as with a Cookie). Stock nginx will receive a request for /static/styles.css, and will handle it according to its configuration. If you want your nginx to receive a request for /static/styles.css, and to do some processing based on the Referer: header or based on a Cookie: header or something else, in order that the final processed request will be /flask-demo/static/styles.css, then you will probably have to write some code in one of the embedded languages in nginx.conf. I'm not aware of code like that existing. > I assume the answer is "no, this is not supported", but I wanted to If you write the code, you can make it work (unreliably) for you. It cannot work in stock nginx just using nginx.conf normal directives. 
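For what it's worth, the unreliable front-end translation described above is usually attempted with sub_filter (this is a sketch only, assuming the app is proxied under /flask-demo/ and that nginx was built with the sub module; only the enumerated patterns in text/html responses are rewritten):

```nginx
location /flask-demo/ {
    proxy_pass http://127.0.0.1:3000/;
    # Ask the backend for uncompressed output, otherwise sub_filter
    # cannot see the body it is supposed to rewrite.
    proxy_set_header Accept-Encoding "";
    sub_filter_once off;
    sub_filter 'href="/' 'href="/flask-demo/';
    sub_filter 'src="/' 'src="/flask-demo/';
}
```

Anything the filter does not match -- URLs assembled in JavaScript, redirects, JSON bodies -- still escapes to '/', which is exactly the unreliability described above.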
> ask just to make sure I'm not overlooking something. Francis Daly's > remarks on the "URL-Rewriting not working" thread that I've quoted > from below seems to suggest it might be possible, but probably isn't > worth the headache: Your outline above is that nginx should not do the translation in the html sent to the client, but that nginx should interpret the following requests based on something not directly specified in the request line. That is possibly harder than doing the translation in the html. > > That is, if you want the back-end / to correspond to > > the front-end /x/, then if the back-end ever links to > > something like /a, you will need that to become > > translated to /x/a before it leaves the front-end. In > > general, the front-end cannot do that translation. You can try to get the front-end to do that translation; it will also be unreliable and probably inefficient. Basically, any text that the client might interpret as a url that starts with http:// or https:// or / would potentially have to be modified. And possibly some other text as well, depending on the exact contents. > > So you may find it easier to configure the back-end to > > be (or to act as if it is) installed below /x/ directly. That is still true. If you won't do that, and if you have two web apps that each want to be at /, you may be better off having two server{} blocks in nginx with different server_name values. > I found the 'proxy_redirect' directive, but it doesn't appear to do > what I'm looking for. proxy_redirect deals with http headers, not with the body content. > I've used nginx for years, but only in very basic configurations. > This is something new to me and I'm struggling to wrap my head > around it. Thank you for reading this and any advice you can offer. It's a http and url thing, rather than an nginx thing. 
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 9 13:03:13 2017 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 09 May 2017 09:03:13 -0400 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? In-Reply-To: <20170509123719.GY10157@daoine.org> References: <20170509123719.GY10157@daoine.org> Message-ID: <027063960cec3ed3a90950c90f77d01f.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > It cannot work in stock nginx just using nginx.conf normal directives. You can with (many) proper rewrites but this does require a very close eye on the logfiles until you have rewritten all the 40x's, and for the sake of performance eventually have them converted to a sequence of maps. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274104,274109#msg-274109 From peter_booth at me.com Tue May 9 19:10:47 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 09 May 2017 15:10:47 -0400 Subject: blank page cached ONLY for homepage URL on Wordpress when using keyword monitoring In-Reply-To: <327a9c4f83c80fc55bcc8d79b4564dd7.NginxMailingListEnglish@forum.nginx.org> References: <20170508182144.GX10157@daoine.org> <327a9c4f83c80fc55bcc8d79b4564dd7.NginxMailingListEnglish@forum.nginx.org> Message-ID: <02DD34C1-F0BC-4F4F-8F33-A6395CF632BD@me.com> Seth, It's actually very easy to reproduce this issue - from a browser request http://musikandfilm.com/?a=b and you will see it. There are a couple of low level tools that expose some possible issues. If you email me directly I can talk about this in more detail. Try peter underscore booth at me dot com. Sent from my iPhone > On May 8, 2017, at 7:17 PM, seth2958 wrote: > > Thank you for the insights Francis!! > > It's too early to tell, but I think moving the "if" statement outside the > location block may have done the trick. 
I also changed the statement so > that only GET requests are cached like so: > > if ($request_method != GET) { > set $skip_cache 1; > } > > > I haven't pinpointed the exact cause yet, but initial test results are > promising. I set up three times as many keyword monitors and have not seen > an empty cached page yet after deleting the existing cached page and letting > the cached page rebuild. Only the complete page is being cached. Before this > change it was pretty easy for me to break the cache, but now I've been > unable to replicate the problem. My theory is that the monitors are making > some sort of non-GET (but not HEAD) requests which NGINX is for some reason > treating as GET. So hopefully moving the if statement and changing the logic > will be the solution here. > > If anyone else is able to confirm the exact issue, I'd still really like to > know. > > Oh and another possible effect of these changes...page load time has > decreased by at least 300 milliseconds! > > Thanks again!! > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274097,274103#msg-274103 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Tue May 9 19:20:41 2017 From: nginx-forum at forum.nginx.org (seth2958) Date: Tue, 09 May 2017 15:20:41 -0400 Subject: blank page cached ONLY for homepage URL on Wordpress when using keyword monitoring In-Reply-To: <02DD34C1-F0BC-4F4F-8F33-A6395CF632BD@me.com> References: <02DD34C1-F0BC-4F4F-8F33-A6395CF632BD@me.com> Message-ID: <0d14e011039c7edd021975684f54b9e6.NginxMailingListEnglish@forum.nginx.org> Thanks Peter. I just sent you an email from sethleon at gmail dot com. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274097,274112#msg-274112 From francis at daoine.org Tue May 9 20:26:22 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 9 May 2017 21:26:22 +0100 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? In-Reply-To: <027063960cec3ed3a90950c90f77d01f.NginxMailingListEnglish@forum.nginx.org> References: <20170509123719.GY10157@daoine.org> <027063960cec3ed3a90950c90f77d01f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170509202622.GZ10157@daoine.org> On Tue, May 09, 2017 at 09:03:13AM -0400, itpp2012 wrote: > Francis Daly Wrote: Hi there, > > It cannot work in stock nginx just using nginx.conf normal directives. > > You can with (many) proper rewrites but this does require a very close eye > on the logfiles until you have rewritten all the 40x's, and for the sake of > performance eventually have them converted to a sequence of maps. From the original mail: """ If it was only one web application I could setup location blocks for specific patterns, but if I plan on running multiple web applications on a single FQDN (perhaps even different instances of the same web app), each in a separate sub-URI, those web applications might all make requests to '/static/styles.css' based on their original configuration. """
In-Reply-To: <20170509202622.GZ10157@daoine.org> References: <20170509202622.GZ10157@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > nginx gets a request for /static/styles.css. > > I'm not aware of a rewrite which will let you decide which of > web-app-1 > and web-app-2 is the correct one to proxy_pass this request to this > time. Both web apps use the same url. There is a distinction between the web apps in the request url, which you use as a map variable. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274104,274116#msg-274116 From francis at daoine.org Tue May 9 22:17:53 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 9 May 2017 23:17:53 +0100 Subject: NGINX stops redirecting In-Reply-To: References: Message-ID: <20170509221753.GA10157@daoine.org> On Mon, May 08, 2017 at 03:17:53PM -0700, Douglas Landau wrote: Hi there, > I don't get it. On Friday, at 13:39, I was happily browsing my XWiki site, as you can see from the NGINX access_log. Then from that log you see no activity until 10:31 this morning, at which time it no longer wants to redirect hits to port 8080 or to /xwiki, it just wants to serve from its own html/ subdir. > On Friday, your client was accessing things below http://dwswiki10.westmarine.net/ On Monday, your client was accessing things below http://dwswiki10/ They are different requests. Your config snippets show that those two requests are handled in different server{} blocks. 
Because dwswiki10 is not explicit anywhere, it will be handled in the first server{} here: > [root at dwswiki10 conf]# more nginx.conf > http { > server { > listen 80; > server_name localhost; > location / { > root html; > index index.html index.htm; > } > > } > include ../conf.d/*.conf; > } while you probably want it to be handled in this server{} > [root at dwswiki10 nginx]# cat conf.d/tomcat.conf > server { > listen 80; > server_name dwswiki10.westmarine.net; Either add "default_server" to the "listen" line, or add "dwswiki10" to the "server_name" line. Or add a new server{} which redirects from dwswiki10 to dwswiki10.westmarine.net. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 9 22:42:50 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 9 May 2017 23:42:50 +0100 Subject: Issues with limit_req_zone. In-Reply-To: References: Message-ID: <20170509224250.GB10157@daoine.org> On Sun, May 07, 2017 at 04:31:37AM -0400, Vishnu Priya Matha wrote: Hi there, > In limit_req_zone with rate set to 100/s and burst=50, we have the > observations below. > > Scenario1 > ========== > no. of requests made by jmeter = 170 > # of requests expected to fail = 20 > # of requests actually failed = 23 > > Question: why are 3 more requests failing, and is this much failure > expected? Why do you expect 20 to fail? I expect 0 to fail. Unless you use "nodelay", in which case the number of failures depends on how quickly the requests are received. Note: "100/s" does not mean "accept 100, then accept no more until 1 second has passed". It means something closer to "accept 1, then accept no more until 0.01 seconds has passed". f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed May 10 06:03:14 2017 From: nginx-forum at forum.nginx.org (zaidahmd) Date: Wed, 10 May 2017 02:03:14 -0400 Subject: Can I secure SOAP web services using NGINX API Gateway ? 
Message-ID: <082727b5e511a8e2e3fda32e5120dcd5.NginxMailingListEnglish@forum.nginx.org> Hi, I need to know how to secure SOAP web services using NGINX's API gateway feature. ** Current Implementation ** I have an application with REST interfaces that is secured by NGINX's AUTH_REQUEST module. For each request reaching NGINX, AUTH_REQUEST sends a subrequest to /login to the auth application, which returns 200 OK if the credentials are correct and the user is authorized to log in. ** Question ** In the same application we have created SOAP web services and need WS-Security to secure the services. We want to use NGINX to perform WS-Security authentication for SOAP web services too, the way it does for RESTful services. But the issue is sending a subrequest to /login, which is written to handle RESTful service calls. And in SOAP services our design is to have client certificate authentication, so how can NGINX handle WS-Security for SOAP services? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274120,274120#msg-274120 From wubingzheng at 163.com Wed May 10 08:26:06 2017 From: wubingzheng at 163.com (Wu Bingzheng) Date: Wed, 10 May 2017 16:26:06 +0800 (CST) Subject: proxy_upstream_next while no live upstreams Message-ID: Hi all, I have an upstream configured with Nginx 1.8.1: upstream test { server 192.168.0.5; server 192.168.0.6; } Question 1: Assume both of the 2 servers are down. The first request tries both of them, fails, and responds 502. Nginx marks both of them as DOWN. This is OK. A second request comes and finds there are no live upstreams; Nginx then resets both servers as UP, logs "no live upstreams", and returns 502. My question is that in the second request, nginx does NOT try the 2 servers, but just returns 502 immediately. Is this in line with expectations? From the code in ngx_http_upstream_next(), ft_type=NGX_HTTP_UPSTREAM_FT_NOLIVE always leads to ngx_http_upstream_finalize_request() while not ngx_http_upstream_connect(). 
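(For reference, a plain "server" line gets max_fails=1 and fail_timeout=10s by default, so the upstream block above behaves as if it were written like this:)

```nginx
upstream test {
    # defaults made explicit: one failure within the 10s window marks
    # a peer unavailable for the following 10s
    server 192.168.0.5 max_fails=1 fail_timeout=10s;
    server 192.168.0.6 max_fails=1 fail_timeout=10s;
}
```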
Question 2: (not related to Question 1) In my production environment, 192.168.0.5 is UP, and 192.168.0.6 is DOWN. There are a few access logs with $upstream_addr as "192.168.0.6, test", and $status as 502. There were no error logs of connects/reads to 192.168.0.5 failing, which means this server is UP, so I think the request should try 192.168.0.5 after 192.168.0.6. But it does not try 192.168.0.5; it just logs "no live upstream" and returns 502. The logs like this are very few, and I cannot reproduce this or debug it. I just ask it here in case someone else knows the problem. Thanks in advance, Wu From mdounin at mdounin.ru Wed May 10 12:19:49 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 May 2017 15:19:49 +0300 Subject: upstream keepalive connections for all servers or each server? In-Reply-To: <9956402d4515be8867f8bc922c045fe2.NginxMailingListEnglish@forum.nginx.org> References: <9956402d4515be8867f8bc922c045fe2.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170510121949.GC55433@mdounin.ru> Hello! On Mon, May 08, 2017 at 04:24:59AM -0400, fengx wrote: > As known, the keepalive directive can activate the connections cache for > upstream servers. I know the connection pool is in each worker process. But > I'm confused whether the connection number is for each upstream server or is > shared for all servers? It's documented at > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive It is the number of keepalive connections to be cached for the whole upstream{} block, that is, all servers. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Wed May 10 12:45:19 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 May 2017 15:45:19 +0300 Subject: proxy_upstream_next while no live upstreams In-Reply-To: References: Message-ID: <20170510124519.GD55433@mdounin.ru> Hello! 
On Wed, May 10, 2017 at 04:26:06PM +0800, Wu Bingzheng wrote: > I have an upstream configure with Nginx 1.8.1 : > > upstream test { > server 192.168.0.5; > server 192.168.0.6; > } > > Question 1: > Assume both of the 2 servers are down. > First request tries both of them and fails, and response 502. > Nginx marks both of them as DOWN. This is OK. > Second request comes and finds there is no live upstreams, then > Nginx resets both of servers as UP, logs "no live upstreams", > and returns 502. > My question is that in the second request, nginx dose NOT try > the 2 servers, but just return 502 immediately. Is this in line > with expectations? Yes, as long as all servers in an upstream group are already considered unavailable, nginx will return 502 without trying to connect them. You may control when servers are considered unavailable using the max_fails and fail_timeout parameters of the server directives, see here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails Note well that nginx versions before 1.11.5 reset all servers once all servers are unavailable, effectively returning just one 502 per worker process. Since nginx 1.11.5, it will wait for fail_timeout to expire: *) Change: now if there are no available servers in an upstream, nginx will not reset number of failures of all servers as it previously did, but will wait for fail_timeout to expire. > Question 2: (not related with Question 1) > In my production environment, 192.168.0.5 is UP, and 192.168.0.6 > is DOWN. > There are few access logs with $upstream_addr as "192.168.0.6, > test", and $status as 502. > There were no error logs of connecting/reading 192.168.0.5 fails > which mean this server is UP, so I think the request should try > 192.168.0.5 after 192.168.0.6. > But it does not try 192.168.0.5, and just log "no live upstream" > and return 502. > The logs like this are very few, and I can not re-produce this > or debug it. 
> I just ask it here in case someone else knows the problem. See above, this is exactly what is expected to happen when a request to an upstream server fails. The 502 / "no live upstream" you are seeing is a result of all servers considered unavailable. There are only a few such errors as you are using nginx 1.8.1, which quickly resets failure counters of all servers in such a situation. With recent nginx versions, 502 errors will be returned till fail_timeout expiration. If you want nginx to completely ignore errors on the only working upstream server in your environment, consider using "server ... max_fails=0". Alternatively, consider using a fail_timeout which is appropriate for your environment. -- Maxim Dounin http://nginx.org/ From wubingzheng at 163.com Wed May 10 14:27:16 2017 From: wubingzheng at 163.com (Wu Bingzheng) Date: Wed, 10 May 2017 22:27:16 +0800 (CST) Subject: proxy_upstream_next while no live upstreams In-Reply-To: <20170510124519.GD55433@mdounin.ru> References: <20170510124519.GD55433@mdounin.ru> Message-ID: <4a23e096.ac39.15bf2c1c470.Coremail.wubingzheng@163.com> Thanks for the answer. Maybe you missed something in Question 2. The server 192.168.0.5 never fails. I think nginx should not return 502 if there is at least one server that never fails. Exactly speaking, the server has not failed in the last 1 hour and the fail_timeout is the default 10 seconds. >> Question 2: (not related to Question 1) >> In my production environment, 192.168.0.5 is UP, and 192.168.0.6 >> is DOWN. >> There are a few access logs with $upstream_addr as "192.168.0.6, >> test", and $status as 502. >> There were no error logs of connects/reads to 192.168.0.5 failing, >> which means this server is UP, so I think the request should try >> 192.168.0.5 after 192.168.0.6. >> But it does not try 192.168.0.5; it just logs "no live upstream" >> and returns 502. >> The logs like this are very few, and I cannot reproduce this >> or debug it. >> I just ask it here in case someone else knows the problem. 
> >See above, this is exactly what is expected to happen when a >request to an upstream server fails. The 502 / "no live upstream" >you are seeing is a result of all servers considered unavailable. >There are only a few such errors as you are using nginx 1.8.1, which >quickly resets failure counters of all servers in such a situation. >With recent nginx versions, 502 errors will be returned till >fail_timeout expiration. > >If you want nginx to completely ignore errors on the only working >upstream server in your environment, consider using "server ... >max_fails=0". Alternatively, consider using a fail_timeout which is >appropriate for your environment. > >-- >Maxim Dounin >http://nginx.org/ >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 10 14:43:07 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 10 May 2017 17:43:07 +0300 Subject: proxy_upstream_next while no live upstreams In-Reply-To: <4a23e096.ac39.15bf2c1c470.Coremail.wubingzheng@163.com> References: <20170510124519.GD55433@mdounin.ru> <4a23e096.ac39.15bf2c1c470.Coremail.wubingzheng@163.com> Message-ID: <20170510144306.GJ55433@mdounin.ru> Hello! On Wed, May 10, 2017 at 10:27:16PM +0800, Wu Bingzheng wrote: > Maybe you missed something in Question 2. The server 192.168.0.5 never fails. > I think nginx should not return 502 if there is at least one server that never fails. > Exactly speaking, the server has not failed in the last 1 hour and the fail_timeout is the default 10 seconds. How do you know that the server never fails? The "no live upstreams" error indicates that it failed from nginx's point of view, and was considered unavailable. 
Note that "failure" might not be something specifically logged by nginx, but a response with a specific http code you've configured in proxy_next_upstream, see http://nginx.org/r/proxy_next_upstream. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Wed May 10 14:43:52 2017 From: nginx-forum at forum.nginx.org (Alex Med) Date: Wed, 10 May 2017 10:43:52 -0400 Subject: Trailing Slash Redirect Loop Help In-Reply-To: References: Message-ID: Steve - You are right, something else is adding a trailing slash to directories. Is there a way to configure nginx to remove trailing slashes from everything except directories? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273964,274132#msg-274132 From nginx-forum at forum.nginx.org Wed May 10 15:10:36 2017 From: nginx-forum at forum.nginx.org (Alex Med) Date: Wed, 10 May 2017 11:10:36 -0400 Subject: Trailing Slash Redirect Loop Help In-Reply-To: <20170429123516.GP10157@daoine.org> References: <20170429123516.GP10157@daoine.org> Message-ID: Francis - Yes, I am realizing that it is a nightmare going against the trailing-slashed directory nature. So I am going to have this rule take off slashes from anything but directories. Do you have any suggestions as to how to do it, but without "if"? Thank you so much! Alex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,273964,274134#msg-274134 From achekalin at tango.me Wed May 10 15:33:26 2017 From: achekalin at tango.me (Alexander Chekalin) Date: Wed, 10 May 2017 15:33:26 +0000 Subject: How can I get nginx-rtmp-module stats for all workers? Message-ID: <359AACDE-201B-4B21-A8BA-C61F70176803@tango.me> Hello, I've set up nginx with nginx-rtmp-module as an NGINX-based Media Streaming Server, and it works fine, but now I'm trying to get stats for this server. 
General approach is to use location /stat { rtmp_stat all; } which produces xml that can be parsed nicely, but my suspicion is that I get stats for one random worker on each poll (not accumulated stats for all workers). The host itself is 4-cores, so I have 4 workers running and I think I get confusing stats since each poll brings me stats for a random worker. Is there any way I can use to get accumulated stats for all workers on the host? Thank you in advance! - Alexander From nginx-forum at forum.nginx.org Wed May 10 16:04:39 2017 From: nginx-forum at forum.nginx.org (metalfm1) Date: Wed, 10 May 2017 12:04:39 -0400 Subject: fastcgi cache background update for SSI subrequests Message-ID: <08bad806d8496672f0c391b9221be703.NginxMailingListEnglish@forum.nginx.org> [The body of this message was written in Russian and was corrupted by the archive's character encoding. From the subject, the configuration below and the reply that follows, the question is approximately: the fastcgi_cache_background_update setting does not seem to take effect for SSI subrequests. The setup is nginx + php-fpm; pages are assembled from SSI fragments under /ssi_dev/, each cached for 1 hour, with expiry driven by the Cache-Control header sent by PHP. The expected behaviour - serve from cache while fresh (HIT), serve stale (STALE) while refreshing in the background once expired (EXPIRED) - is observed for the main request, but expired SSI subrequests appear to be refreshed in the foreground instead.] Example nginx config:

location / {
    try_files $uri /index_dev.php$is_args$args;
}

location ~ /ssi_dev/ {
    access_log /var/log/nginx/access-dev.log ssi;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    ssi on;
    ssi_silent_errors off;
    add_header X-Cache $upstream_cache_status;
    add_header X-Cache-Control $upstream_http_cache_control;
    add_header X-Expires $upstream_http_expires;
    fastcgi_cache_methods GET HEAD;
    fastcgi_cache MY_CACHE;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_lock on;
    fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
    fastcgi_cache_background_update on;
    fastcgi_hide_header Cache-Control;
    fastcgi_hide_header Expires;
    fastcgi_param REQUEST_URI $uri$is_args$args;
    fastcgi_param SCRIPT_FILENAME $document_root/index_dev.php;
    fastcgi_cache_key $scheme$request_method$host$uri$is_args$args;
    internal;
}

location ~ ^/(index_dev|config)\.php(/|$) {
    access_log /var/log/nginx/access-dev.log main;
    fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock;
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    include fastcgi_params;
    ssi on;
    ssi_silent_errors off;
    add_header X-Cache $upstream_cache_status;
    add_header X-Cache-Control $upstream_http_cache_control;
    add_header X-Expires $upstream_http_expires;
    fastcgi_cache_methods GET HEAD;
    fastcgi_cache MY_CACHE;
    fastcgi_cache_min_uses 1;
    fastcgi_cache_lock on;
    fastcgi_cache_use_stale error timeout invalid_header updating http_500 http_503;
    fastcgi_cache_background_update on;
    fastcgi_hide_header Cache-Control;
    fastcgi_hide_header Expires;
    fastcgi_param REQUEST_URI $request_uri;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_cache_key $scheme$request_method$host$request_uri$is_args$args;
}

[A closing line in Russian was also lost to the encoding damage.] 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274136,274136#msg-274136 From arut at nginx.com Wed May 10 17:18:46 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 10 May 2017 20:18:46 +0300 Subject: =?UTF-8?B?UmU6IGZhc3RjZ2kgY2FjaGUgYmFja2dyb3VuZCB1cGRhdGUgc3NpINC/0L7QtNC3?= =?UTF-8?B?0LDQv9GA0L7RgdC+0LI=?= In-Reply-To: <08bad806d8496672f0c391b9221be703.NginxMailingListEnglish@forum.nginx.org> References: <08bad806d8496672f0c391b9221be703.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170510171846.GG94542@Romans-MacBook-Air.local> ?????? ????, On Wed, May 10, 2017 at 12:04:39PM -0400, metalfm1 wrote: > ???????????! > > ????????? fastcgi_cache_background_update ??????? ????? ???? ??? ssi > ???????????. > ???? ?????? ?? ??????? ?????? ???????, ??????? ???????? ??????????? 1 ???, > ???? nginx + php-fpm. ? ???? ??????????? ???????? ???????? ???? ?????? > ??????? ????????? ?????? ??????? ????? ???????? ? ????????? ssi ????????? ? > ?????????? ??? ?? 1 ???. ???????????? ????????? fastcgi ?????? ?? php ? > ??????? ????????? Cache-Control. > > ? ??????????? ??????? ???, nginx ??????? ???????? ?????????? /ssi_dev/ ? > ?????????? ?? ?? ????. ???????? ?????????? ????? ??? ?????????. > > ??????? ????????? nginx > - ???? ???? ??????? ? ????, ?? ?? ??????? ????????(HIT) > - ???? ???? ??????? ? ????, ?? ?? ???????, ?? ??????? ???????? ?????????? > ??????(STALE) ? ???????? ????????? ?? ??????? ????(EXPIRED) > > ???????? ??????????? ? ???, ??? ????????? ?? ??????? ???? ??????????? ? > ??????????? ??????. ?? ???? ???????? ?????? ???? ?????????? ??????????. > ????????? ???? ???????? ?? ???????????, ???? ? ??? ?????? ??? ????????, > ????????? nginx ????????????? ????????????. ??????? ???????? ?????? ?????? > ???????? ? ???????? ????????????? ????????? ?? ??????????. ?? ??????? ?????? background update ?????????? ???, ??? ?? ????????? ???????? ??????, ???? ??????? ? ??????????. ??? ??? ??? ??? ??????. ? ???? https://trac.nginx.org/nginx/ticket/1249 ? 
??????? ????, ??????? ?????? ??? ????????. [..]

--
Roman Arutyunyan

From arut at nginx.com Wed May 10 17:25:02 2017
From: arut at nginx.com (Roman Arutyunyan)
Date: Wed, 10 May 2017 20:25:02 +0300
Subject: How can I get nginx-rtmp-module stats for all workers?
In-Reply-To: <359AACDE-201B-4B21-A8BA-C61F70176803@tango.me>
References: <359AACDE-201B-4B21-A8BA-C61F70176803@tango.me>
Message-ID: <20170510172502.GH94542@Romans-MacBook-Air.local>

Hello,

On Wed, May 10, 2017 at 03:33:26PM +0000, Alexander Chekalin wrote:
> Hello,
>
> I've set up nginx with nginx-rtmp-module as a NGINX-based Media Streaming Server, and it works fine, but now I try to get stats for this server.
>
> General approach is to use
>
> location /stat { rtmp_stat all; }
>
> which produce xml that can be parsed nicely, but my suspicions are that I got stats for one random worker on each poll (not accumulated stats for all workers). The host itself is 4-cores, so I have 4 workers running and I think I got confusing stats since each poll brings me stats for random worker.
>
> Is there any way I can use to get cumulated stats for all workers on host?

Yes, RTMP statistics are only available for a single worker.
For multi-worker statistics there was a patch "per-worker-listener" at
https://github.com/arut/nginx-patches.
However, this solution is not perfect.

--
Roman Arutyunyan

From rpaprocki at fearnothingproductions.net Thu May 11 00:40:55 2017
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Wed, 10 May 2017 17:40:55 -0700
Subject: upstream keepalive connections for all servers or each server?
In-Reply-To: <20170510121949.GC55433@mdounin.ru>
References: <9956402d4515be8867f8bc922c045fe2.NginxMailingListEnglish@forum.nginx.org> <20170510121949.GC55433@mdounin.ru>
Message-ID:

> It is the number of keepalive connections to be cached for the
> whole upstream{} block, that is, all servers.

Can we clarify the behavior for upstreams with duplicate server directives?
Consider the following:

upstream foo {
    server 1.2.3.4:80;
    server 5.6.7.8:80;

    keepalive 32;
}

upstream bar {
    server 1.2.3.4:80;
    server 5.6.7.8:80;

    keepalive 32;
}

A max of 64 TCP connections will be kept open?

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Thu May 11 00:57:24 2017
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 11 May 2017 03:57:24 +0300
Subject: upstream keepalive connections for all servers or each server?
In-Reply-To: References: <9956402d4515be8867f8bc922c045fe2.NginxMailingListEnglish@forum.nginx.org> <20170510121949.GC55433@mdounin.ru>
Message-ID: <20170511005724.GO55433@mdounin.ru>

Hello!

On Wed, May 10, 2017 at 05:40:55PM -0700, Robert Paprocki wrote:

> > It is the number of keepalive connections to be cached for the
> > whole upstream{} block, that is, all servers.
>
> Can we clarify the behavior for upstreams with duplicate server directives?
> Consider the following
>
> upstream foo {
>     server 1.2.3.4:80;
>     server 5.6.7.8:80;
>
>     keepalive 32;
> }
>
> upstream bar {
>     server 1.2.3.4:80;
>     server 5.6.7.8:80;
>
>     keepalive 32;
> }
>
> A max of 64 TCP connections will be kept open?

Yes, in such a configuration each upstream block will cache up to 32 connections (per worker process).

--
Maxim Dounin
http://nginx.org/

From nginx-users-list at whyaskwhy.org Thu May 11 03:31:07 2017
From: nginx-users-list at whyaskwhy.org (deoren)
Date: Wed, 10 May 2017 22:31:07 -0500
Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app?
In-Reply-To: <20170509123719.GY10157@daoine.org>
References: <20170509123719.GY10157@daoine.org>
Message-ID:

Michael, itpp2012, Francis,

My apologies for the terribly short reply, but I wanted to go ahead and reply back and thank you for the detailed responses. I've looked over them briefly and plan to go back over them in detail soon.
The takeaway appears to be that the best results come from a web application that is both aware of being called from a sub-URI and provides sufficient "hooks" to easily keep generated requests within the desired sub-URI.

The tip re using the request URL as a map variable is an interesting one. I've used the map directive before, but it is a weak spot for me. I'll do additional research in that direction.

Many thanks again to all of you for taking the time to respond to my questions.

From jason.bronx at gmail.com Thu May 11 04:02:36 2017
From: jason.bronx at gmail.com (Jason Bronnx)
Date: Wed, 10 May 2017 21:02:36 -0700
Subject: Is it possible to analyze result and query a second server?
Message-ID:

Hi,

Tried to search on this for a couple of hours, but had no luck, hoping you guys can help.

I have a use case where I need to proxy the request to server-A first; if it returns 200, then query server-B and return that result. If it returns != 200, just return 404. Something like this:

function pseudoCode() {
    if (server-a.process() == 200) {
        return server-b.process()
    }

    // Got a non-200 from server-a, just return 404
    return 404
}

Is there a straightforward way of doing this in nginx?

Thanks!
Jason

From al-nginx at none.at Thu May 11 05:42:02 2017
From: al-nginx at none.at (Aleksandar Lazic)
Date: Thu, 11 May 2017 07:42:02 +0200
Subject: Is it possible to analyze result and query a second server?
In-Reply-To: References: Message-ID: <20170511074202.00000a41@none.at>

Hi Jason.

On Wed, 10 May 2017 21:02:36 -0700, Jason Bronnx wrote:

> Hi,
>
> Tried to search on this for a couple of hours, but had no luck,
> hoping you guys can help.
>
> I have a use case, that I need to proxy the request to server-A
> first, then if returns 200, then it'll query server-B and return that
> result. If it returned != 200, just return 404.
Something like this: > > function pseudoCode() { > if (server-a.process() == 200) { > return server-b.process() > } > > // Got a non-200 from server-a, just return 404 > return 404 > } > > Is there a straightforward way of doing this in nginx? I assume you will be able to do this with perl/lua. http://nginx.org/en/docs/http/ngx_http_perl_module.html https://github.com/openresty/lua-nginx-module#readme > Thanks! > Jason Regards Aleks From achekalin at tango.me Thu May 11 09:42:35 2017 From: achekalin at tango.me (Alexander Chekalin) Date: Thu, 11 May 2017 09:42:35 +0000 Subject: How can I get nginx-rtmp-module stats for all workers? In-Reply-To: <20170510172502.GH94542@Romans-MacBook-Air.local> References: <359AACDE-201B-4B21-A8BA-C61F70176803@tango.me> <20170510172502.GH94542@Romans-MacBook-Air.local> Message-ID: Thank you for the patch (and your attention). The only thing I don?t like in the idea is that patch is against nginx source, right? Just in a case nginx will ever change the source (and the patch is a bit outdated, isn?t it?) we can see problems on patching newer versions. Just out of curiosity, is it possible to have one (accumulated) stat from all workers? I?m not a programmer by myself but I can try to find someone who can do that if the change is easy to implement. Or maybe it is possible to output stats for different workers without patching nginx code itself (e.g. maybe via different URIs)? Thank you again for your module and you attention for us your users! :) > On 10 May 2017, at 17:25, Roman Arutyunyan wrote: > > Hello, > > On Wed, May 10, 2017 at 03:33:26PM +0000, Alexander Chekalin wrote: >> Hello, >> >> I?ve set up nginx with nginx-rtmp-module as a NGINX-based Media Streaming Server, and it works fine, but now I try to get stats for this server. 
>> >> General approach is to use >> >> location /stat { rtmp_stat all; } >> >> which produce xml that can be parsed nicely, but my suspicions are that I got stats for one worker random worker on each poll (not accumulated stats for all workers). The host itself is 4-cores, so I have 4 workers running and I think I got confusing stats since each poll brings me stats for random worker. >> >> Is there any way I can use to get cumulated stats for all workers on host? > > Yes, RTMP statistics is only available for a single worker. > For multi-worker statictics there was a patch "per-worker-listener" at > https://url.serverdata.net/?aZyQRg2CGut2qgyHrdHxA3r5xEAceK79DVxYd3Bfe3feJf7HCqpY0DPDS06nZJMU0e8eXTfFB-VA1xp75D9zo3w~~. > However, this solution is not perfect. > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > https://url.serverdata.net/?a3BrMch5W140wejaZNkOqCNQqLTtIMrfZvjOpEFvhllkxqKu1_sY819idHe0COINGIHy8TOVEd5aW7nX9Av-rDw~~ From arut at nginx.com Thu May 11 11:09:42 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 11 May 2017 14:09:42 +0300 Subject: Is it possible to analyze result and query a second server? In-Reply-To: References: Message-ID: <20170511110942.GA48854@Romans-MacBook-Air.local> Hi Jason, On Wed, May 10, 2017 at 09:02:36PM -0700, Jason Bronnx wrote: > Hi, > > Tried to search on this for a couple of hours, but had no luck, hoping you > guys can help. > > I have a use case, that I need to proxy the request to server-A first, then > if returns 200, then it'll query server-B and return that result. If it > returned != 200, just return 404. Something like this: > > function pseudoCode() { > if (server-a.process() == 200) { > return server-b.process() > } > > // Got a non-200 from server-a, just return 404 > return 404 > } > > Is there a straightforward way of doing this in nginx? 
Take a look at ngx_http_auth_request_module: http://nginx.org/en/docs/http/ngx_http_auth_request_module.html -- Roman Arutyunyan From arut at nginx.com Thu May 11 11:36:24 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 11 May 2017 14:36:24 +0300 Subject: How can I get nginx-rtmp-module stats for all workers? In-Reply-To: References: <359AACDE-201B-4B21-A8BA-C61F70176803@tango.me> <20170510172502.GH94542@Romans-MacBook-Air.local> Message-ID: <20170511113624.GB48854@Romans-MacBook-Air.local> Hi, On Thu, May 11, 2017 at 09:42:35AM +0000, Alexander Chekalin wrote: > Thank you for the patch (and your attention). The only thing I don?t like in the idea is that patch is against nginx source, right? Just in a case nginx will ever change the source (and the patch is a bit outdated, isn?t it?) we can see problems on patching newer versions. > > Just out of curiosity, is it possible to have one (accumulated) stat from all workers? I?m not a programmer by myself but I can try to find someone who can do that if the change is easy to implement. It does not look easy. Otherwise I would have done this long ago. Probably a solution is to store statistics in shared memory and make workers update it regularly. > Or maybe it is possible to output stats for different workers without patching nginx code itself (e.g. maybe via different URIs)? Currently not. > Thank you again for your module and you attention for us your users! :) > > > > On 10 May 2017, at 17:25, Roman Arutyunyan wrote: > > > > Hello, > > > > On Wed, May 10, 2017 at 03:33:26PM +0000, Alexander Chekalin wrote: > >> Hello, > >> > >> I?ve set up nginx with nginx-rtmp-module as a NGINX-based Media Streaming Server, and it works fine, but now I try to get stats for this server. 
> >>
> >> General approach is to use
> >>
> >> location /stat { rtmp_stat all; }
> >>
> >> which produce xml that can be parsed nicely, but my suspicions are that I got stats for one random worker on each poll (not accumulated stats for all workers). The host itself is 4-cores, so I have 4 workers running and I think I got confusing stats since each poll brings me stats for random worker.
> >>
> >> Is there any way I can use to get cumulated stats for all workers on host?
> >
> > Yes, RTMP statistics is only available for a single worker.
> > For multi-worker statictics there was a patch "per-worker-listener" at
> > https://url.serverdata.net/?aZyQRg2CGut2qgyHrdHxA3r5xEAceK79DVxYd3Bfe3feJf7HCqpY0DPDS06nZJMU0e8eXTfFB-VA1xp75D9zo3w~~.
> > However, this solution is not perfect.
> >
> > --
> > Roman Arutyunyan
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > https://url.serverdata.net/?a3BrMch5W140wejaZNkOqCNQqLTtIMrfZvjOpEFvhllkxqKu1_sY819idHe0COINGIHy8TOVEd5aW7nX9Av-rDw~~
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
Roman Arutyunyan

From cumali.ceylan at gmail.com Thu May 11 11:48:02 2017
From: cumali.ceylan at gmail.com (Cumali Ceylan)
Date: Thu, 11 May 2017 14:48:02 +0300
Subject: Page loading is very slow with ngx_http_auth_pam_module
Message-ID:

Hello,

I built nginx with ngx_http_auth_pam_module, set up Linux-PAM for local passwords with the pam_unix module, and set up nginx to use this PAM config. The Linux-PAM config file is below:

auth    sufficient pam_unix.so nullok
account required   pam_unix.so

When I do this, page loading is very slow. If I remove this config and simply set up nginx for basic authentication (with auth_basic), page loading returns to normal.

Has anyone else observed the same thing? Any information would be helpful.
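For reference, the nginx side of such an auth_pam setup typically looks like the sketch below. This is a guess at the poster's configuration, not taken from the post: the location path, realm string, and the PAM service name "nginx" are all assumptions.

```nginx
location /protected/ {
    # Directives provided by the third-party ngx_http_auth_pam_module:
    auth_pam              "Restricted area";  # realm shown to the client (assumed)
    auth_pam_service_name "nginx";            # PAM stack in /etc/pam.d/nginx (assumed name)
}
```

One thing worth checking in this situation: pam_unix.so verifies passwords against /etc/shadow, which the unprivileged nginx worker user normally cannot read, so PAM falls back to the setuid unix_chkpwd helper on every request; that extra round trip (plus any configured failure delay) is a frequently cited reason why pam_unix feels much slower than auth_basic, which reads its htpasswd file directly.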
Kind regards, Cumali Ceylan From jiangmuhui at gmail.com Thu May 11 14:32:41 2017 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Thu, 11 May 2017 22:32:41 +0800 Subject: different Memory consumption for H1 and H2 Message-ID: Hi Recently, I did an experiment to test the memory consumption of nginx. I request a large static zip file. I explored the debug information of nginx. For H2, below is a part of the log, I noticed that every time server will allocate 65536 bytes, I increase the connection number, I noticed that the server's memory consumption will reach to a threshhold and then increase very slowly: 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 HEADERS frame 00000000026155F0 was sent 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame sent: 00000000026155F0 sid:1 bl:1 len:119 2017/05/11 04:54:20 [debug] 29451#0: *10499 http output filter "/image/test.zip?" 2017/05/11 04:54:20 [debug] 29451#0: *10499 http copy filter: "/image/test.zip?" 2017/05/11 04:54:20 [debug] 29451#0: *10499 malloc: 0000000002699A80:65536 2017/05/11 04:54:20 [debug] 29451#0: *10499 read: 14, 0000000002699A80, 65536, 0 2017/05/11 04:54:20 [debug] 29451#0: *10499 http postpone filter "/image/test.zip?" 
0000000002616098 2017/05/11 04:54:20 [debug] 29451#0: *10499 write new buf t:1 f:1 0000000002699A80, pos 0000000002699A80, size: 65536 file: 0, size: 65536 2017/05/11 04:54:20 [debug] 29451#0: *10499 http write filter: l:0 f:1 s:65536 2017/05/11 04:54:20 [debug] 29451#0: *10499 http write filter limit 0 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 create DATA frame 00000000026155F0: len:1 flags:0 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame out: 00000000026155F0 sid:1 bl:0 len:1 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 9 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 1 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL to write: 138 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL_write: 138 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 DATA frame 00000000026155F0 was sent 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame sent: 00000000026155F0 sid:1 bl:0 len:1 2017/05/11 04:54:20 [debug] 29451#0: *10499 http write filter 00000000026160A8 2017/05/11 04:54:20 [debug] 29451#0: *10499 malloc: 00000000026A9A90:65536 For H/1.1, below is a part of the debug log, no malloc is noticed during the send file process. And even when I increase the connection number to a very large value, the result shows nginx's memory consumption is still very low. : 2017/05/11 22:29:06 [debug] 29451#0: *11015 http run request: "/image/test.zip?" 2017/05/11 22:29:06 [debug] 29451#0: *11015 http writer handler: "/image/test.zip?" 2017/05/11 22:29:06 [debug] 29451#0: *11015 http output filter "/image/test.zip?" 2017/05/11 22:29:06 [debug] 29451#0: *11015 http copy filter: "/image/test.zip?" 2017/05/11 22:29:06 [debug] 29451#0: *11015 http postpone filter "/image/test.zip?" 
0000000000000000 2017/05/11 22:29:06 [debug] 29451#0: *11015 write old buf t:0 f:1 0000000000000000, pos 0000000000000000, size: 0 file: 72470952, size: 584002 2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter: l:1 f:0 s:584002 2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter limit 0 2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: @72470952 584002 2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: 260640 of 584002 @72470952 2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter 0000000002670F70 2017/05/11 22:29:06 [debug] 29451#0: *11015 http copy filter: -2 "/image/test.zip?" 2017/05/11 22:29:06 [debug] 29451#0: *11015 http writer output filter: -2, "/image/test.zip?" 2017/05/11 22:29:06 [debug] 29451#0: *11015 event timer: 3, old: 1494513006630, new: 1494513006763 Hope to get your comments and what are the difference of nginx's memory allocation mechanisms between HTTP/2.0 and HTTP/1.1. Many Thanks Regards Muhui -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 11 15:11:43 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 11 May 2017 18:11:43 +0300 Subject: different Memory consumption for H1 and H2 In-Reply-To: References: Message-ID: <20170511151143.GR55433@mdounin.ru> Hello! On Thu, May 11, 2017 at 10:32:41PM +0800, Muhui Jiang wrote: > Recently, I did an experiment to test the memory consumption of nginx. I > request a large static zip file. I explored the debug information of nginx. > > For H2, below is a part of the log, I noticed that every time server will > allocate 65536 bytes, I increase the connection number, I noticed that the > server's memory consumption will reach to a threshhold and then increase > very slowly: [...] > 2017/05/11 04:54:20 [debug] 29451#0: *10499 http output filter > "/image/test.zip?" > 2017/05/11 04:54:20 [debug] 29451#0: *10499 http copy filter: > "/image/test.zip?" 
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 malloc: 0000000002699A80:65536
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 read: 14, 0000000002699A80, 65536, 0

[...]

> 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame out: 00000000026155F0 sid:1 bl:0 len:1
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 9
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 1
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL to write: 138
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL_write: 138
> 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 DATA frame 00000000026155F0 was sent

[...]

> For H/1.1, below is a part of the debug log, no malloc is noticed during
> the send file process. And even when I increase the connection number to a
> very large value, the result shows nginx's memory consumption is still very
> low. :

[...]

> 2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter limit 0
> 2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: @72470952 584002
> 2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: 260640 of 584002

[...]

> Hope to get your comments and what are the difference of nginx's memory
> allocation mechanisms between HTTP/2.0 and HTTP/1.1. Many Thanks

The difference is due to sendfile(), which is used in case of plain HTTP, and can't be used with SSL-encrypted connections. HTTP/2 is normally used with SSL encryption, so it is usually not possible to use sendfile() with HTTP/2.

When sendfile() is not available or switched off, nginx uses output_buffers (http://nginx.org/r/output_buffers) to read a file from disk, and then writes these buffers to the connection.

When it is possible to use sendfile(), nginx does not try to read the contents of the static files it returns, but simply calls sendfile(). This is usually the most efficient approach, as it avoids additional buffers and copying between kernel space and userland. Unfortunately, it is not available when using HTTPS (including HTTP/2).
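The output_buffers behavior described above can be tuned directly in the configuration. A minimal sketch follows; the location path and buffer values are illustrative, not recommendations:

```nginx
location /image/ {
    sendfile on;            # used only on plain-HTTP connections
    # Over TLS (and therefore normally over HTTP/2), sendfile() cannot be
    # used, so nginx reads the file into these buffers before SSL_write():
    output_buffers 2 64k;   # <number> <size> of per-request read buffers
}
```

Under this reading, the repeated 65536-byte mallocs in the HTTP/2 debug log above appear to be exactly these per-request read buffers, so with many concurrent large downloads the buffer size trades throughput against per-connection memory.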
-- Maxim Dounin http://nginx.org/ From peter_booth at me.com Thu May 11 16:16:16 2017 From: peter_booth at me.com (Peter Booth) Date: Thu, 11 May 2017 12:16:16 -0400 Subject: Can you migrate a web app available via '/' to a proxied sub-URI without modifying the web app? In-Reply-To: <20170509123719.GY10157@daoine.org> References: <20170509123719.GY10157@daoine.org> Message-ID: <5F81E58E-C6CF-4421-97C5-6132402307CE@me.com> There's "can you?" and there's "should you?" My attitude is that life is short, so I want to avoid building any opportunities to break. Imagine that you deploy your N web apps. There can be a real value in being able to access the web app directly when debugging, and avoiding the web server layer. (for example, if your web server is also a caching reverse proxy) That means that your web app emits relative links that are valid in the context of the app. Best of all is if they also work on web server without URL rewriting. What you're asking for is unreliably possible, but a bad idea. Imagine that you want to deploy a dev, QA, uat and demo version of the site - what urls would theee be? Peter Sent from my iPhone > On May 9, 2017, at 8:37 AM, Francis Daly wrote: > > On Mon, May 08, 2017 at 06:17:02PM -0500, deoren wrote: > > Hi there, > >> I'm still >> not completely clear on whether this is supported. > > nginx does the nginx side as much as it can. Whether it all works is > almost entirely down to the web app. > > You cannot reliably reverse-proxy an arbitrary web app to a different > place in the url hierarchy than that at which it believes it is installed. > > You can unreliably do it, if that is good enough for you. 
> >> #2) I'd like to move the web application to >> https://subdomain.example.com/sub-uri/ by setting up location block >> like so (spacing condensed for display purposes): >> >> location /flask-demo { >> root /var/www/passenger-python-flask-demo; >> proxy_set_header X-Forwarded-Proto $scheme; >> proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; >> proxy_set_header X-Forwarded-Host $host; >> proxy_pass http://127.0.0.1:3000/; >> } >> >> The trailing slash was added in an attempt to map >> /flask-demo/SOMETHING to /SOMETHING in the application's point of >> view. > > Just as a small point -- it is usually worth making sure that the numbers > of / characters at the end of the "proxy_pass" argument and at the end > of the "location" prefix value, are the same. > >> That works well for a test web app where everything is contained in >> a single file, but when the static assets are referenced by the HTML >> output the user's browser attempts to pull static content from '/' >> instead of '/flask-demo/'. > > I would suggest that the HTML output of the web app is wrong. > > It should never link to /static/styles.css, if it wants to be > reverse-proxied. > > It should link to static/styles.css or to ../static/styles.css or to > ../../static/styles.css or to whatever value is appropriate relative to > the current resource. > > If the web app did that, then reverse-proxying would probably Just Work. > > "Not wanting to be reverse-proxied" is a valid choice for a web app. And > you can choose not to use ones that make that choice. > >> I'm hoping there is a way to isolate each web application based on >> the initial location block match, thereby catching follow-up >> requests for static resources related to the first request and >> prepend the sub-URI. > > In http, every request is independent of every other request, unless > you do something to try to tie them together (such as with a Cookie). 
> > Stock nginx will receive a request for /static/styles.css, and will > handle it according to its configuration. > > If you want your nginx to receive a request for /static/styles.css, and > to do some processing based on the Referer: header or based on a Cookie: > header or something else, in order that the final processed request will > be /flask-demo/static/styles.css, then you will probably have to write > some code in one of the embedded languages in nginx.conf. > > I'm not aware of code like that existing. > >> I assume the answer is "no, this is not supported", but I wanted to > > If you write the code, you can make it work (unreliably) for you. > > It cannot work in stock nginx just using nginx.conf normal directives. > >> ask just to make sure I'm not overlooking something. Francis Daly's >> remarks on the "URL-Rewriting not working" thread that I've quoted >> from below seems to suggest it might be possible, but probably isn't >> worth the headache: > > Your outline above is that nginx should not do the translation in the > html sent to the client, but that nginx should interpret the following > requests based on something not directly specified in the request line. > > That is possibly harder than doing the translation in the html. > >>> That is, if you want the back-end / to correspond to >>> the front-end /x/, then if the back-end ever links to >>> something like /a, you will need that to become >>> translated to /x/a before it leaves the front-end. In >>> general, the front-end cannot do that translation. > > You can try to get the front-end to do that translation; it will also > be unreliable and probably inefficient. > > Basically, any text that the client might interpret as a url that starts > with http:// or https:// or / would potentially have to be modified. And > possibly some other text as well, depending on the exact contents. 
> >>> So you may find it easier to configure the back-end to >>> be (or to act as if it is) installed below /x/ directly. > > That is still true. > > If you won't do that, and if you have two web apps that each want to > be at /, you may be better off having two server{} blocks in nginx with > different server_name values. > >> I found the 'proxy_redirect' directive, but it doesn't appear to do >> what I'm looking for. > > proxy_redirect deals with http headers, not with the body content. > >> I've used nginx for years, but only in very basic configurations. >> This is something new to me and I'm struggling to wrap my head >> around it. Thank you for reading this and any advice you can offer. > > It's a http and url thing, rather than an nginx thing. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From wubingzheng at 163.com Fri May 12 05:24:14 2017 From: wubingzheng at 163.com (Wu Bingzheng) Date: Fri, 12 May 2017 13:24:14 +0800 (CST) Subject: proxy_upstream_next while no live upstreams In-Reply-To: <20170510144306.GJ55433@mdounin.ru> References: <20170510124519.GD55433@mdounin.ru> <4a23e096.ac39.15bf2c1c470.Coremail.wubingzheng@163.com> <20170510144306.GJ55433@mdounin.ru> Message-ID: <2170c0de.5fe1.15bfb1d524c.Coremail.wubingzheng@163.com> The last request before this 502 request is almost 20 minutes ago and its response code is 200. The proxy_next_upstream conf: proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; Here is the access log. The upstream server 192.168.0.6 is DOWN. 
The line-10 is the 502 request:

1  [03/May/2017:14:35:38 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.6:8181, 192.168.0.5:8181" 0.012 0.001, 0.011
2  [03/May/2017:14:35:38 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.013 0.013
3  [03/May/2017:14:54:30 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.206 0.206
4  [03/May/2017:15:03:08 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.154 0.154
5  [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.6:8181, 192.168.0.5:8181" 0.012 0.000, 0.012
6  [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.014 0.014
7  [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.016 0.016
8  [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.017 0.017
9  [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.011 0.011
10 [03/May/2017:15:59:06 -0400] "POST /x/y HTTP/1.1" 502 "192.168.0.6:8181, test_backend" 0.000 0.000, 0.000
11 [03/May/2017:15:59:07 -0400] "POST /x/y HTTP/1.1" 200 "10.255.222.206:8181" 0.260 0.260

At 2017-05-10 22:43:07, "Maxim Dounin" wrote:
>Hello!
>
>On Wed, May 10, 2017 at 10:27:16PM +0800, Wu Bingzheng wrote:
>
>> Maybe you miss something in Question 2. The server 192.168.0.5 never fails.
>> I think nginx should not return 502 if there is at least one server never fails.
>> Exactly speaking, the server never fails in the last 1 hour and the fail_timeout is the default 10 second.
>
>How do you know that the server never fails?
>
>The "no live upstreams" error indicate that it failed from nginx
>point of view, and was considered unavailable.
>
>Note that "failure" might not be something specifically logged by
>nginx, but a response with a specific http code you've configured
>in proxy_next_upstream, see http://nginx.org/r/proxy_next_upstream.
> >-- >Maxim Dounin >http://nginx.org/ >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From cas.xyz at googlemail.com Fri May 12 09:11:33 2017 From: cas.xyz at googlemail.com (J K) Date: Fri, 12 May 2017 11:11:33 +0200 Subject: Reverse-proxying: Flask app with Bokeh server on Nginx Message-ID: I have created a website with Flask that is serving a Bokeh app on a Digital Ocean VPN. Everything worked fine until I secured the server with Let's Encrypt following this tutorial . In step 3 of the tutorial the Nginx configuration file is changed, which might be the crux of the problem I'm getting: When I go on the website, the Flask content is rendered perfectly. However, the Bokeh app is not running. In the Inspection Console I get the following Error (note that I hashed out the IP address of my website): Mixed Content: The page at 'https://example.com/company_abc/' was loaded over HTTPS, but requested an insecure script 'http://###.###.###.##:5006/company_abc/autoload.js?bokeh-autoload-element=f?aab19c633c95&bokeh-session-id=AvWhaYqOzsX0GZPOjTS5LX2M7Z6arzsBFBxCjb0Up2xP'. This request has been blocked; the content must be served over HTTPS. I understand that I might have to use a method called reverse proxying, which is described here . However, I wasn't able to get it to work. Does anybody have an idea how to solve this? A similar problem was described here . 
Here are my modified server files:

'/etc/nginx/sites-available/default':

upstream flask_siti {
    server 127.0.0.1:8118 fail_timeout=0;
}

server {
    listen 443 ssl;
    server_name example.com www.example.com;

    ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_session_timeout 1d;
    ssl_session_cache shared:SSL:50m;
    ssl_stapling on;
    ssl_stapling_verify on;
    add_header Strict-Transport-Security max-age=15768000;

    charset utf-8;
    client_max_body_size 75M;
    access_log /var/log/nginx/flask/access.log;
    error_log /var/log/nginx/flask/error.log;
    keepalive_timeout 5;

    location / {
        # checks for static file, if not found proxy to the app
        try_files $uri @proxy_to_app;
    }

    location @proxy_to_app {
        proxy_redirect off;
        proxy_set_header Host $http_host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://flask_siti;
    }
}

server {
    listen 80;
    server_name example.com www.example.com;
    return 301 https://$host$request_uri;
}

'/etc/supervisor/conf.d/bokeh_serve.conf':

[program:bokeh_serve]
command=/opt/envs/virtual/bin/bokeh serve company_abc.py company_xyz.py
--allow-websocket-origin=www.example.com --allow-websocket-origin=example.com --host=###.###.###.##:5006 --use-xheaders
directory=/opt/webapps/flask_telemetry
autostart=false
autorestart=true
startretries=3
user=nobody

'/etc/supervisor/conf.d/flask.conf':

[program:flask]
command=/opt/envs/virtual/bin/gunicorn -b :8118 website_app:app
directory=/opt/webapps/flask_telemetry
user=nobody
autostart=true
autorestart=true
redirect_stderr=true

And here is my Flask app (Note that I hashed out security related info):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy
from flask import render_template, request, redirect, url_for
from flask_security import Security, SQLAlchemyUserDatastore, UserMixin, RoleMixin, login_required, roles_accepted, current_user
from flask_security.decorators import anonymous_user_required
from flask_security.forms import LoginForm
from bokeh.embed import autoload_server
from bokeh.client import pull_session
from wtforms import StringField
from wtforms.validators import InputRequired
from werkzeug.contrib.fixers import ProxyFix

app = Flask(__name__)
app.config['SQLALCHEMY_DATABASE_URI'] = 'postgresql://###:###@localhost/telemetry'
app.config['SECRET_KEY'] = '###'
app.config['SECURITY_REGISTERABLE'] = True
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
app.config['SECURITY_USER_IDENTITY_ATTRIBUTES'] = 'username'
app.config['SECURITY_POST_LOGIN_VIEW'] = '/re_direct'
app.debug = True
db = SQLAlchemy(app)

# Define models
roles_users = db.Table('roles_users',
                       db.Column('user_id', db.Integer(), db.ForeignKey('user.id')),
                       db.Column('role_id', db.Integer(), db.ForeignKey('role.id')))

class Role(db.Model, RoleMixin):
    id = db.Column(db.Integer(), primary_key=True)
    name = db.Column(db.String(80), unique=True)
    description = db.Column(db.String(255))

class User(db.Model, UserMixin):
    id = db.Column(db.Integer, primary_key=True)
    username = db.Column(db.String(255), unique=True)
    password = db.Column(db.String(255))
    active = db.Column(db.Boolean())
    confirmed_at = db.Column(db.DateTime())
    roles = db.relationship('Role', secondary=roles_users, backref=db.backref('users', lazy='dynamic'))

class ExtendedLoginForm(LoginForm):
    email = StringField('Username', [InputRequired()])

# Setup Flask-Security
user_datastore = SQLAlchemyUserDatastore(db, User, Role)
security = Security(app, user_datastore, login_form=ExtendedLoginForm)

# Views
@app.route('/')
@anonymous_user_required
def index():
    return render_template('index.html')

@app.route('/re_direct/')
@login_required
def re_direct():
    identifier = current_user.username
    print(identifier)
    return redirect(url_for(identifier))

@app.route('/index/')
@login_required
@roles_accepted('admin')
def admin():
    return render_template('admin.html')

@app.route("/company_abc/")
@login_required
@roles_accepted('company_abc', 'admin')
def company_abc():
    url='http://###.###.###.##:5006'
    session=pull_session(url=url,app_path="/company_abc")
    bokeh_script=autoload_server(None,app_path="/company_abc",session_id=session.id,url=url)
    return render_template("company_abc.html", bokeh_script=bokeh_script)

@app.route("/company_xyz/")
@login_required
@roles_accepted('company_xyz', 'admin')
def company_xyz():
    url='http://###.###.###.##:5006'
    session=pull_session(url=url,app_path="/company_xyz")
    bokeh_script=autoload_server(None,app_path="/company_xyz",session_id=session.id,url=url)
    return render_template("company_xyz.html", bokeh_script=bokeh_script)

app.wsgi_app = ProxyFix(app.wsgi_app)

if __name__ == '__main__':
    app.run()

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From reallfqq-nginx at yahoo.fr Fri May 12 10:25:41 2017
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Fri, 12 May 2017 12:25:41 +0200 Subject: =?UTF-8?B?UmU6IGZhc3RjZ2kgY2FjaGUgYmFja2dyb3VuZCB1cGRhdGUgc3NpINC/0L7QtNC3?= =?UTF-8?B?0LDQv9GA0L7RgdC+0LI=?= In-Reply-To: <20170510171846.GG94542@Romans-MacBook-Air.local> References: <08bad806d8496672f0c391b9221be703.NginxMailingListEnglish@forum.nginx.org> <20170510171846.GG94542@Romans-MacBook-Air.local> Message-ID: https://mailman.nginx.org/mailman/listinfo/nginx-ru --- *B. R.* 2017-05-10 19:18 GMT+02:00 Roman Arutyunyan : > ?????? ????, > > On Wed, May 10, 2017 at 12:04:39PM -0400, metalfm1 wrote: > > ???????????! > > > > ????????? fastcgi_cache_background_update ??????? ????? ???? ??? ssi > > ???????????. > > ???? ?????? ?? ??????? ?????? ???????, ??????? ???????? ??????????? 1 > ???, > > ???? nginx + php-fpm. ? ???? ??????????? ???????? ???????? ???? ?????? > > ??????? ????????? ?????? ??????? ????? ???????? ? ????????? ssi > ????????? ? > > ?????????? ??? ?? 1 ???. ???????????? ????????? fastcgi ?????? ?? php ? > > ??????? ????????? Cache-Control. > > > > ? ??????????? ??????? ???, nginx ??????? ???????? ?????????? /ssi_dev/ ? > > ?????????? ?? ?? ????. ???????? ?????????? ????? ??? ?????????. > > > > ??????? ????????? nginx > > - ???? ???? ??????? ? ????, ?? ?? ??????? ????????(HIT) > > - ???? ???? ??????? ? ????, ?? ?? ???????, ?? ??????? ???????? ?????????? > > ??????(STALE) ? ???????? ????????? ?? ??????? ????(EXPIRED) > > > > ???????? ??????????? ? ???, ??? ????????? ?? ??????? ???? ??????????? ? > > ??????????? ??????. ?? ???? ???????? ?????? ???? ?????????? ??????????. > > ????????? ???? ???????? ?? ???????????, ???? ? ??? ?????? ??? ????????, > > ????????? nginx ????????????? ????????????. ??????? ???????? ?????? > ?????? > > ???????? ? ???????? ????????????? ????????? ?? ??????????. > > ?? ??????? ?????? background update ?????????? ???, ??? ?? ????????? > ???????? > ??????, ???? ??????? ? ??????????. ??? ??? ??? ??? ??????. > ? ???? https://trac.nginx.org/nginx/ticket/1249 ? ??????? ????, ??????? 
> ??????
> ??? ????????.
>
> [..]
>
> --
> Roman Arutyunyan
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From r at roze.lv Fri May 12 10:33:16 2017
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 12 May 2017 13:33:16 +0300
Subject: Reverse-proxying: Flask app with Bokeh server on Nginx
In-Reply-To: 
References: 
Message-ID: 

> I understand that I might have to use a method called reverse proxying,
> which is described here. However, I wasn't able to get it to work.

Well, you already do this "method called reverse proxying" with the Flask app, so you have to do the same with the Bokeh app, as all modern/current browsers require all resources on an HTTPS website to also be loaded through secure channels.

> Does anybody have an idea how to solve this? A similar problem was
> described here.

You can basically copy the configuration from the SO thread you linked (obviously you can change the location names as you wish / they just need to match):

In nginx add:

location /bokeh/ {
    proxy_pass http://127.0.1.1:5006;

    # .. with the rest of directives
}

relaunch the Bokeh app with

--prefix=/bokeh/

and (if it takes part in the url construction rather than application background requests) change the url variable in the Flask app from

url='http://###.###.###.##:5006'

to

url='https://yourserver/bokeh/'

or even just relative url='/bokeh/' ..
(I'm not familiar with this software stack so you have to test yourself) rr From mdounin at mdounin.ru Fri May 12 10:39:12 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 12 May 2017 13:39:12 +0300 Subject: proxy_upstream_next while no live upstreams In-Reply-To: <2170c0de.5fe1.15bfb1d524c.Coremail.wubingzheng@163.com> References: <20170510124519.GD55433@mdounin.ru> <4a23e096.ac39.15bf2c1c470.Coremail.wubingzheng@163.com> <20170510144306.GJ55433@mdounin.ru> <2170c0de.5fe1.15bfb1d524c.Coremail.wubingzheng@163.com> Message-ID: <20170512103912.GT55433@mdounin.ru> Hello! On Fri, May 12, 2017 at 01:24:14PM +0800, Wu Bingzheng wrote: > > The last request before this 502 request is almost 20 minutes ago and its response code is 200. > > The proxy_next_upstream conf: > proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; > > Here is the access log. The upstream server 192.168.0.6 is DOWN. The line-10 is the 502 request: > > 1 [03/May/2017:14:35:38 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.6:8181, 192.168.0.5:8181" 0.012 0.001, 0.011 > 2 [03/May/2017:14:35:38 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.013 0.013 > 3 [03/May/2017:14:54:30 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.206 0.206 > 4 [03/May/2017:15:03:08 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.154 0.154 > 5 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.6:8181, 192.168.0.5:8181" 0.012 0.000, 0.012 > 6 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.014 0.014 > 7 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.016 0.016 > 8 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.017 0.017 > 9 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.011 0.011 > 10 [03/May/2017:15:59:06 -0400] "POST /x/y HTTP/1.1" 502 "192.168.0.6:8181, test_backend" 0.000 0.000, 0.000 > 11 [03/May/2017:15:59:07 -0400] "POST /x/y HTTP/1.1" 200 
"10.255.222.206:8181" 0.260 0.260 Looking into response status code in access logs is not enough to understand if a server is up or down. For at least the following reasons: - there might be over requests currently in flight which are not yet logged; - errors may occur while sending response body, and hence status code will not show if there was an error. It is usually a good idea to look into error logs instead. -- Maxim Dounin http://nginx.org/ From cj.wijtmans at gmail.com Fri May 12 14:13:54 2017 From: cj.wijtmans at gmail.com (Jared Mulder) Date: Fri, 12 May 2017 16:13:54 +0200 Subject: Page loading is very slow with ngx_http_auth_pam_module In-Reply-To: References: Message-ID: are your shell logins slow as well? sometimes PAM behaves like this. Live long and prosper, Christ-Jan Wijtmans https://github.com/cjwijtmans http://facebook.com/cj.wijtmans http://twitter.com/cjwijtmans On Thu, May 11, 2017 at 1:48 PM, Cumali Ceylan wrote: > Hello, > > I built nginx with ngx_http_auth_pam_module, setup linux-pam for local > passwords with pam_unix module and setup nginx to use this pam config. > Linux-pam config file is below: > > auth sufficient pam_unix.so nullok > account required pam_unix.so > > When I did this, loading of page is very slow. If i remove this config and > simply setup nginx for basic authentication (with auth_basic), page loading > is again turns to normal. > > Is there anyone who observed same thing with me ? Any information will be > helpful. > > Kind regards, > Cumali Ceylan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From cumali.ceylan at gmail.com Fri May 12 14:22:41 2017 From: cumali.ceylan at gmail.com (Cumali Ceylan) Date: Fri, 12 May 2017 17:22:41 +0300 Subject: Page loading is very slow with ngx_http_auth_pam_module In-Reply-To: References: Message-ID: <66266993-d231-cf59-d7eb-579fac7b61f9@gmail.com> No it seems good to me. 
In addition, I tried to install nscd upon a suggestion but it didn't make a difference. I can see that nscd caches the user in its log file.

On 05/12/2017 05:13 PM, Jared Mulder wrote:
> are your shell logins slow as well? sometimes PAM behaves like this.
>
> Live long and prosper,
>
> Christ-Jan Wijtmans
> https://github.com/cjwijtmans
> http://facebook.com/cj.wijtmans
> http://twitter.com/cjwijtmans
>
>
> On Thu, May 11, 2017 at 1:48 PM, Cumali Ceylan wrote:
>> Hello,
>>
>> I built nginx with ngx_http_auth_pam_module, setup linux-pam for local
>> passwords with pam_unix module and setup nginx to use this pam config.
>> Linux-pam config file is below:
>>
>> auth sufficient pam_unix.so nullok
>> account required pam_unix.so
>>
>> When I did this, loading of page is very slow. If i remove this config and
>> simply setup nginx for basic authentication (with auth_basic), page loading
>> is again turns to normal.
>>
>> Is there anyone who observed same thing with me ? Any information will be
>> helpful.
>>
>> Kind regards,
>> Cumali Ceylan
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From cas.xyz at googlemail.com Fri May 12 14:28:12 2017
From: cas.xyz at googlemail.com (J K)
Date: Fri, 12 May 2017 16:28:12 +0200
Subject: Reverse-proxying: Flask app with Bokeh server on Nginx
Message-ID: 

> Message: 2
> Date: Fri, 12 May 2017 13:33:16 +0300
> From: "Reinis Rozitis"
> To: 
> Subject: Re: Reverse-proxying: Flask app with Bokeh server on Nginx
> Message-ID: 
> Content-Type: text/plain; format=flowed; charset="UTF-8";
> reply-type=original
>
> > I understand that I might have to use a method called reverse proxying,
> > which is described here. However, I wasn't able to get it to work.
>
> Well you already do this "method called reverse proxying" with the Flask
> app
> so you have to do the same with the Bokeh app as all modern/current
> browsers
> require all resources on a HTTPS website to be also loaded through secure
> channels.
>
> > Does anybody have an idea how to solve this? A similar problem was
> > described here.
>
> You can basically copy the configuration from the SO thread you linked
> (obviously you can change the location names as you wish / they just need
> to match):
>
> In nginx add:
>
> location /bokeh/ {
>     proxy_pass http://127.0.1.1:5006;
>
>     # .. with the rest of directives
> }
>
> relaunch the Bokeh app with
>
> --prefix=/bokeh/
>
> and (if takes part in the url construction rather than application
> background requests) change the url variable in the Flask app
>
> url='http://###.###.###.##:5006'
> to
> url='https://yourserver/bokeh/'
>
> or even just relative url='/bokeh/' .. (I'm not familiar with this software
> stack so you have to test yourself)

Thanks Reinis for your quick reply. I did the changes you suggested.

1. in '/etc/nginx/sites-available/default' I added a new location as follows:

location /bokeh/ {
    proxy_pass http://127.0.0.1:5006;  # you suggested 127.0.*1*.1, but I figured that was a typo
    proxy_redirect off;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}

2. in '/etc/supervisor/conf.d/bokeh_serve.conf' I added --prefix=/bokeh/:

[program:bokeh_serve]
command=/opt/envs/virtual/bin/bokeh serve company_abc.py company_xyz.py geomorphix.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com --allow-websocket-origin=example.com --host=138.197.132.46:5006 --use-xheaders
directory=/opt/webapps/flask_telemetry
autostart=false
autorestart=true
startretries=3
user=nobody

3.
in the Flask app, I changed the URL to:

url='https://138.197.132.46:5006/bokeh/'

Now, when I open the app in the browser I get a 502 Bad Gateway, and the Flask log file says the following:

raise IOError("Cannot pull session document because we failed to connect to the server (to start the server, try the 'bokeh serve' command)")
IOError: Cannot pull session document because we failed to connect to the server (to start the server, try the 'bokeh serve' command)

However, when I go to the page http://###.###.##.##:5006/bokeh/geomorphix, the app works fine.

Do you have an idea what's going on?

Cheers,
Julian
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From r at roze.lv Fri May 12 15:26:39 2017
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 12 May 2017 18:26:39 +0300
Subject: Reverse-proxying: Flask app with Bokeh server on Nginx
In-Reply-To: 
References: 
Message-ID: <437D05EFD1A24D9292CCE7BE45B2127C@Neiroze>

> 3. in the Flask app, I changed the URL
> to: url='https://138.197.132.46:5006/bokeh/'
> Now, when I open the app in the browser I get a 502 Bad Gateway, and the
> Flask log file says the following:
> raise IOError("Cannot pull session document because we failed to connect
> to the server (to start the server, try the 'bokeh serve' command)")

Well, it seems that the Flask app also uses the url for background requests. You can't mix 'https://' and the :5006 port in the same url - this way the request goes to port 5006 but is expected to be encrypted as well, and if I understand correctly Bokeh doesn't support SSL.

p.s. For best performance you could arrange for the Flask->Bokeh requests to go through plain http, while the html template/output sent to clients uses another variable or relative paths.
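Putting the advice in this thread together, the proxied location may also need to forward Bokeh's websocket connection (Bokeh apps keep one open for data updates). A sketch only - it assumes Bokeh listens on 127.0.0.1:5006 and uses the standard nginx Upgrade/Connection handshake; untested against this exact setup:

```nginx
location /bokeh/ {
    proxy_pass http://127.0.0.1:5006;

    # websocket support: switch protocols on request
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";

    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
}
```

With this in place the browser talks wss:// to nginx over the HTTPS port, and nginx talks plain ws:// to Bokeh on localhost.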
rr

From r at roze.lv Fri May 12 15:33:47 2017
From: r at roze.lv (Reinis Rozitis)
Date: Fri, 12 May 2017 18:33:47 +0300
Subject: Reverse-proxying: Flask app with Bokeh server on Nginx
In-Reply-To: 
References: 
Message-ID: <24A35BED74E7436B9515F58950D01034@Neiroze>

What I forgot to add: you need to change the 'url' (note the domain part) to:

url='https://yourdomain/bokeh/'

By looking at your error messages it seems that the 'url' is also directly used for client requests (probably placed in the html template) - which means you can't use a plain IP, because then the browser will most likely just report an SSL certificate and domain mismatch.

rr

From francis at daoine.org Fri May 12 20:46:30 2017
From: francis at daoine.org (Francis Daly)
Date: Fri, 12 May 2017 21:46:30 +0100
Subject: Reverse-proxying: Flask app with Bokeh server on Nginx
In-Reply-To: 
References: 
Message-ID: <20170512204630.GC10157@daoine.org>

On Fri, May 12, 2017 at 04:28:12PM +0200, J K via nginx wrote:

Hi there,

> > location /bokeh/ {
> >     proxy_pass http://127.0.1.1:5006;
> >
> >     # .. with the rest of directives
> > }
> >
> > relaunch the Bokeh app with
> >
> > --prefix=/bokeh/
> >
> > and (if takes part in the url construction rather than application
> > background requests) change the url variable in the Flask app
> >
> > url='http://###.###.###.##:5006'
> > to
> > url='https://yourserver/bokeh/'

> 1. in '/etc/nginx/sites-available/default' I added a new location as follow:
>
> location /bokeh/ {
>     proxy_pass http://127.0.0.1:5006;  # you suggested 127.0.*1*.1, but I figured that was a typo

The proxy_pass address should be wherever your "bokeh" http server is actually listening. Which probably means that whatever you use up there...
> command=/opt/envs/virtual/bin/bokeh serve company_abc.py company_xyz.py > geomorphix.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com > --allow-websocket-origin=example.com --host=138.197.132.46:5006 > --use-xheaders you should also use up there as --host. I suspect that making them both be 127.0.0.1 will be the easiest way of reverse-proxying things; but I also suspect that the "--allow-websocket-origin" part suggests that you may want to configure nginx to reverse proxy the web socket connection too. Notes are at http://nginx.org/en/docs/http/websocket.html It will be helpful to have a very clear picture of what talks to what, when things are working normally; that should make it easier to be confident that the same links are in place with nginx in the mix. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Fri May 12 21:25:03 2017 From: francis at daoine.org (Francis Daly) Date: Fri, 12 May 2017 22:25:03 +0100 Subject: Trailing Slash Redirect Loop Help In-Reply-To: References: <20170429123516.GP10157@daoine.org> Message-ID: <20170512212503.GD10157@daoine.org> On Wed, May 10, 2017 at 11:10:36AM -0400, Alex Med wrote: Hi there, > Yes, I am realizing that is a nightmare going against the trailing-slashed > directory nature. So I am going to have this rule take off slashes from > anything but directories. Do you have any suggestions as how to do it, but > without "if" There's a few possibly-useful questions to consider: Why do you want to do this? As in: what is the problem that you want to solve? Possibly there is a better approach than this. Also: given that you want to do this, why do you want to do this without "if"? Sometimes "if" is the correct tool to use. And: what's a file and what's a directory? Your initial config example used proxy_pass, which refers to remote urls, not files or directories. *This* nginx does not know whether an upstream url corresponds to a file or to something else. 
So that may want to be considered before designing the solution.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org

From jiangmuhui at gmail.com Sat May 13 04:02:13 2017
From: jiangmuhui at gmail.com (Muhui Jiang)
Date: Sat, 13 May 2017 12:02:13 +0800
Subject: different Memory consumption for H1 and H2
In-Reply-To: <20170511151143.GR55433@mdounin.ru>
References: <20170511151143.GR55433@mdounin.ru>
Message-ID: 

Hi

Thanks for your great answer. You mentioned that sendfile() avoids the copying between kernel space and userland. I am curious: why does this whole process not need to malloc any memory? Could you please explain more about the detailed implementation of sendfile()? Many thanks

Regards
Muhui

2017-05-11 23:11 GMT+08:00 Maxim Dounin :
> Hello!
>
> On Thu, May 11, 2017 at 10:32:41PM +0800, Muhui Jiang wrote:
>
> > Recently, I did an experiment to test the memory consumption of nginx. I
> > request a large static zip file. I explored the debug information of
> > nginx.
> >
> > For H2, below is a part of the log, I noticed that every time server will
> > allocate 65536 bytes, I increase the connection number, I noticed that
> > the
> > server's memory consumption will reach to a threshhold and then increase
> > very slowly:
>
> [...]
> > > 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2 frame out: > > 00000000026155F0 sid:1 bl:0 len:1 > > 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 9 > > 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL buf copy: 1 > > 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL to write: 138 > > 2017/05/11 04:54:20 [debug] 29451#0: *10499 SSL_write: 138 > > 2017/05/11 04:54:20 [debug] 29451#0: *10499 http2:1 DATA frame > > 00000000026155F0 was sent > > [...] > > > For H/1.1, below is a part of the debug log, no malloc is noticed during > > the send file process. And even when I increase the connection number to > a > > very large value, the result shows nginx's memory consumption is still > very > > low. : > > [...] > > > 2017/05/11 22:29:06 [debug] 29451#0: *11015 http write filter limit 0 > > 2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: @72470952 584002 > > 2017/05/11 22:29:06 [debug] 29451#0: *11015 sendfile: 260640 of 584002 > > [...] > > > Hope to get your comments and what are the difference of nginx's memory > > allocation mechanisms between HTTP/2.0 and HTTP/1.1. Many Thanks > > The difference is due to sendfile(), which is used in case of > plain HTTP, and can't be used with SSL-encrypted connections. > HTTP/2 is normally used with SSL encryption, so it is usually not > possible to use sendfile() with HTTP/2. > > When sendfile() is not available or switched off, nginx uses > output_buffers (http://nginx.org/r/output_buffers) to read a file > from disk, and then writes these buffers to the connection. > > When it is possible to use the sendfile(), nginx does not try to > read contents of static files it returns, but simply calls > sendfile(). This is usually most effecient approach , as it > avoids additional buffers and copying between kernel space and > userland. Unfortunately, it is not available when using HTTPS > (including HTTP/2). 
> > -- > Maxim Dounin > http://nginx.org/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From wubingzheng at 163.com Sat May 13 04:40:13 2017 From: wubingzheng at 163.com (Wu Bingzheng) Date: Sat, 13 May 2017 12:40:13 +0800 (CST) Subject: proxy_upstream_next while no live upstreams In-Reply-To: <20170512103912.GT55433@mdounin.ru> References: <20170510124519.GD55433@mdounin.ru> <4a23e096.ac39.15bf2c1c470.Coremail.wubingzheng@163.com> <20170510144306.GJ55433@mdounin.ru> <2170c0de.5fe1.15bfb1d524c.Coremail.wubingzheng@163.com> <20170512103912.GT55433@mdounin.ru> Message-ID: <4232a391.2234.15c001b6095.Coremail.wubingzheng@163.com> Because the last request before this 502-request was almost 20 minutes ago, so there was no error log in 20 minutes before this 502-request. This is some strange, and only happens very rarely. I know it's difficult to debug this if not reproduced. I just ask here to see if this is a known question. Thanks for your answer. Wu At 2017-05-12 18:39:12, "Maxim Dounin" wrote: >Hello! > >On Fri, May 12, 2017 at 01:24:14PM +0800, Wu Bingzheng wrote: > >> >> The last request before this 502 request is almost 20 minutes ago and its response code is 200. >> >> The proxy_next_upstream conf: >> proxy_next_upstream error timeout invalid_header http_502 http_503 http_504; >> >> Here is the access log. The upstream server 192.168.0.6 is DOWN. 
The line-10 is the 502 request: >> >> 1 [03/May/2017:14:35:38 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.6:8181, 192.168.0.5:8181" 0.012 0.001, 0.011 >> 2 [03/May/2017:14:35:38 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.013 0.013 >> 3 [03/May/2017:14:54:30 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.206 0.206 >> 4 [03/May/2017:15:03:08 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.154 0.154 >> 5 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.6:8181, 192.168.0.5:8181" 0.012 0.000, 0.012 >> 6 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.014 0.014 >> 7 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.016 0.016 >> 8 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.017 0.017 >> 9 [03/May/2017:15:40:51 -0400] "POST /x/y HTTP/1.1" 200 "192.168.0.5:8181" 0.011 0.011 >> 10 [03/May/2017:15:59:06 -0400] "POST /x/y HTTP/1.1" 502 "192.168.0.6:8181, test_backend" 0.000 0.000, 0.000 >> 11 [03/May/2017:15:59:07 -0400] "POST /x/y HTTP/1.1" 200 "10.255.222.206:8181" 0.260 0.260 > >Looking into response status code in access logs is not enough to >understand if a server is up or down. For at least the following >reasons: > >- there might be over requests currently in flight which are not > yet logged; > >- errors may occur while sending response body, and hence status > code will not show if there was an error. > >It is usually a good idea to look into error logs instead. 
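For reference, the verbosity of that error log is set by the error_log directive; a sketch only - the path is an example, not from this thread:

```nginx
# second argument is the severity threshold: debug, info, notice,
# warn, error, crit, alert, emerg -- lower levels record more detail
error_log /var/log/nginx/error.log info;
```

Upstream failures that never surface in the access log status column are recorded here as they happen.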
> >-- >Maxim Dounin >http://nginx.org/ >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx From phil at pricom.com.au Sun May 14 22:43:14 2017 From: phil at pricom.com.au (Philip Rhoades) Date: Mon, 15 May 2017 08:43:14 +1000 Subject: Last roadblock changing from Apache: SSL & PHP Message-ID: People, If I can solve this last problem (that I have just spent all night on), I can completely replace Apache with Nginx. I am using RoundCubeMail as my Webmail client - it is written in PHP (the only PHP thing on my server) but it has been working happily with Apache for many years. I have RCM in an SSL protected directory: /home/ssl/webmail When I couldn't get that working I tried testing the setup with a simple: /home/ssl/index.php file that outputs PHP info (attached) - but I had exactly the same problem with that - a blank screen except for a green block cursor in the bottom right of the screen ie no text output in the browser and no errors in any of the logs. I also attach: /etc/nginx/conf.d/php-fpm.conf and: /etc/php-fpm.d/www.conf I would _really_ appreciate it if anyone could tell me what is wrong with my configuration . . (running on Fedora 25 x86_64). Thanks, Phil. -- Philip Rhoades PO Box 896 Cowra NSW 2794 Australia E-mail: phil at pricom.com.au -------------- next part -------------- A non-text attachment was scrubbed... Name: index.php Type: text/x-php Size: 24 bytes Desc: not available URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: www.conf URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: php-fpm.conf URL: From phil at pricom.com.au Sun May 14 22:50:06 2017 From: phil at pricom.com.au (Philip Rhoades) Date: Mon, 15 May 2017 08:50:06 +1000 Subject: Last roadblock changing from Apache: SSL & PHP #2 In-Reply-To: References: Message-ID: <435bf4b41de65944825d71fc860639a5@pricom.com.au> Also, nginx and php-fpm were actually running as services of course . . On 2017-05-15 08:43, Philip Rhoades wrote: > People, > > If I can solve this last problem (that I have just spent all night > on), I can completely replace Apache with Nginx. I am using > RoundCubeMail as my Webmail client - it is written in PHP (the only > PHP thing on my server) but it has been working happily with Apache > for many years. I have RCM in an SSL protected directory: > > /home/ssl/webmail > > When I couldn't get that working I tried testing the setup with a > simple: > > /home/ssl/index.php > > file that outputs PHP info (attached) - but I had exactly the same > problem with that - a blank screen except for a green block cursor in > the bottom right of the screen ie no text output in the browser and no > errors in any of the logs. > > I also attach: > > /etc/nginx/conf.d/php-fpm.conf > > and: > > /etc/php-fpm.d/www.conf > > I would _really_ appreciate it if anyone could tell me what is wrong > with my configuration . . (running on Fedora 25 x86_64). > > Thanks, > > Phil. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Philip Rhoades PO Box 896 Cowra NSW 2794 Australia E-mail: phil at pricom.com.au From lists-nginx at swsystem.co.uk Sun May 14 23:43:44 2017 From: lists-nginx at swsystem.co.uk (Steve Wilson) Date: Mon, 15 May 2017 00:43:44 +0100 Subject: Last roadblock changing from Apache: SSL & PHP In-Reply-To: References: Message-ID: <0794e5f7-5192-26f0-3d15-58e5d342ec18@swsystem.co.uk> Hi, It doesn't look like that's actually getting passed to php-fpm. 
You're possibly missing the php handling in your server{} block. Check that you've got a location set for php files to do a fastcgi_pass, e.g.:

location ~ \.php$ {
    fastcgi_pass unix:/var/run/php-fpm/sock;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    client_body_timeout 300;
    include /etc/nginx/fastcgi_params;
}

The above is from one of my roundcube instances and makes sure that php files are processed by php.

Steve

On 14/05/2017 23:43, Philip Rhoades wrote:
> People,
>
> If I can solve this last problem (that I have just spent all night on),
> I can completely replace Apache with Nginx. I am using RoundCubeMail as
> my Webmail client - it is written in PHP (the only PHP thing on my
> server) but it has been working happily with Apache for many years. I
> have RCM in an SSL protected directory:
>
> /home/ssl/webmail
>
> When I couldn't get that working I tried testing the setup with a simple:
>
> /home/ssl/index.php
>
> file that outputs PHP info (attached) - but I had exactly the same
> problem with that - a blank screen except for a green block cursor in
> the bottom right of the screen ie no text output in the browser and no
> errors in any of the logs.
>
> I also attach:
>
> /etc/nginx/conf.d/php-fpm.conf
>
> and:
>
> /etc/php-fpm.d/www.conf
>
> I would _really_ appreciate it if anyone could tell me what is wrong
> with my configuration . . (running on Fedora 25 x86_64).
>
> Thanks,
>
> Phil.
> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From rainer at ultra-secure.de Mon May 15 00:07:28 2017 From: rainer at ultra-secure.de (Rainer Duffner) Date: Mon, 15 May 2017 02:07:28 +0200 Subject: Last roadblock changing from Apache: SSL & PHP #2 In-Reply-To: <435bf4b41de65944825d71fc860639a5@pricom.com.au> References: <435bf4b41de65944825d71fc860639a5@pricom.com.au> Message-ID: <45A1CDC7-2C64-4FB9-8180-77FE80CAC414@ultra-secure.de> > Am 15.05.2017 um 00:50 schrieb Philip Rhoades : > > Also, nginx and php-fpm were actually running as services of course . . Maybe strip the comments next time you post a config file? I have: server { set_real_ip_from 127.0.0.12; real_ip_header X-Forwarded-For; listen 80; server_name bla ; root /usr/local/www/roundcube; index index.php index.html index.htm; access_log /var/log/nginx/bla_access.log; error_log /var/log/nginx/bla_error.log; location /roundcube { root /usr/local/www/roundcube ; try_files $uri $uri/ /index.php?q=$uri&$args; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/local/www//nginx-errors; } location ~ ^/(README.md|INSTALL|LICENSE|CHANGELOG|UPGRADING)$ { deny all; } location ~ ^/(config|temp|logs)/ { deny all; } location ~ /\. 
{ deny all; access_log off; log_not_found off; } # pass the PHP scripts to FastCGI server listening on /var/run/fastcgi/www.sock location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/fastcgi/www.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } root at webmail:/usr/local/etc/nginx # cat fastcgi_params fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param SCRIPT_NAME $fastcgi_script_name; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param HTTPS $https if_not_empty; fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; # PHP only, required if PHP was built with --enable-force-cgi-redirect fastcgi_param REDIRECT_STATUS 200; fastcgi_keep_conn on; fastcgi_split_path_info ^(.+\.php)(.*)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; Not chrooted, though, because it?s in a jail and I haven?t figured out how to setup all the fancy nullfs mounts in a jail. It?s behind a haproxy that distributes traffic between various jails - but that?s irrelevant for the current case. -------------- next part -------------- An HTML attachment was scrubbed... URL: From phil at pricom.com.au Mon May 15 00:34:54 2017 From: phil at pricom.com.au (Philip Rhoades) Date: Mon, 15 May 2017 10:34:54 +1000 Subject: Last roadblock changing from Apache: SSL & PHP - SUCCESS! 
In-Reply-To: <0794e5f7-5192-26f0-3d15-58e5d342ec18@swsystem.co.uk> References: <0794e5f7-5192-26f0-3d15-58e5d342ec18@swsystem.co.uk> Message-ID: <08de390897e6ad7194dcbbf56ae684ba@pricom.com.au> Steve, On 2017-05-15 09:43, Steve Wilson wrote: > Hi, > > It doesn't look like that's actually getting passed to php-fpm. > You're possibly missing the php handling in your server{} block. > Check that you've got a location set for php files to do a > fastcgi_pass. Isn't that what this does?: upstream php { server unix:/run/php-fpm/www.sock ; } server { . . fastcgi_pass php; } } > eg. > location ~ \.php$ { > fastcgi_pass unix:/var/run/php-fpm/sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME > $document_root$fastcgi_script_name; > client_body_timeout 300; > include /etc/nginx/fastcgi_params; > } > > The above is from one of my roundcube instances and makes sure that php > files are processed by php. Yes! I added that one line: fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; and it started working! Many thanks! Regards, Phil. > Steve > > On 14/05/2017 23:43, Philip Rhoades wrote: >> People, >> >> If I can solve this last problem (that I have just spent all night >> on), >> I can completely replace Apache with Nginx. I am using RoundCubeMail >> as >> my Webmail client - it is written in PHP (the only PHP thing on my >> server) but it has been working happily with Apache for many years. I >> have RCM in an SSL protected directory: >> >> /home/ssl/webmail >> >> When I couldn't get that working I tried testing the setup with a >> simple: >> >> /home/ssl/index.php >> >> file that outputs PHP info (attached) - but I had exactly the same >> problem with that - a blank screen except for a green block cursor in >> the bottom right of the screen ie no text output in the browser and no >> errors in any of the logs. 
>> >> I also attach: >> >> /etc/nginx/conf.d/php-fpm.conf >> >> and: >> >> /etc/php-fpm.d/www.conf >> >> I would _really_ appreciate it if anyone could tell me what is wrong >> with my configuration . . (running on Fedora 25 x86_64). >> >> Thanks, >> >> Phil. >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Philip Rhoades PO Box 896 Cowra NSW 2794 Australia E-mail: phil at pricom.com.au From phil at pricom.com.au Mon May 15 00:35:40 2017 From: phil at pricom.com.au (Philip Rhoades) Date: Mon, 15 May 2017 10:35:40 +1000 Subject: Last roadblock changing from Apache: SSL & PHP #2 In-Reply-To: <45A1CDC7-2C64-4FB9-8180-77FE80CAC414@ultra-secure.de> References: <435bf4b41de65944825d71fc860639a5@pricom.com.au> <45A1CDC7-2C64-4FB9-8180-77FE80CAC414@ultra-secure.de> Message-ID: Rainer, On 2017-05-15 10:07, Rainer Duffner wrote: >> Am 15.05.2017 um 00:50 schrieb Philip Rhoades : >> Also, nginx and php-fpm were actually running as services of course >> . . > > Maybe strip the comments next time you post a config file? Ah . . good point. Thanks for your response. Regards, Phil. 
> I have: > > server { > set_real_ip_from 127.0.0.12; real_ip_header X-Forwarded-For; > listen 80; > server_name bla ; > root /usr/local/www/roundcube; > index index.php index.html index.htm; > access_log /var/log/nginx/bla_access.log; > error_log /var/log/nginx/bla_error.log; > location /roundcube { > root /usr/local/www/roundcube ; > try_files $uri $uri/ /index.php?q=$uri&$args; > } > error_page 404 /404.html; > error_page 500 502 503 504 /50x.html; > location = /50x.html { > root /usr/local/www//nginx-errors; > } > location ~ ^/(README.md|INSTALL|LICENSE|CHANGELOG|UPGRADING)$ { > deny all; > } > location ~ ^/(config|temp|logs)/ { > deny all; > } > location ~ /\. { > deny all; > access_log off; > log_not_found off; > } > # pass the PHP scripts to FastCGI server listening on > /var/run/fastcgi/www.sock > location ~ \.php$ { > try_files $uri =404; > fastcgi_pass unix:/var/run/fastcgi/www.sock; > fastcgi_index index.php; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > } > } > > root at webmail:/usr/local/etc/nginx # cat fastcgi_params > > fastcgi_param QUERY_STRING $query_string; > fastcgi_param REQUEST_METHOD $request_method; > fastcgi_param CONTENT_TYPE $content_type; > fastcgi_param CONTENT_LENGTH $content_length; > > fastcgi_param SCRIPT_NAME $fastcgi_script_name; > fastcgi_param REQUEST_URI $request_uri; > fastcgi_param DOCUMENT_URI $document_uri; > fastcgi_param DOCUMENT_ROOT $document_root; > fastcgi_param SERVER_PROTOCOL $server_protocol; > fastcgi_param HTTPS $https if_not_empty; > > fastcgi_param GATEWAY_INTERFACE CGI/1.1; > fastcgi_param SERVER_SOFTWARE nginx/$nginx_version; > > fastcgi_param REMOTE_ADDR $remote_addr; > fastcgi_param REMOTE_PORT $remote_port; > fastcgi_param SERVER_ADDR $server_addr; > fastcgi_param SERVER_PORT $server_port; > fastcgi_param SERVER_NAME $server_name; > > # PHP only, required if PHP was built with --enable-force-cgi-redirect > fastcgi_param REDIRECT_STATUS 200; > > 
fastcgi_keep_conn on; > fastcgi_split_path_info ^(.+\.php)(.*)$; > fastcgi_param PATH_INFO $fastcgi_path_info; > fastcgi_param PATH_TRANSLATED $document_root$fastcgi_path_info; > > Not chrooted, though, because it?s in a jail and I haven?t figured > out how to setup all the fancy nullfs mounts in a jail. > > It?s behind a haproxy that distributes traffic between various jails > - but that?s irrelevant for the current case. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Philip Rhoades PO Box 896 Cowra NSW 2794 Australia E-mail: phil at pricom.com.au From thomas at glanzmann.de Mon May 15 06:16:38 2017 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Mon, 15 May 2017 08:16:38 +0200 Subject: nginx ssl_verify_client on leads to segmentation fault Message-ID: <20170515061638.GD6302@glanzmann.de> Hello, I'm running nginx from git HEAD, when I add the following two lines to a https server: ssl_client_certificate /tmp/ca.crt; ssl_verify_client on; and connect to the website, I get: 2017/05/15 08:12:04 [alert] 9109#0: worker process 12908 exited on signal 11 (core dumped) 2017/05/15 08:12:04 [alert] 9109#0: worker process 12909 exited on signal 11 (core dumped) 2017/05/15 08:12:10 [alert] 9109#0: worker process 12916 exited on signal 11 (core dumped) I enabled cores and get: (infra) [/tmp] gdb /local/nginx/sbin/nginx core Reading symbols from /local/nginx/sbin/nginx...done. [New LWP 12916] [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1". Core was generated by `nginx: worker process '. Program terminated with signal SIGSEGV, Segmentation fault. #0 0x00007fbd9b8653db in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 (gdb) bt #0 0x00007fbd9b8653db in ?? 
() from /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 #1 0x00007fbd9c5c2a16 in ngx_ssl_remove_cached_session (ssl=0x0, sess=0x7fbd9eb7ccf0) at src/event/ngx_event_openssl.c:2698 #2 0x00007fbd9c5d3633 in ngx_http_process_request (r=r at entry=0x7fbd9e67d6b0) at src/http/ngx_http_request.c:1902 #3 0x00007fbd9c5d3a2a in ngx_http_process_request_headers (rev=rev at entry=0x7fbd9eb0fa30) at src/http/ngx_http_request.c:1358 #4 0x00007fbd9c5d3ceb in ngx_http_process_request_line (rev=rev at entry=0x7fbd9eb0fa30) at src/http/ngx_http_request.c:1031 #5 0x00007fbd9c5d4092 in ngx_http_wait_request_handler (rev=0x7fbd9eb0fa30) at src/http/ngx_http_request.c:506 #6 0x00007fbd9c5d4142 in ngx_http_ssl_handshake_handler (c=0x7fbd9ec7b4c0) at src/http/ngx_http_request.c:814 #7 0x00007fbd9c5c1714 in ngx_ssl_handshake_handler (ev=) at src/event/ngx_event_openssl.c:1389 #8 0x00007fbd9c5beb6d in ngx_epoll_process_events (cycle=, timer=, flags=) at src/event/modules/ngx_epoll_module.c:902 #9 0x00007fbd9c5b6102 in ngx_process_events_and_timers (cycle=cycle at entry=0x7fbd9ec39cd0) at src/event/ngx_event.c:242 #10 0x00007fbd9c5bcdb4 in ngx_worker_process_cycle (cycle=cycle at entry=0x7fbd9ec39cd0, data=data at entry=0x2) at src/os/unix/ngx_process_cycle.c:749 #11 0x00007fbd9c5bb473 in ngx_spawn_process (cycle=cycle at entry=0x7fbd9ec39cd0, proc=0x7fbd9c5bcd3a , data=0x2, name=0x7fbd9c64b42d "worker process", respawn=respawn at entry=4) at src/os/unix/ngx_process.c:198 #12 0x00007fbd9c5bd818 in ngx_reap_children (cycle=0x7fbd9ec39cd0) at src/os/unix/ngx_process_cycle.c:621 #13 ngx_master_process_cycle (cycle=0x7fbd9ec39cd0) at src/os/unix/ngx_process_cycle.c:174 #14 0x00007fbd9c5988a0 in main (argc=, argv=) at src/core/nginx.c:375 I attached the ca.crt. It is a self signed with not all fields filled out. Please advice, if I should do any more testing. 
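As a side note for anyone trying to reproduce this: worker cores like the one inspected above only appear if core dumps are enabled for nginx worker processes. A common sketch (the directory is an assumption; it must exist and be writable by the worker user) is:

```nginx
# In the top-level (main) context of nginx.conf
worker_rlimit_core  500m;        # allow workers to write core files
working_directory   /tmp/cores;  # cores are dumped here on SIGSEGV
```

The resulting core can then be opened with `gdb /path/to/sbin/nginx /tmp/cores/core` followed by `bt`, as shown in the report.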
Cheers, Thomas -------------- next part -------------- -----BEGIN CERTIFICATE----- MIIDXTCCAkWgAwIBAgIJAJiHhD7iXgUPMA0GCSqGSIb3DQEBCwUAMEUxCzAJBgNV BAYTAkFVMRMwEQYDVQQIDApTb21lLVN0YXRlMSEwHwYDVQQKDBhJbnRlcm5ldCBX aWRnaXRzIFB0eSBMdGQwHhcNMTcwNTEyMTEzNTQyWhcNMjcwNTEwMTEzNTQyWjBF MQswCQYDVQQGEwJBVTETMBEGA1UECAwKU29tZS1TdGF0ZTEhMB8GA1UECgwYSW50 ZXJuZXQgV2lkZ2l0cyBQdHkgTHRkMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIB CgKCAQEAtEfckobyI1uk4n+rqJUiVjKhGt3e98zjGaAZQ49S1Lc+0ZRm5Pch9c7N koscg6UiR7xPIuGl6GeqRar6vsoSeLXK1ZOA2pEDgRznrISB2NC8kuNL/GQG+Qey VeVj+to/pi+y3zL7vSX68iM3L8Kn6Ekh5qlOA2f7Jf7ie8evlKx3uLIMiBEddpUz JJHcNLxIpqXHJbHziyXXrXdFvNm7P34/Qr0ZEu8wPj9qUJbMd/FQ3t5DCDgC5R6w 9P8Mb/yD8EXATRPf0z4LBUmomNvnYgI2azCrxciGwhrwj3w5BQl0Vz5h2tewQjMf clMkQKu5/6ATJ1SbMXNpLt+rBOPFyQIDAQABo1AwTjAdBgNVHQ4EFgQUBUoxjdMM JB989mnoEHmEnjOfQjAwHwYDVR0jBBgwFoAUBUoxjdMMJB989mnoEHmEnjOfQjAw DAYDVR0TBAUwAwEB/zANBgkqhkiG9w0BAQsFAAOCAQEAmBfdIoSWvxsrHeRoXSHR 4x/Ec/Y/UF9Zc42RouDhtki8MnFz2HY9BqpMpRY87ECEnTgqzoUEgQe2sd3B1fu8 sfKZ0VSxoWX6ltVK9oB+ThSe1bOQesNrzBjj42d+wHAfjNUBjEEpvmvClu2sl4XF vwxkRUvDh/zCdnCKp549fhjuBGZYy+I9ETgunyJ1+e7SD9zuMQhqra+HGABhAFs+ +us4gdQd8vB5SV4j0L1Ib+vjPWcO93Vybxtl2ispGt1WkzLYgtaYQ9KsAnP3LMoS lQeJC2ELGblpZxkA7Lpr8hfW5e9WzK1YhnOs9N2PgUEgVLPnsD2UNpBCQSHB7/Zz CQ== -----END CERTIFICATE----- From cas.xyz at googlemail.com Mon May 15 09:59:27 2017 From: cas.xyz at googlemail.com (J K) Date: Mon, 15 May 2017 11:59:27 +0200 Subject: Reverse-proxying: Flask app with Bokeh server on Nginx Message-ID: > > Message: 4 > Date: Fri, 12 May 2017 18:26:39 +0300 > From: "Reinis Rozitis" > To: > Subject: Re: Re:Reverse-proxying: Flask app with Bokeh server on Nginx > Message-ID: <437D05EFD1A24D9292CCE7BE45B2127C at Neiroze> > Content-Type: text/plain; format=flowed; charset="UTF-8"; > reply-type=original > > > 3. 
in the Flask app, I changed the URL > > to:url='https://138.197.132.46:5006/bokeh/' > > Now, when I open the app in the browser I get a 502 Bad Gateway, and the > > Flask log file says the following: > > raise IOError("Cannot pull session document because we failed to connect > > to the server (to start the server, try the 'bokeh serve' command)") > > Well seems that the Flask app uses the url also for background requests. > > You can't mix 'https://' and :5006 port in same url - this way the > request > goes to port 5006 but it expects to be also encrypted but if I understand > correctly bokeh doesn't support SSL. > > > p.s. for best performance you could tweak that the Flask->bokeh requests go > through http but for the html template/output sent to clients there is > another variable or relative paths. > > > rr > > > > ------------------------------ > > Message: 5 > Date: Fri, 12 May 2017 18:33:47 +0300 > From: "Reinis Rozitis" > To: > Subject: Re: Re:Reverse-proxying: Flask app with Bokeh server on Nginx > Message-ID: <24A35BED74E7436B9515F58950D01034 at Neiroze> > Content-Type: text/plain; format=flowed; charset="UTF-8"; > reply-type=original > > What I forgot to add you need to change the 'url' (note the domain part) > to: > > url='https://yourdomain/bokeh/' > > by looking at your error messages it seems that the 'url' is also directly > used for client requests (probably placed in the html templated) - which > means you can't use plain IP because then the browser most likely will just > generate a SSL certificate and domain mismatch. > > > rr > Thanks for answering again. 
I followed your advise and change the Flask app script so that I have one URL to pull the Bokeh session and another one to create the HTML script: def company_abc(): url='http://127.0.0.1:5006/bokeh' session=pull_session(url=url,app_path="/company_abc") url_https='https://www.example.com' bokeh_script=autoload_server(None,app_path="/company_abc",session_id= session.id,url=url_https) return render_template("company_abc.html", bokeh_script=bokeh_script) This, however, results in the following error in Chrome: GET https://www.geomorphix.net/geomorphix/autoload.js?bokeh-autoload-element=dd ?6035f61fef5e&bokeh-session-id=hLR9QX79ofSg4yu7DZb1oHFdT14Ai7EcVCyh1iArcBf5 There's no other explanation. Both, Flask and Bokeh, log files don't contain error messages. > > > ------------------------------ > > Message: 6 > Date: Fri, 12 May 2017 21:46:30 +0100 > From: Francis Daly > To: J K via nginx > Cc: J K > Subject: Re: Reverse-proxying: Flask app with Bokeh server on Nginx > Message-ID: <20170512204630.GC10157 at daoine.org> > Content-Type: text/plain; charset=us-ascii > > On Fri, May 12, 2017 at 04:28:12PM +0200, J K via nginx wrote: > > Hi there, > > > > location /bokeh/ { > > > proxy_pass http://127.0.1.1:5006; > > > > > > # .. with the rest of directives > > > } > > > > > > relaunch the Bokeh app with > > > > > > --prefix=/bokeh/ > > > > > > and (if takes part in the url construction rather than application > > > background requests) change the url variable in the Flask app > > > > > > url='http://###.###.###.##:5006' > > > to > > > url='https://yourserver/bokeh/' > > > 1. in '/etc/nginx/sites-available/default' I added a new location as > follow: > > > > location /bokeh/ { > > > > proxy_pass http://127.0.0.1:5006; # you suggested > 127.0. > > *1*.1, but I figured that was a typo > > The proxy_pass address should be wherever your "bokeh" http server is > actually listening. > > Which probably means that whatever you use up there... 
> > > command=/opt/envs/virtual/bin/bokeh serve company_abc.py company_xyz.py > > geomorphix.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com > > --allow-websocket-origin=example.com --host=138.197.132.46:5006 > > --use-xheaders > > you should also use up there as --host. > > I suspect that making them both be 127.0.0.1 will be the easiest > way of reverse-proxying things; but I also suspect that the > "--allow-websocket-origin" part suggests that you may want to configure > nginx to reverse proxy the web socket connection too. Notes are at > http://nginx.org/en/docs/http/websocket.html > > It will be helpful to have a very clear picture of what talks to what, > when things are working normally; that should make it easier to be > confident that the same links are in place with nginx in the mix. > > Hi Francis, Thanks for your answer! As you suggested, I did the following: 1. in '/etc/supervisor/conf.d/bokeh_serve.conf' I changed the host to 127.0.0.1: [program:bokeh_serve] command=/opt/envs/virtual/bin/bokeh serve company_abc.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com --allow-websocket-origin= example.com --host=127.0.0.1:5006 --use-xheaders directory=/opt/webapps/flask_telemetry autostart=false autorestart=true startretries=3 user=nobody 2. I configure nginx to reverse proxy the web socket connection by adding the following lines to each location block in '/etc/nginx/sites-available/ default': proxy_http_version 1.1; proxy_set_header Upgrade $http_upgrade; proxy_set_header Connection "upgrade"; 3. 
In the Flask web app code I changed the URL of the route accordingly to 127.0.0.1: @app.route("/company_abc/") @login_required @roles_accepted('company_abc', 'admin') def geomorphix(): url='http://127.0.0.1:5006/bokeh' session=pull_session(url=url,app_path="/company_abc") bokeh_script=autoload_server(None,app_path="/geomorphix",session_id= session.id,url=url) return render_template("geomorphix.html", bokeh_script=bokeh_script) When I enter the website with the Bokeh script in my browser, I get a connection refused error: GET http://127.0.0.1:5006/bokeh/example/autoload.js?bokeh-autoload-element=? 9cf799610fb8&bokeh-session-id=8tvMFfJwtVFccTctGHIRPPsT3h6IF6nUFkJ8l6ZQALXl net::ERR_CONNECTION_REFUSED Looking at the log file of the Bokeh server, everything seems to be fine: 2017-05-15 08:56:19,267 Starting Bokeh server version 0.12.4 2017-05-15 08:56:19,276 Starting Bokeh server on port 5006 with applications at paths ['/company_abc'] 2017-05-15 08:56:19,276 Starting Bokeh server with process id: 28771 2017-05-15 08:56:24,530 WebSocket connection opened 2017-05-15 08:56:25,304 ServerConnection created Also the Flask log file shows no error: [2017-05-15 08:56:13 +0000] [28760] [INFO] Starting gunicorn 19.6.0 [2017-05-15 08:56:13 +0000] [28760] [INFO] Listening at: http://0.0.0.0:8118 (28760) [2017-05-15 08:56:13 +0000] [28760] [INFO] Using worker: sync [2017-05-15 08:56:13 +0000] [28765] [INFO] Booting worker with pid: 28765 The Nginx error log '/var/log/nginx/flask/error.log' is empty. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 15 10:54:47 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 15 May 2017 06:54:47 -0400 Subject: Reload of NGinX doesnt kill some of the older worker processes Message-ID: I am facing an issue where once I issued a reload to the NGinX binary, few of the older worker processes are not dying. They still remain orphaned. 
This is the configuration before issuing a reload: [poduser at ucfc2z3a-1582-lb8-nginx1 logs]$ ps -ef | grep nginx poduser 12540 22030 0 06:39 ? 00:00:00 nginx: worker process poduser 12541 22030 0 06:39 ? 00:00:00 nginx: worker process poduser 12762 11601 0 06:41 pts/0 00:00:00 grep nginx poduser 22030 1 1 May12 ? 00:49:01 nginx: master process /u01/app/Oracle_Nginx/sbin/nginx poduser 23528 22030 0 May12 ? 00:00:22 nginx: worker process poduser 24950 22030 0 May12 ? 00:00:22 nginx: worker process Configuration after issuing a reload: [poduser at ucfc2z3a-1582-lb8-nginx1 logs]$ ps -ef | grep nginx poduser 13280 22030 2 06:45 ? 00:00:00 nginx: worker process poduser 13281 22030 2 06:45 ? 00:00:00 nginx: worker process poduser 13323 11601 0 06:45 pts/0 00:00:00 grep nginx poduser 22030 1 1 May12 ? 00:49:02 nginx: master process /u01/app/Oracle_Nginx/sbin/nginx poduser 23528 22030 0 May12 ? 00:00:22 nginx: worker process poduser 24950 22030 0 May12 ? 00:00:22 nginx: worker process If you notice, there are two worker processes orphaned with PIDs 23528 and 24950. Could someone please explain why a few of the worker processes are orphaned? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274213,274213#msg-274213 From nginx-forum at forum.nginx.org Mon May 15 11:01:27 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Mon, 15 May 2017 07:01:27 -0400 Subject: upstream keepalive connections for all servers or each server? In-Reply-To: <20170511005724.GO55433@mdounin.ru> References: <20170511005724.GO55433@mdounin.ru> Message-ID: <6b7f7be0d580bb467d2b2fea27a40cbd.NginxMailingListEnglish@forum.nginx.org> Nice, it's clear. Thanks.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274098,274214#msg-274214 From nginx-forum at forum.nginx.org Mon May 15 12:02:18 2017 From: nginx-forum at forum.nginx.org (Michael Corn) Date: Mon, 15 May 2017 08:02:18 -0400 Subject: behavior of cache manager in version 1.10.3 Message-ID: <8f1c5d9a9c41768432eca21fe154dfec.NginxMailingListEnglish@forum.nginx.org> Hi, The documentation for proxy_cache_path states: The data is removed in iterations configured by manager_files, manager_threshold, and manager_sleep parameters (1.11.5). I was wondering what the behavior of the cache manager was prior to release 1.11.5 (specifically, in version 1.10.3). How often does the cache manager wake up to clean? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274215,274215#msg-274215 From mdounin at mdounin.ru Mon May 15 12:36:55 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 May 2017 15:36:55 +0300 Subject: different Memory consumption for H1 and H2 In-Reply-To: References: <20170511151143.GR55433@mdounin.ru> Message-ID: <20170515123654.GY55433@mdounin.ru> Hello! On Sat, May 13, 2017 at 12:02:13PM +0800, Muhui Jiang wrote: > Thanks for your great answer. you mentioned that sendfile() is to copy > between kernel space and userland. I am curious, why this whole process > don't need to malloc any memory? Could you please explain more on the > detail implementation of the sendfile(). Many Thanks No, I said that sendfile() avoids copying between kernel space and userland. The sendfile() system call is to send a file to a socket. As such, it allows nginx to simply open the file requested by a client and then call sendfile() to instruct OS to transfer it to the socket. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon May 15 13:22:48 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Mon, 15 May 2017 09:22:48 -0400 Subject: worker_rlimit_nofile is for total of all worker processes or single worker process? 
Message-ID: <0b1de8aec598aa33749d9d47e31ceb3f.NginxMailingListEnglish@forum.nginx.org> Hello, I'm confused whether the worker_rlimit_nofile directive applies to the total of all worker processes or to a single worker process. As far as I know, worker_connections is per worker process. Say I have two worker processes and worker_connections 512; should I then set worker_rlimit_nofile to 512 or 1024? Thanks Xiaofeng Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274218,274218#msg-274218 From mdounin at mdounin.ru Mon May 15 13:40:21 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 May 2017 16:40:21 +0300 Subject: behavior of cache manager in version 1.10.3 In-Reply-To: <8f1c5d9a9c41768432eca21fe154dfec.NginxMailingListEnglish@forum.nginx.org> References: <8f1c5d9a9c41768432eca21fe154dfec.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170515134021.GA55433@mdounin.ru> Hello! On Mon, May 15, 2017 at 08:02:18AM -0400, Michael Corn wrote: > The documentation for proxy_cache_path states: > The data is removed in iterations configured by manager_files, > manager_threshold, and manager_sleep parameters (1.11.5). > > I was wondering what the behavior of the cache manager was prior to release > 1.11.5 (specifically, in version 1.10.3). Prior to the changes in 1.11.5, the cache manager removed all files it had to remove according to the "inactive" and "max_size" configured. In some cases this caused responsiveness problems due to huge disk activity though (for example, if max_size was changed to a smaller value), and hence the change. Now the cache manager sometimes sleeps even if there are cache items to remove, to avoid overloading the IO subsystem. > How often does the cache manager wake up to clean? If the cache manager has nothing to do, it sleeps until the next expected expiration of a cache item as per "inactive", or for 10 seconds, whichever is sooner. That is, if there are no inactive cache items, "max_size" will be checked once in 10 seconds.
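For reference, the iteration parameters discussed here are given on proxy_cache_path itself (nginx 1.11.5 and later); a sketch with purely illustrative values (the path, zone name and sizes are assumptions):

```nginx
# Documented defaults: manager_files=100, manager_threshold=200ms,
# manager_sleep=50ms (parameters available since nginx 1.11.5).
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=cache_one:10m
                 max_size=10g inactive=60m
                 manager_files=200 manager_threshold=100ms manager_sleep=100ms;
```

Raising manager_sleep or lowering manager_files trades slower cache trimming for less disk activity per iteration.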
-- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon May 15 14:02:19 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 May 2017 17:02:19 +0300 Subject: Reload of NGinX doesnt kill some of the older worker processes In-Reply-To: References: Message-ID: <20170515140219.GB55433@mdounin.ru> Hello! On Mon, May 15, 2017 at 06:54:47AM -0400, shivramg94 wrote: > I am facing an issue where once I issued a reload to the NGinX binary, few > of the older worker processes are not dying. They still remain orphaned. > > This is the configuration before issuing a reload : > > [poduser at ucfc2z3a-1582-lb8-nginx1 logs]$ ps -ef | grep nginx > poduser 12540 22030 0 06:39 ? 00:00:00 nginx: worker process > > poduser 12541 22030 0 06:39 ? 00:00:00 nginx: worker process > > poduser 12762 11601 0 06:41 pts/0 00:00:00 grep nginx > poduser 22030 1 1 May12 ? 00:49:01 nginx: master process > /u01/app/Oracle_Nginx/sbin/nginx > poduser 23528 22030 0 May12 ? 00:00:22 nginx: worker process > > poduser 24950 22030 0 May12 ? 00:00:22 nginx: worker process > > Configuration after issuing a relaod > > [poduser at ucfc2z3a-1582-lb8-nginx1 logs]$ ps -ef | grep nginx > poduser 13280 22030 2 06:45 ? 00:00:00 nginx: worker process > > poduser 13281 22030 2 06:45 ? 00:00:00 nginx: worker process > > poduser 13323 11601 0 06:45 pts/0 00:00:00 grep nginx > poduser 22030 1 1 May12 ? 00:49:02 nginx: master process > /u01/app/Oracle_Nginx/sbin/nginx > poduser 23528 22030 0 May12 ? 00:00:22 nginx: worker process > > poduser 24950 22030 0 May12 ? 00:00:22 nginx: worker process > > If you notice, there are two worker processes orphaned with PID's 23528 and > 24950. Could someone please explain the cause for this, as to why few of the > worker processes are orphaned? >From the "nginx: worker process" process titles it look like these processes were not notified they are expected to shut down. 
This may happen if notifications fail for some reason - for example, due to inappropriate system limits. Try checking the error logs, it may shed some light on what happened. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon May 15 15:18:15 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Mon, 15 May 2017 11:18:15 -0400 Subject: worker_rlimit_nofile is for total of all worker processes or single worker process? In-Reply-To: <0b1de8aec598aa33749d9d47e31ceb3f.NginxMailingListEnglish@forum.nginx.org> References: <0b1de8aec598aa33749d9d47e31ceb3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6945579db88c5ac71e5a94d5d188ce2b.NginxMailingListEnglish@forum.nginx.org> I read through the source code and found that the limit should be applied to each worker process. Right? static void ngx_worker_process_init(ngx_cycle_t *cycle, ngx_int_t worker) { // ..... if (ccf->rlimit_nofile != NGX_CONF_UNSET) { rlmt.rlim_cur = (rlim_t) ccf->rlimit_nofile; rlmt.rlim_max = (rlim_t) ccf->rlimit_nofile; if (setrlimit(RLIMIT_NOFILE, &rlmt) == -1) { ngx_log_error(NGX_LOG_ALERT, cycle->log, ngx_errno, "setrlimit(RLIMIT_NOFILE, %i) failed", ccf->rlimit_nofile); } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274218,274227#msg-274227 From nginx-forum at forum.nginx.org Mon May 15 15:37:54 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 15 May 2017 11:37:54 -0400 Subject: Reload of NGinX doesnt kill some of the older worker processes In-Reply-To: <20170515140219.GB55433@mdounin.ru> References: <20170515140219.GB55433@mdounin.ru> Message-ID: Hi Maxim, This is what I could find in the error logs 2017/05/15 11:32:18 [notice] 21499#0: signal process started 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (88: Socket operation on non-socket) 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (32: Broken pipe) 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (88: Socket operation on non-socket) 2017/05/15 11:32:19 [alert]
22030#0: sendmsg() failed (32: Broken pipe) 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (88: Socket operation on non-socket) 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (32: Broken pipe) 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (88: Socket operation on non-socket) 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (32: Broken pipe) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274213,274229#msg-274229 From nginx-forum at forum.nginx.org Mon May 15 15:41:34 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 15 May 2017 11:41:34 -0400 Subject: Reload of NGinX doesnt kill some of the older worker processes In-Reply-To: References: <20170515140219.GB55433@mdounin.ru> Message-ID: At times, the error logs say 2017/05/15 11:37:01 [notice] 22229#0: signal process started 2017/05/15 11:37:02 [alert] 22030#0: sendmsg() failed (32: Broken pipe) 2017/05/15 11:37:02 [alert] 22030#0: sendmsg() failed (32: Broken pipe) 2017/05/15 11:37:04 [alert] 22030#0: sendmsg() failed (9: Bad file descriptor) 2017/05/15 11:37:04 [alert] 22030#0: sendmsg() failed (32: Broken pipe) 2017/05/15 11:37:04 [alert] 22030#0: sendmsg() failed (9: Bad file descriptor) 2017/05/15 11:37:04 [alert] 22030#0: sendmsg() failed (32: Broken pipe) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274213,274230#msg-274230 From mdounin at mdounin.ru Mon May 15 16:20:09 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 May 2017 19:20:09 +0300 Subject: nginx ssl_verify_client on leads to segmentation fault In-Reply-To: <20170515061638.GD6302@glanzmann.de> References: <20170515061638.GD6302@glanzmann.de> Message-ID: <20170515162009.GE55433@mdounin.ru> Hello! 
On Mon, May 15, 2017 at 08:16:38AM +0200, Thomas Glanzmann wrote: > Hello, > I'm running nginx from git HEAD, when I add the following two lines to a > https server: > > ssl_client_certificate /tmp/ca.crt; > ssl_verify_client on; > > and connect to the website, I get: > > 2017/05/15 08:12:04 [alert] 9109#0: worker process 12908 exited on signal 11 (core dumped) > 2017/05/15 08:12:04 [alert] 9109#0: worker process 12909 exited on signal 11 (core dumped) > 2017/05/15 08:12:10 [alert] 9109#0: worker process 12916 exited on signal 11 (core dumped) [...] > (gdb) bt > #0 0x00007fbd9b8653db in ?? () from /usr/lib/x86_64-linux-gnu/libssl.so.1.0.0 > #1 0x00007fbd9c5c2a16 in ngx_ssl_remove_cached_session (ssl=0x0, sess=0x7fbd9eb7ccf0) at src/event/ngx_event_openssl.c:2698 > #2 0x00007fbd9c5d3633 in ngx_http_process_request (r=r at entry=0x7fbd9e67d6b0) at src/http/ngx_http_request.c:1902 Could you please confirm you do _not_ have ssl_certificate defined in the server block where you've added ssl_verify_client? I was able to reproduce the problem with the following configuration: server{ listen 8443 ssl; ssl_certificate test.crt; ssl_certificate_key test.key; } server { listen 8443 ssl; server_name foo; ssl_verify_client on; ssl_client_certificate test.root; } (Just in case, an obvious workaround would be to add ssl_certificate to the second server.) Here is a patch: # HG changeset patch # User Maxim Dounin # Date 1494865081 -10800 # Mon May 15 19:18:01 2017 +0300 # Node ID 26c5ec160d3cb89ec681c285a3d87cae0595cb9e # Parent c85782291153482fe126e20ebc13b30eca4139ee SSL: fixed context when removing cached sessions. When removing cached session due to client certificate verification failure, we have to use c->ssl->session_ctx, which matches the SSL context used by OpenSSL when reusing sessions (see 97f102a13f33). Using a context from currently selected virtual server is wrong, as it may be different from the session context. 
Moreover, it may be NULL if there is no certificate defined in the currently selected virtual server, leading to a segmentation fault. Reported by Thomas Glanzmann. diff --git a/src/http/ngx_http_request.c b/src/http/ngx_http_request.c --- a/src/http/ngx_http_request.c +++ b/src/http/ngx_http_request.c @@ -1885,7 +1885,7 @@ ngx_http_process_request(ngx_http_reques "client SSL certificate verify error: (%l:%s)", rc, X509_verify_cert_error_string(rc)); - ngx_ssl_remove_cached_session(sscf->ssl.ctx, + ngx_ssl_remove_cached_session(c->ssl->session_ctx, (SSL_get0_session(c->ssl->connection))); ngx_http_finalize_request(r, NGX_HTTPS_CERT_ERROR); @@ -1899,7 +1899,7 @@ ngx_http_process_request(ngx_http_reques ngx_log_error(NGX_LOG_INFO, c->log, 0, "client sent no required SSL certificate"); - ngx_ssl_remove_cached_session(sscf->ssl.ctx, + ngx_ssl_remove_cached_session(c->ssl->session_ctx, (SSL_get0_session(c->ssl->connection))); ngx_http_finalize_request(r, NGX_HTTPS_NO_CERT); -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon May 15 16:25:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 May 2017 19:25:57 +0300 Subject: Reload of NGinX doesnt kill some of the older worker processes In-Reply-To: References: <20170515140219.GB55433@mdounin.ru> Message-ID: <20170515162557.GF55433@mdounin.ru> Hello! 
On Mon, May 15, 2017 at 11:37:54AM -0400, shivramg94 wrote: > Hi Maxim, > > This is what I could find in the error logs > > 2017/05/15 11:32:18 [notice] 21499#0: signal process started > 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (88: Socket operation > on non-socket) > 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (32: Broken pipe) > 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (88: Socket operation > on non-socket) > 2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (32: Broken pipe) > 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (88: Socket operation > on non-socket) > 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (32: Broken pipe) > 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (88: Socket operation > on non-socket) > 2017/05/15 11:32:20 [alert] 22030#0: sendmsg() failed (32: Broken pipe) So, clearly communication between the master process and these workers is now broken. You have to look for earlier errors to find out what exactly caused this - likely it hit a resource limit on the number of open files or something like this. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon May 15 16:41:37 2017 From: nginx-forum at forum.nginx.org (shivramg94) Date: Mon, 15 May 2017 12:41:37 -0400 Subject: Reload of NGinX doesnt kill some of the older worker processes In-Reply-To: <20170515162557.GF55433@mdounin.ru> References: <20170515162557.GF55433@mdounin.ru> Message-ID: <23793c8687a021c4a6ced138868adc70.NginxMailingListEnglish@forum.nginx.org> Earlier, it says the pid file doesn't exist even though the master and worker processes were running. 2017/05/12 15:35:41 [notice] 19042#0: signal process started 2017/05/12 15:35:41 [error] 19042#0: open() "/u01/data/logs/nginx.pid" failed (2: No such file or directory) Can the above issue (where the nginx.pid file went missing) and the communication breakdown between the master and the worker processes be correlated?
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274213,274233#msg-274233 From mdounin at mdounin.ru Mon May 15 16:58:57 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 15 May 2017 19:58:57 +0300 Subject: Reload of NGinX doesnt kill some of the older worker processes In-Reply-To: <23793c8687a021c4a6ced138868adc70.NginxMailingListEnglish@forum.nginx.org> References: <20170515162557.GF55433@mdounin.ru> <23793c8687a021c4a6ced138868adc70.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170515165856.GG55433@mdounin.ru> Hello! On Mon, May 15, 2017 at 12:41:37PM -0400, shivramg94 wrote: > Earlier, it says the pid file doesn't exist even though the master and > worker processes were running. > > > 2017/05/12 15:35:41 [notice] 19042#0: signal process started > 2017/05/12 15:35:41 [error] 19042#0: open() "/u01/data/logs/nginx.pid" > failed (2: No such file or directory) > > Can the above issue ( where the nginx.pid file went missing) and the > communication break up between the master and the worker processes be > correlated? This message is from the signal process, 19042, and indicates only that "nginx -s" was used incorrectly (likely with a wrong configuration). And no, this is not something that can break master / worker communication. Look for 'alert' and 'crit' messages.
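A filter along these lines pulls out just those entries; the log path and the sample lines below are illustrative, so point the grep at whatever file your error_log directive names:

```shell
# Create a small sample error log (hypothetical entries) to demonstrate.
cat > /tmp/nginx-error-sample.log <<'EOF'
2017/05/15 11:30:00 [notice] 21499#0: signal process started
2017/05/15 11:31:02 [crit] 22030#0: accept4() failed (24: Too many open files)
2017/05/15 11:32:19 [alert] 22030#0: sendmsg() failed (32: Broken pipe)
EOF

# Keep only [alert] and [crit] entries; on a real system, replace the
# sample path with the file configured by the error_log directive.
grep -E '\[(alert|crit)\]' /tmp/nginx-error-sample.log
```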
-- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue May 16 04:28:11 2017 From: nginx-forum at forum.nginx.org (Moji55) Date: Tue, 16 May 2017 00:28:11 -0400 Subject: Not having resume ability on secure links Message-ID: <41a91f8188fde38ffd518d4c6356adf8.NginxMailingListEnglish@forum.nginx.org> Hello my friends, my problem is that I have resume ability on a direct link, but when I change it to a secure link this ability stops working. For example: Direct link http://www.mydomain.com/uploads/myfolder/1.rar change to Secure link http://www.mydomain.com/vfm-admin/vfm-downloader.php?q=dXBsb2Fkcy8xMS4yMi42My8xLnJhcg==&h=4737ee045e8b3b9be5d1fa4caf7de8e9&sh=7774b265a13e0664872a2bbf9d00b40f What do you think? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274254,274254#msg-274254 From nginx-forum at forum.nginx.org Tue May 16 04:45:50 2017 From: nginx-forum at forum.nginx.org (Michael Corn) Date: Tue, 16 May 2017 00:45:50 -0400 Subject: behavior of cache manager in version 1.10.3 In-Reply-To: <20170515134021.GA55433@mdounin.ru> References: <20170515134021.GA55433@mdounin.ru> Message-ID: <8e256d5e6466552b10d8e36fc7942976.NginxMailingListEnglish@forum.nginx.org> Thanks. One more question relating to cache cleaning. Suppose I use version 1.11.5 or greater, I set manager_sleep to a small number, say 50ms, and I set use_temp_path=off. Now, I start receiving a large file from the upstream, let's say it will take 10 seconds to receive it. Will the cache_manager see the cache exceeding its max_size before the entire new file has been received and start cleaning up older entries from the cache in parallel to the new file being received? Or will it not take the size of this new file into account until it has been completely received?
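For context, the cache-manager knobs in question are parameters of proxy_cache_path (manager_files, manager_sleep and manager_threshold appeared in 1.11.5); a sketch with illustrative values - the path, zone name and sizes are placeholders, not my real config - would be:

```nginx
# manager_sleep: pause between cache-manager iterations;
# manager_files: max files deleted per iteration;
# manager_threshold: max duration of one iteration.
proxy_cache_path /var/cache/nginx/big levels=1:2 keys_zone=big:10m
                 max_size=1g use_temp_path=off
                 manager_files=100 manager_sleep=50ms manager_threshold=200ms;
```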
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274215,274255#msg-274255 From nginx-forum at forum.nginx.org Tue May 16 05:47:32 2017 From: nginx-forum at forum.nginx.org (c0nw0nk) Date: Tue, 16 May 2017 01:47:32 -0400 Subject: Not having resume ability on secure links In-Reply-To: <41a91f8188fde38ffd518d4c6356adf8.NginxMailingListEnglish@forum.nginx.org> References: <41a91f8188fde38ffd518d4c6356adf8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <43487b7d89c0103bca11e9d0aa588d3b.NginxMailingListEnglish@forum.nginx.org> Use Nginx's built-in secure link module; the link you provided is being generated and served by PHP. ".com/vfm-admin/vfm-downloader.php?q=" Nginx's secure link module will resume downloads and support pseudo streaming etc., but you will find it is PHP that does not. Change your setup and modify your PHP code so that it does not push the download through PHP but instead generates a link with a salted hash sum that Nginx may use. http://nginx.org/en/docs/http/ngx_http_secure_link_module.html Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274254,274256#msg-274256 From nginx-forum at forum.nginx.org Tue May 16 07:10:47 2017 From: nginx-forum at forum.nginx.org (ajmalahd) Date: Tue, 16 May 2017 03:10:47 -0400 Subject: UDP Load balancer does not scale Message-ID: <123a6229c3082e727aa470c23af6c4f6.NginxMailingListEnglish@forum.nginx.org> Hi, I am trying to set up a UDP load balancer using Nginx. Initially, I configured 4 upstream servers with two server processes running on each of them. This gave a throughput of around 24000 queries per second when tested with dnsperf. When I try to add two more upstream servers, the throughput does not increase as expected.
In fact, it deteriorates to the range of 5000 queries per second with the following error: [warn] 5943#0: *10433175 upstream server temporarily disabled while proxying connection, udp client: xxx.xxx.xxx.29, server: 0.0.0.0:53, upstream: "xxx.xxx.xxx.224:53", bytes from/to client:80/0, bytes from/to upstream:0/80 [error] 5943#0: *10085077 no live upstreams while connecting to upstream, udp client: xxx.xxx.xxx.224, server: 0.0.0.0:53, upstream: "dns_upstreams", bytes from/to client:80/0, bytes from/to upstream:0/0 I understand that the above error appears when Nginx doesn't receive responses from an upstream on time, and the upstream is marked as temporarily unavailable. I used to get this error before even with 4 upstream servers, but after adding the following additional configuration, it was resolved: user nginx; worker_processes 4; worker_rlimit_nofile 65535; load_module "/usr/lib64/nginx/modules/ngx_stream_module.so"; error_log /var/log/nginx/error.log warn; pid /var/run/nginx.pid; events { worker_connections 10240; } stream { upstream dns_upstreams { server xxx.xxx.xxx.0:53 max_fails=2000 fail_timeout=30s; server xxx.xxx.xxx.0:6363 max_fails=2000 fail_timeout=0s; server xxx.xxx.xxx.187:53 max_fails=2000 fail_timeout=30s; server xxx.xxx.xxx.187:6363 max_fails=2000 fail_timeout=30s; server xxx.xxx.xxx.183:53 max_fails=2000 fail_timeout=30s; server xxx.xxx.xxx.183:6363 max_fails=2000 fail_timeout=30s; server xxx.xxx.xxx.212:53 max_fails=2000 fail_timeout=30s; server xxx.xxx.xxx.212:6363 max_fails=2000 fail_timeout=30s; } server { listen 53 udp; proxy_pass dns_upstreams; proxy_timeout 1s; proxy_responses 1; } } Even though this configuration works fine with 4 upstream servers, it doesn't help when I increase the number of servers. The Nginx server has enough memory and CPU capacity remaining when running with 4 upstream servers as well as 6 upstream servers. And the dnsperf client is not a bottleneck here because it can send much more load in a different setup.
Also, each individual upstream server can serve a bit more than 5000 requests per second. I am trying to get some hints about why I am observing more upstream failures and eventual unavailability when I add more servers. If anybody has faced a similar issue in the past and can give me some pointers to solve it, that would be of great help. Thanks, Ajmal Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274257,274257#msg-274257 From thomas at glanzmann.de Tue May 16 08:50:24 2017 From: thomas at glanzmann.de (Thomas Glanzmann) Date: Tue, 16 May 2017 10:50:24 +0200 Subject: nginx ssl_verify_client on leads to segmentation fault In-Reply-To: <20170515162009.GE55433@mdounin.ru> References: <20170515061638.GD6302@glanzmann.de> <20170515162009.GE55433@mdounin.ru> Message-ID: <20170516085024.GE23807@glanzmann.de> Hello Maxim, > Could you please confirm you do _not_ have ssl_certificate defined > in the server block where you've added ssl_verify_client? I confirm: the ssl_certificate is defined in another server block. The fix works for me, thanks. Cheers, Thomas From mdounin at mdounin.ru Tue May 16 13:20:47 2017 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 16 May 2017 16:20:47 +0300 Subject: behavior of cache manager in version 1.10.3 In-Reply-To: <8e256d5e6466552b10d8e36fc7942976.NginxMailingListEnglish@forum.nginx.org> References: <20170515134021.GA55433@mdounin.ru> <8e256d5e6466552b10d8e36fc7942976.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20170516132047.GI55433@mdounin.ru> Hello! On Tue, May 16, 2017 at 12:45:50AM -0400, Michael Corn wrote: > One more question relating to cache cleaning. If I use version 1.11.5 or > greater, and > I set manager_sleep to a small number, say 50ms, And I set > use_temp_path=off. > Now, I start receiving a large file from the upstream, let's say it will > take 10 seconds to receive it.
> Will the cache_manager see the cache exceeding its max_size before the > entire new file has been received and start > cleaning up older entries from the cache in parallel to the new file being > received? > Or will it not take the size of this new file into account until it has been > completely received? Temporary files are not taken into account by the cache manager process, regardless of the use_temp_path setting. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue May 16 16:47:57 2017 From: nginx-forum at forum.nginx.org (mkuehn) Date: Tue, 16 May 2017 12:47:57 -0400 Subject: Auto refresh for expired content? Message-ID: Hi Folks, I'm using Nginx as a proxy for my mobile app - which has worked pretty well so far! My main cache has the following config: proxy_cache_path /var/cache/nginx/spieldaten levels=1:2 keys_zone=spieldaten:100m max_size=150m inactive=5m use_temp_path=off; proxy_cache_valid 200 302 5m; If a request is not cached by Nginx, it can take about 5 sec until it comes back from the defined proxy_pass - once it is cached it's below 1 sec :) Now my question: is it possible for Nginx to "automatically" get a fresh copy from the proxy_pass when it recognizes that the cached request is expired - so that no user has to wait about 5 sec to get fresh content and everyone always gets fast data from the Nginx cache? Thanks in advance! Regards, Maik Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274264,274264#msg-274264 From francis at daoine.org Tue May 16 16:49:07 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 16 May 2017 17:49:07 +0100 Subject: Reverse-proxying: Flask app with Bokeh server on Nginx In-Reply-To: References: Message-ID: <20170516164907.GE10157@daoine.org> On Mon, May 15, 2017 at 11:59:27AM +0200, J K via nginx wrote: Hi there, To recap: you had installed a "flask" web server and a "bokeh" web server. You put the "flask" one behind nginx, so that clients would talk to nginx and to bokeh.
And the clients were happy to talk http to nginx and http to bokeh. Then you enabled https on nginx, so that clients would talk https to nginx and http to bokeh. And the clients did not want to talk http to bokeh after talking https to nginx. So right now, you are trying to put the "bokeh" web server behind nginx too. There are various ips and ports and url prefixes that appear in various configuration files; it is worth making sure that you are very clear on what each one is for. That will make it possible to see what needs to be done to put bokeh behind nginx. > > > 3. in the Flask app, I changed the URL > > > to:url='https://138.197.132.46:5006/bokeh/' > > You can't mix 'https://' and :5006 port in same url - this way the > > request > > goes to port 5006 but it expects to be also encrypted but if I understand > > correctly bokeh doesn't support SSL. > > What I forgot to add you need to change the 'url' (note the domain part) > > to: > > > > url='https://yourdomain/bokeh/' > > > > by looking at your error messages it seems that the 'url' is also directly > > used for client requests (probably placed in the html templated) - which > > means you can't use plain IP because then the browser most likely will just > > generate a SSL certificate and domain mismatch. > Thanks for answering again. > > I followed your advise and change the Flask app script so that I have one > URL to pull the Bokeh session and another one to create the HTML script: > > def company_abc(): > > url='http://127.0.0.1:5006/bokeh' So: what is that url for? Is it a thing that the client web browser will try to access, or a thing that something internal to flask will try to access, or a thing that something internal to bokeh will try to access? When you can say what it is, then it may become clear what value it should have. > session=pull_session(url=url,app_path="/company_abc") > > url_https='https://www.example.com' Same question. What is the purpose of that? 
Which of (browser, flask, bokeh) will try to use it? > > > proxy_pass http://127.0.0.1:5006; # you suggested > > 127.0. > > > *1*.1, but I figured that was a typo > > > > The proxy_pass address should be wherever your "bokeh" http server is > > actually listening. > > > > Which probably means that whatever you use up there... > > > > > command=/opt/envs/virtual/bin/bokeh serve company_abc.py company_xyz.py > > > geomorphix.py --prefix=/bokeh/ --allow-websocket-origin=www.example.com > > > --allow-websocket-origin=example.com --host=138.197.132.46:5006 > > > --use-xheaders > > > > you should also use up there as --host. > > > > I suspect that making them both be 127.0.0.1 will be the easiest > > way of reverse-proxying things; but I also suspect that the > > "--allow-websocket-origin" part suggests that you may want to configure > > nginx to reverse proxy the web socket connection too. Notes are at > > http://nginx.org/en/docs/http/websocket.html > > > > It will be helpful to have a very clear picture of what talks to what, > > when things are working normally; that should make it easier to be > > confident that the same links are in place with nginx in the mix. > As you suggested, I did the following: > > 1. in '/etc/supervisor/conf.d/bokeh_serve.conf' I changed the host to > 127.0.0.1: > > [program:bokeh_serve] > > command=/opt/envs/virtual/bin/bokeh serve company_abc.py --prefix=/bokeh/ > --allow-websocket-origin=www.example.com --allow-websocket-origin= > example.com --host=127.0.0.1:5006 > --use-xheaders What is "--allow-websocket-origin" for? Is it causing any breakage here? (Can you temporarily run with all websocket origins allowed, until things work; and then add back the restrictions to confirm that things still work?) > 2. I configure nginx to reverse proxy the web socket connection by adding > the following lines to each location block in '/etc/nginx/sites-available/ > default': That may or may not be needed in "each location". 
Maybe it is only needed in the "bokeh" location; the intended data flow diagram will show how things should be configured. > 3. In the Flask web app code I changed the URL of the route accordingly to > 127.0.0.1: > > @app.route("/company_abc/") > > @login_required > > @roles_accepted('company_abc', 'admin') > > def geomorphix(): > > url='http://127.0.0.1:5006/bokeh' Same question as above: is that something that flask uses, or something that the web browser uses? Because the web browser will fail to access http://127.0.0.1:5006/ > When I enter the website with the Bokeh script in my browser, I get a > connection refused error: > > GET http://127.0.0.1:5006/bokeh/example/autoload.js?bokeh-autoload-element=? > 9cf799610fb8&bokeh-session-id=8tvMFfJwtVFccTctGHIRPPsT3h6IF6nUFkJ8l6ZQALXl > net::ERR_CONNECTION_REFUSED That makes it look like the "url=" is something that the web browser uses. The web browser should only be accessing your https://nginx-server service, so urls that the web browser will use should refer to that. Possibly "url='/bokeh'" will Just Work for you. You mentioned the bokeh documentation at http://bokeh.pydata.org/en/latest/docs/user_guide/server.html#reverse-proxying-with-nginx-and-ssl and another link at http://stackoverflow.com/questions/38081389/bokeh-server-reverse-proxying-with-nginx-gives-404/38505205#38505205 in your first mail. Does your current nginx configuration resemble either of those? Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 16 16:49:32 2017 From: nginx-forum at forum.nginx.org (sdizazzo) Date: Tue, 16 May 2017 12:49:32 -0400 Subject: request_buffering gotchas? Message-ID: Hi! I'm new to nginx, and working on trying to stream an upload using a multipart-form through nginx to uwsgi and to my app. 
My client posts the request, and I expect nginx to begin forwarding it on to uwsgi as soon as data begins coming in...but...no matter what I do, uwsgi is not called until after the upload buffers in nginx. The entire upload completes, and \then\ the uwsgi call is made. I am sure the correct option is uwsgi_request_buffering off; ...but I'm wondering if there are any other requirements or options that need to be set, or gotchas that might be thwarting my attempt to make this work. It ain't working and I'm at my wit's end. Thanks in advance for any advice! ~Sean Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274265,274265#msg-274265 From nginx-forum at forum.nginx.org Tue May 16 16:57:53 2017 From: nginx-forum at forum.nginx.org (rcutter) Date: Tue, 16 May 2017 12:57:53 -0400 Subject: Occasional successful upstreamed requests that don't get picked up Message-ID: <6ba71ed363ef09b512a37b17b2146723.NginxMailingListEnglish@forum.nginx.org> Hello, I believe I have a tuning issue with NGINX - hoping someone can point me in the right direction! About 1 in 60,000 requests being proxied through Kong/NGINX are timing out. These requests are getting to the upstreamed host and are successfully logged in the load balancer in front of this upstreamed host. So either there's a network issue between that load balancer and NGINX or NGINX is simply not able/willing to process the response. Assuming this is an NGINX tuning issue, these are my settings (note these hosts have 32 cores). Traffic is not all that high, less than 10 req/sec per instance and requests are usually satisfied in less than a second: worker_processes auto; worker_priority -10; daemon off; pid pids/nginx.pid; error_log logs/error.log debug; worker_rlimit_nofile 20000; events { worker_connections 10000; multi_accept on; } Most all other config settings are "default" values. There's nothing in the Kong logs that indicate these dropped responses are being processed by Kong. 
There's no indication there aren't enough workers. These timeouts do not happen in clusters, they are more like singletons. Any advice on things I should look at or diagnosis possibilities? Thanks very much, Ryan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274267,274267#msg-274267 From francis at daoine.org Tue May 16 17:04:23 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 16 May 2017 18:04:23 +0100 Subject: request_buffering gotchas? In-Reply-To: References: Message-ID: <20170516170423.GF10157@daoine.org> On Tue, May 16, 2017 at 12:49:32PM -0400, sdizazzo wrote: Hi there, > My client posts the > request, and I expect nginx to begin forwarding it on to uwsgi as soon as > data begins coming in...but...no matter what I do, uwsgi is not called until > after the upload buffers in nginx. The entire upload completes, and \then\ > the uwsgi call is made. http://nginx.org/r/uwsgi_request_buffering includes the line: """ When HTTP/1.1 chunked transfer encoding is used to send the original request body, the request body will be buffered regardless of the directive value. """ Is that relevant in your case? Can you show a "curl" command which makes the request? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 16 17:18:41 2017 From: francis at daoine.org (Francis Daly) Date: Tue, 16 May 2017 18:18:41 +0100 Subject: Auto refresh for expired content? In-Reply-To: References: Message-ID: <20170516171841.GG10157@daoine.org> On Tue, May 16, 2017 at 12:47:57PM -0400, mkuehn wrote: Hi there, untested, but: > Now my question, is it possible that Nginx can "automaticly" get a fresh > copy from the proxy_pass, when it recognized that the cached request is > expired - so that none User has to wait about 5 sec to get fresh content and > gets always fast data from the Nginx cache? does http://nginx.org/r/proxy_cache_background_update sound like it can do what you want? 
You might be able to achieve most of what you want outside of nginx by running a "curl" command to fetch the content every few minutes; the hope being that the curl command will be the one that is waiting for 5 seconds and causing the local cache copy to be updated, while the "real" clients are fed from the cache. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 16 18:41:08 2017 From: nginx-forum at forum.nginx.org (sdizazzo) Date: Tue, 16 May 2017 14:41:08 -0400 Subject: request_buffering gotchas? In-Reply-To: <20170516170423.GF10157@daoine.org> References: <20170516170423.GF10157@daoine.org> Message-ID: <8d31b37cb37a9f9232a9b5c00cfda59c.NginxMailingListEnglish@forum.nginx.org> Thanks for your response, Francis! I don't have complete control over the client (coworker), but inspecting the header, that was in fact the case. I tried switching to using proxy_pass and proxy_request_buffering instead of the uwsgi module, which doesn't have quite as harsh of a limitation... """ When HTTP/1.1 chunked transfer encoding is used to send the original request body, the request body will be buffered regardless of the directive value unless HTTP/1.1 is enabled for proxying. """ I enabled proxy_http_version 1.1 as the docs suggest, but still no go. I'll keep whacking at it for a while before finding another route. Will post back if I come up with a working solution. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274265,274272#msg-274272 From peter_booth at me.com Tue May 16 21:55:54 2017 From: peter_booth at me.com (Peter Booth) Date: Tue, 16 May 2017 21:55:54 +0000 (GMT) Subject: Re: Occasional successful upstreamed requests that don't get picked up Message-ID: Ryan, What is the topology of the system that you are describing? You mention Kong/nginx, an upstream host, a load balancer, clients ... Are the load balancers hardware or software devices?
Is Kong/nginx simply forwarding to a load-balancer VIP that fronts multiple upstream systems? Are there any firewalls or intrusion detection systems also in the mix? Are your clients remote? Internet-based? Are your nginx instances running on physical hosts, virtual machines, cloud-based VMs? Are your nginx instances and your upstream systems in the same datacenter or some distance apart? Peter On May 16, 2017, at 12:57 PM, rcutter wrote: Hello, I believe I have a tuning issue with NGINX - hoping someone can point me in the right direction! About 1 in 60,000 requests being proxied through Kong/NGINX are timing out. These requests are getting to the upstreamed host and are successfully logged in the load balancer in front of this upstreamed host. So either there's a network issue between that load balancer and NGINX or NGINX is simply not able/willing to process the response. Assuming this is an NGINX tuning issue, these are my settings (note these hosts have 32 cores). Traffic is not all that high, less than 10 req/sec per instance and requests are usually satisfied in less than a second: worker_processes auto; worker_priority -10; daemon off; pid pids/nginx.pid; error_log logs/error.log debug; worker_rlimit_nofile 20000; events { worker_connections 10000; multi_accept on; } Most all other config settings are "default" values. There's nothing in the Kong logs that indicate these dropped responses are being processed by Kong. There's no indication there aren't enough workers. These timeouts do not happen in clusters, they are more like singletons. Any advice on things I should look at or diagnosis possibilities? Thanks very much, Ryan Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274267,274267#msg-274267 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Wed May 17 07:29:03 2017 From: nginx-forum at forum.nginx.org (mkuehn) Date: Wed, 17 May 2017 03:29:03 -0400 Subject: Auto refresh for expired content? In-Reply-To: <20170516171841.GG10157@daoine.org> References: <20170516171841.GG10157@daoine.org> Message-ID: Hi Francis, thanks a lot for your reply! I tried to use proxy_cache_background_update with the following config part: proxy_cache_path /var/cache/nginx/spieldaten levels=1:2 keys_zone=spieldaten:100m max_size=150m inactive=5d use_temp_path=off; ... ... ... proxy_cache test; proxy_cache_valid 200 302 1m; proxy_cache_valid 404 1m; proxy_cache_valid any 2m; proxy_ignore_headers X-Accel-Expires Expires Cache-Control; proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504; proxy_cache_lock on; proxy_cache_background_update on; So my hope was that with an inactive setting of 5d, even the requests which are not called often will be kept in the cache, and proxy_cache_background_update will do the magic - if a user requests content, it will fetch the "old" one from the cache and nginx will fetch a fresh copy in the background. But unfortunately the user always gets the old one from the cache if proxy_cache_background_update is on - so something must be wrong with my config?!? I even thought about a cron job, but the problem is, I have thousands of different requests - so my hope was that nginx can do the magic: when content in the cache expires, there will be a "trigger" so that nginx can fetch a new one in the background... Maybe you have an idea why proxy_cache_background_update is not working? Thanks!
Maik Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274264,274276#msg-274276 From cas.xyz at googlemail.com Wed May 17 10:34:17 2017 From: cas.xyz at googlemail.com (J K) Date: Wed, 17 May 2017 12:34:17 +0200 Subject: Reverse-proxying: Flask app with Bokeh server on Nginx Message-ID: Hi Francis, Initially, without https, everything was running well. I was deploying a Bokeh server with the 'company_abc' app, and used a Flask app to render the website and handle login/redirecting/etc. Everything was behind an Nginx server. Then, after I installed https, the Bokeh app would no longer be rendered on the website. The Flask app worked fine, as before. I think what's happening is that Nginx sends a request to the Flask app. Flask then pulls the Bokeh session with session=pull_session (url=url,app_path="/company_abc") and creates a script tag with bokeh_script=autoload_server (None,app_path="/company_abc/",session_id=session.id,url=url_https) That's then embedded into the html page by return render_template("company_abc.html", bokeh_script=bokeh_script) The script tag looks like this: Now, when the browser opens the 'company_abc.html' page it sends a request to the Nginx server. This should then proxy to the Bokeh server. Does this sound correct? Now, I have done some changes and get a different error. I'm now starting the Bokeh app with bokeh serve company_abc.py --allow-websocket-origin=example.com --allow-websocket-origin=www.example.com --port=5006 \ --host="*" --use-xheaders So, it's running on localhost, port 5006.
In the Flask app I redefined the route to the app as follows:

@app.route("/company_abc/")
def company_abc():
    url='http://127.0.0.1:5006/'
    session=pull_session(url=url,app_path="/company_abc")
    url_https='https://example.com/'
    bokeh_script=autoload_server(None,app_path="/company_abc/",session_id=session.id,url=url_https)
    return render_template("company_abc.html", bokeh_script=bokeh_script)

For the Nginx config file I followed the template from the Bokeh User Guide:

location /company_abc/ {
    proxy_pass http://127.0.0.1:5006;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_http_version 1.1;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host:$server_port;
    proxy_buffering off;
}

Now, with these settings I get the following errors in Chrome:

GET https://example.com/static/css/bokeh.min.css?v=7246afcfffc127faef7c138bce4742e9 example.com/:9
GET https://example.com/static/css/bokeh-widgets.min.css?v=d9cb9322d940f107727b091ff98d9c70 example.com/:12
GET https://example.com/static/js/bokeh-widgets.min.js?v=1af1302b8bd7fcc88c7bcafb8771497b example.com/:11
GET https://example.com/static/js/bokeh.min.js?v=9d3af13f493d36073a89714f6a5240c6 example.com/:12
GET https://example.com/static/js/bokeh-widgets.min.js?v=1af1302b8bd7fcc88c7bcafb8771497b 404 (NOT FOUND) (index):14
Uncaught ReferenceError: Bokeh is not defined at (index):14 (anonymous) @ (index):14
(index):37 Uncaught ReferenceError: Bokeh is not defined at HTMLDocument.fn ((index):37)

The log file for the Bokeh server shows no errors:

2017-05-17 10:17:49,640 Starting Bokeh server version 0.12.4
2017-05-17 10:17:49,641 Host wildcard '*' can expose the application to HTTP host header attacks. Host wildcard should only be used for testing purpose.
2017-05-17 10:17:49,647 Starting Bokeh server on port 5006 with applications at paths ['/geomorphix']
2017-05-17 10:17:49,647 Starting Bokeh server with process id: 15851
2017-05-17 10:18:08,829 200 GET /geomorphix/ (37.201.192.96) 845.69ms

Interestingly, when I manually execute the lines

url='http://127.0.0.1:5006/'
session=pull_session(url=url,app_path="/company_abc")
url_https='https://example.com/'
bokeh_script=autoload_server(None,app_path="/company_abc/",session_id=session.id,url=url_https)

The Bokeh log file says that a WebSocket is opened and a ServerConnection is created:

2017-05-17 10:21:09,915 WebSocket connection opened
2017-05-17 10:21:10,769 ServerConnection created

What does the new error mean? Why is a WebSocket opened and ServerConnection created only when I manually pull a session? Thanks! Message: 3 > Date: Tue, 16 May 2017 17:49:07 +0100 > From: Francis Daly > To: J K via nginx > Cc: J K > Subject: Re: Re :Re: Re:Reverse-proxying: Flask app with Bokeh server > on Nginx > Message-ID: <20170516164907.GE10157 at daoine.org> > Content-Type: text/plain; charset=utf-8 > > On Mon, May 15, 2017 at 11:59:27AM +0200, J K via nginx wrote: > > Hi there, > > To recap: > > you had installed a "flask" web server and a "bokeh" web server. You > put the "flask" one behind nginx, so that clients would talk to nginx > and to bokeh. > > And the clients were happy to talk http to nginx and http to bokeh. > > Then you enabled https on nginx, so that clients would talk https to > nginx and http to bokeh. > > And the clients did not want to talk http to bokeh after talking https > to nginx. > > So right now, you are trying to put the "bokeh" web server behind > nginx too. > > There are various ips and ports and url prefixes that appear in various > configuration files; it is worth making sure that you are very clear on > what each one is for. That will make it possible to see what needs to > be done to put bokeh behind nginx. > > > > > > 3.
in the Flask app, I changed the URL > > > > to:url='https://138.197.132.46:5006/bokeh/' > > > > You can't mix 'https://' and :5006 port in same url - this way the > > > request > > > goes to port 5006 but it expects to be also encrypted but if I > understand > > > correctly bokeh doesn't support SSL. > > > > What I forgot to add you need to change the 'url' (note the domain > part) > > > to: > > > > > > url='https://yourdomain/bokeh/' > > > > > > by looking at your error messages it seems that the 'url' is also > directly > > > used for client requests (probably placed in the html templated) - > which > > > means you can't use plain IP because then the browser most likely will > just > > > generate a SSL certificate and domain mismatch. > > > Thanks for answering again. > > > > I followed your advise and change the Flask app script so that I have one > > URL to pull the Bokeh session and another one to create the HTML script: > > > > def company_abc(): > > > > url='http://127.0.0.1:5006/bokeh' > > So: what is that url for? > > Is it a thing that the client web browser will try to access, or a thing > that something internal to flask will try to access, or a thing that > something internal to bokeh will try to access? > > When you can say what it is, then it may become clear what value it > should have. > > > session=pull_session(url=url,app_path="/company_abc") > > > > url_https='https://www.example.com' > > Same question. What is the purpose of that? Which of (browser, flask, > bokeh) will try to use it? > > > > > proxy_pass http://127.0.0.1:5006; # you > suggested > > > 127.0. > > > > *1*.1, but I figured that was a typo > > > > > > The proxy_pass address should be wherever your "bokeh" http server is > > > actually listening. > > > > > > Which probably means that whatever you use up there... > > > > > > > command=/opt/envs/virtual/bin/bokeh serve company_abc.py > company_xyz.py > > > > geomorphix.py --prefix=/bokeh/ --allow-websocket-origin=www. 
> example.com > > > > --allow-websocket-origin=example.com --host=138.197.132.46:5006 > > > > --use-xheaders > > > > > > you should also use up there as --host. > > > > > > I suspect that making them both be 127.0.0.1 will be the easiest > > > way of reverse-proxying things; but I also suspect that the > > > "--allow-websocket-origin" part suggests that you may want to configure > > > nginx to reverse proxy the web socket connection too. Notes are at > > > http://nginx.org/en/docs/http/websocket.html > > > > > > It will be helpful to have a very clear picture of what talks to what, > > > when things are working normally; that should make it easier to be > > > confident that the same links are in place with nginx in the mix. > > > As you suggested, I did the following: > > > > 1. in '/etc/supervisor/conf.d/bokeh_serve.conf' I changed the host to > > 127.0.0.1: > > > > [program:bokeh_serve] > > > > command=/opt/envs/virtual/bin/bokeh serve company_abc.py > --prefix=/bokeh/ > > --allow-websocket-origin=www.example.com --allow-websocket-origin= > > example.com --host=127.0.0.1:5006 > > --use-xheaders > > What is "--allow-websocket-origin" for? Is it causing any breakage here? > > (Can you temporarily run with all websocket origins allowed, until > things work; and then add back the restrictions to confirm that things > still work?) > > > 2. I configure nginx to reverse proxy the web socket connection by adding > > the following lines to each location block in > '/etc/nginx/sites-available/ > > default': > > That may or may not be needed in "each location". Maybe it is only needed > in the "bokeh" location; the intended data flow diagram will show how > things should be configured. > > > 3. 
In the Flask web app code I changed the URL of the route accordingly to > 127.0.0.1: > > > > @app.route("/company_abc/") > > > > @login_required > > > > @roles_accepted('company_abc', 'admin') > > > > def geomorphix(): > > > > url='http://127.0.0.1:5006/bokeh' > > Same question as above: is that something that flask uses, or something > that the web browser uses? > > Because the web browser will fail to access http://127.0.0.1:5006/ > > > When I enter the website with the Bokeh script in my browser, I get a > > connection refused error: > > > > GET http://127.0.0.1:5006/bokeh/example/autoload.js?bokeh-autoload-element=?9cf799610fb8&bokeh-session-id=8tvMFfJwtVFccTctGHIRPPsT3h6IF6nUFkJ8l6ZQALXl > > net::ERR_CONNECTION_REFUSED > > That makes it look like the "url=" is something that the web browser uses. > > The web browser should only be accessing your https://nginx-server > service, so urls that the web browser will use should refer to that. > > Possibly "url='/bokeh'" will Just Work for you. > > You mentioned the bokeh documentation at > > http://bokeh.pydata.org/en/latest/docs/user_guide/server.html#reverse-proxying-with-nginx-and-ssl > > and another link at > > http://stackoverflow.com/questions/38081389/bokeh-server-reverse-proxying-with-nginx-gives-404/38505205#38505205 > > in your first mail. Does your current nginx configuration resemble either of those? > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From oshaer at cisco.com Wed May 17 10:40:32 2017 From: oshaer at cisco.com (Ofira Shaer (oshaer)) Date: Wed, 17 May 2017 10:40:32 +0000 Subject: nginx binaries with auth_request module Message-ID: Hi

Is there any binary Linux version of nginx *with* the http_auth_request_module? The documentation says the source has to be compiled with a special flag, but it seems that the Windows edition already has it inside.

Thanks.
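A quick way to answer this kind of question for any nginx binary (not mentioned in the thread, but generally applicable): modules that were compiled in appear in the configure arguments that `nginx -V` prints, so you can simply grep for the module flag. The sketch below uses a hypothetical sample of that output so it runs without nginx installed; against a real binary you would pipe `nginx -V 2>&1` into the same grep.

```shell
# Hypothetical sample of `nginx -V` output; on a real system use:
#   nginx -V 2>&1 | grep -q -- --with-http_auth_request_module
V_OUT='configure arguments: --prefix=/etc/nginx --with-http_ssl_module --with-http_auth_request_module'

# grep's `--` ends option parsing, so a pattern starting with dashes is safe
if printf '%s\n' "$V_OUT" | grep -q -- --with-http_auth_request_module; then
    echo "auth_request module compiled in"
else
    echo "auth_request module missing"
fi
```

Note that `nginx -V` writes to stderr, hence the `2>&1` redirection in the real-world variant.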
-------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Wed May 17 10:47:29 2017 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 17 May 2017 13:47:29 +0300 Subject: nginx binaries with auth_request module In-Reply-To: References: Message-ID: <2302492.fy3HRV7eY7@vbart-laptop> On Wednesday 17 May 2017 10:40:32 Ofira Shaer wrote: > Hi > > Is there any binary linux version of nginx *with* the http_auth_request_module? The documentation says the source has to be compiled with a special flag, but it seems that the windows addition already has it inside. > > Thanks. > http://nginx.org/en/linux_packages.html wbr, Valentin V. Bartenev From arut at nginx.com Wed May 17 11:12:56 2017 From: arut at nginx.com (Roman Arutyunyan) Date: Wed, 17 May 2017 14:12:56 +0300 Subject: Auto refresh for expired content? In-Reply-To: References: <20170516171841.GG10157@daoine.org> Message-ID: <20170517111256.GE48854@Romans-MacBook-Air.local> Hi, On Wed, May 17, 2017 at 03:29:03AM -0400, mkuehn wrote: > Hi Francis, > > thanks a lot for your reply! > > I tried to use proxy_cache_background_update with the following config > part. > > proxy_cache_path /var/cache/nginx/spieldaten levels=1:2 > keys_zone=spieldaten:100m max_size=150m inactive=5d use_temp_path=off; > ... ... ... > proxy_cache test; > proxy_cache_valid 200 302 1m; > proxy_cache_valid 404 1m; > proxy_cache_valid any 2m; > proxy_ignore_headers X-Accel-Expires Expires Cache-Control; > proxy_cache_use_stale error timeout updating http_500 http_502 > http_503 http_504; > proxy_cache_lock on; > proxy_cache_background_update on; > > So my hope was with an inactive setup of 5d, even the requests which are not > often called, will be keep i the cache and proxy_cache_background_update > will do the magic - if a user request content, it will be fetch the "old" > one from the cache and nginx will fetch a fresh copy in the background. 
> But unfortunately the user gets always the old one from the cache, if > proxy_cache_background_update is on - so something must be wrong with my > config?!? Can you check if the update request comes to your backend when the user gets the old cached response? [..] -- Roman Arutyunyan From nginx-forum at forum.nginx.org Wed May 17 11:58:29 2017 From: nginx-forum at forum.nginx.org (fengx) Date: Wed, 17 May 2017 07:58:29 -0400 Subject: why doesn't reuseport increase throughput? Message-ID: <425918877a1d6a22d61f6204594c42a1.NginxMailingListEnglish@forum.nginx.org> Hello

The article https://www.nginx.com/blog/socket-sharding-nginx-release-1-9-1/ shows that the reuseport feature introduced in v1.9.1 can increase QPS by 2-3 times compared with accept_mutex on or off. But the result was disappointing when we tested it in our production with v1.11.2.2. Throughput did not improve but instead dropped by 10%, from 42K QPS (with accept_mutex off) to 38K QPS (with reuseport enabled), although it did indeed reduce latency. The two test cases are identical except that the latter has reuseport enabled. I wonder if I have missed some special configuration.

Thanks. Xiaofeng Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274281,274281#msg-274281 From nginx-forum at forum.nginx.org Wed May 17 13:07:30 2017 From: nginx-forum at forum.nginx.org (mkuehn) Date: Wed, 17 May 2017 09:07:30 -0400 Subject: Auto refresh for expired content?
In-Reply-To: <20170517111256.GE48854@Romans-MacBook-Air.local> References: <20170517111256.GE48854@Romans-MacBook-Air.local> Message-ID: Hi Roman,

thanks for your reply - with proxy_cache_background_update OFF, the correct test file "test.js" is requested:

"GET /test.js HTTP/1.0" 200 199503 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36"

With proxy_cache_background_update ON, only "/" is requested:

"GET / HTTP/1.0" 200 32287 "-" "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.96 Safari/537.36"

Seems there is something wrong :/ Best, Maik Posted at Nginx Forum: https://forum.nginx.org/read.php?2,274264,274282#msg-274282 From francis at daoine.org Wed May 17 15:52:54 2017 From: francis at daoine.org (Francis Daly) Date: Wed, 17 May 2017 16:52:54 +0100 Subject: Auto refresh for expired content? In-Reply-To: References: <20170516171841.GG10157@daoine.org> Message-ID: <20170517155254.GH10157@daoine.org> On Wed, May 17, 2017 at 03:29:03AM -0400, mkuehn wrote: Hi there, > proxy_cache_path /var/cache/nginx/spieldaten levels=1:2 > keys_zone=spieldaten:100m max_size=150m inactive=5d use_temp_path=off; > ... ... ... > proxy_cache test; I'm pretty sure that nginx's internationalisation/localisation efforts don't allow it to recognise that "test" and "spieldaten" should be considered equivalent :-) That aside: the auto-refresh works for me.
Using your proxy_cache settings in a test nginx.conf:

==
http {
    server {
        listen 8880;
        return 200 "Now it is $time_local\n";
        access_log logs/upstream.log;
    }

    proxy_cache_path /tmp/testcache levels=1 keys_zone=test:10m max_size=10m inactive=5d use_temp_path=off;

    server {
        listen 8080;
        proxy_cache test;
        proxy_cache_valid 200 302 1m;
        proxy_cache_valid 404 1m;
        proxy_cache_valid any 2m;
        proxy_ignore_headers X-Accel-Expires Expires Cache-Control;
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        proxy_cache_lock on;
        proxy_cache_background_update on;
        access_log logs/main.log;
        location / {
            proxy_pass http://127.0.0.1:8880;
        }
    }
}
==

I can do

$ while :; do date; curl http://127.0.0.1:8080/; sleep 7; done

and the first request after the 1-minute expiry gives me the old content; and the next request gives me the new content that was fetched 7 seconds previously. (And I can compare the different log files, to see how much my caching saved the upstream port 8880 server from processing.) Which seems to be exactly what you want.

Does that test work for you? If it does not, there is a problem to be resolved. If it does, then you can start to compare your own system with what is above, and spot the difference.

> Maybe you have an idea, why proxy_cache_background_update not working?

As above, it works for me. So - what is different in your system?

Good luck with it, f -- Francis Daly francis at daoine.org From simowitz at google.com Wed May 17 21:05:51 2017 From: simowitz at google.com (Jonathan Simowitz) Date: Wed, 17 May 2017 17:05:51 -0400 Subject: Upstream block: backup with max_fails=0 does not appear to work as expected Message-ID: Hello,

I have an upstream block with two servers as follows:

upstream {
    server foo.com;
    server bar.com max_fails=0 backup;
}

My desired use case would be that the foo.com server is hit for all requests and can be marked as down by nginx if it starts serving errors.
In this case nginx will fall back to hitting bar.com; however, bar.com should not be allowed to be marked down by nginx. What is actually happening is that the "max_fails=0" statement is essentially being ignored, causing the error message "no live upstreams while connecting to upstream" in my logs. Is there a configuration here that obtains my desired use case? Thank you, ~Jonathan -- Jonathan Simowitz | Jigsaw | Software Engineer | simowitz at google.com | 631-223-8608 -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed May 17 23:14:14 2017 From: francis at daoine.org (Francis Daly) Date: Thu, 18 May 2017 00:14:14 +0100 Subject: Reverse-proxying: Flask app with Bokeh server on Nginx In-Reply-To: References: Message-ID: <20170517231414.GI10157@daoine.org> On Wed, May 17, 2017 at 12:34:17PM +0200, J K via nginx wrote: Hi there, > Initially, without https, everything was running well. Yes. > I was deploying a > Bokeh server with the 'company_abc' app, and used a Flask app to render the > website and handle login/redirecting/etc. Yes. > Everything was behind a Nginx server. No. Your bokeh server was not behind nginx. Your bokeh server always listened on port 5006. Your nginx config did not proxy_pass to port 5006. Your web clients connected directly to bokeh on port 5006, not going through nginx. If you go back to the beginning, and configure flask behind nginx according to the flask documentation (as you had done); and configure bokeh behind nginx according to the bokeh documentation that you showed, then you may find that it all Just Works. Right now, you have an inconsistent mix of configurations. What you want is that whenever the client browser talks to flask, it uses a url prefix like https://www.example.com/flask/, and whenever the client browser talks to bokeh, it uses a url prefix like https://www.example.com/bokeh/. When flask talks to bokeh, it can use a url prefix like http://127.0.0.1:5006/bokeh/.
Or it can use https://www.example.com/bokeh/. It is probably better if it uses the first one. > The script tag looks like this: > >
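To make the layout Francis describes concrete — the browser reaching flask and bokeh only through nginx, each under its own URL prefix — a server block might look roughly like the sketch below. The /flask/ and /bokeh/ prefixes, the flask listen address 127.0.0.1:8000, and the certificate paths are illustrative assumptions, not configuration taken from the thread; only the bokeh address 127.0.0.1:5006 comes from the discussion.

```
server {
    listen 443 ssl;
    server_name www.example.com;
    ssl_certificate     /etc/nginx/ssl/example.com.crt;  # assumed path
    ssl_certificate_key /etc/nginx/ssl/example.com.key;  # assumed path

    # Flask app; 127.0.0.1:8000 is an assumed listen address.
    location /flask/ {
        proxy_pass http://127.0.0.1:8000;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    # Bokeh server; the Upgrade/Connection headers allow the websocket
    # to be proxied too (see http://nginx.org/en/docs/http/websocket.html).
    location /bokeh/ {
        proxy_pass http://127.0.0.1:5006;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_buffering off;
    }
}
```

With a shape like this, every URL the browser sees stays on https://www.example.com/, and only nginx talks to the two backends over plain http on localhost.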