From mdounin at mdounin.ru Tue Jul 1 00:33:45 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Jul 2014 04:33:45 +0400 Subject: difficulty adding headers In-Reply-To: References: <20140630191556.GI1849@mdounin.ru> Message-ID: <20140701003345.GS1849@mdounin.ru> Hello! On Mon, Jun 30, 2014 at 04:34:51PM -0400, ura wrote: > ok, thanks. > > 1. I was thinking that this was the case, based on the results I have seen, > yes. > 2. ah, ok - I didn't appreciate that. I found this page with php code: > http://licson.net/post/stream-videos-php/ > > is there a standard / recommended way to approach this with nginx? Streaming videos with php is silly, so the recommended approach is "Don't do that". > 3. I believe the problem I am encountering is that I am unable to explicitly > redirect file requests for a specific path (e.g. a path I create for use > with streaming videos, called 'stream') to a specific static address in the > server's filestorage, because every request is being handled via a php file > and there is a .php location block in the site config. so if I create a > location: > > location /stream/ { > internal; > root /var/www/data/; > } > > and then navigate to www.mysite.tld/stream/blah.mp4 First of all, try removing "internal", as it will prevent handling of all external requests. > what is occurring is that the stream block is never processed and the .php > block in the config is triggered instead.. I'm not 100% clear on why this is > occurring. > > here's a sample elgg nginx config file that is basically similar to mine: > https://gist.github.com/hellekin/755617 If you don't understand how requests are handled in your config, it's probably a good idea to throw it away and start with something simple, without regex locations, rewrites, if's and so on. > I have created a new thread in the forum here for this topic, since you have > really helpfully addressed my original question and the topic has changed. > (new thread: http://forum.nginx.org/read.php?2,251295) Just a note: it's not a forum, it's a mailing list. -- Maxim Dounin http://nginx.org/ From schlie at comcast.net Tue Jul 1 00:44:07 2014 From: schlie at comcast.net (Paul Schlie) Date: Mon, 30 Jun 2014 20:44:07 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> Message-ID: <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> Is there any possible solution for this problem? As although proxy_cache_lock may inhibit the creation of multiple proxy_cache files, it seemingly has no effect on the creation of multiple proxy_temp files, which are the true root of the problem that the description of proxy_cache_lock claims to solve (as all proxy_cache files are first proxy_temp files, so unless proxy_cache_lock can properly prevent the creation of multiple redundant proxy_temp file streams, it can seemingly not have the effect it claims to)? (Further, as temp_files are used to commonly source all reverse proxy'd reads, regardless of whether they're using a cache hashed naming scheme for proxy_cache files, or a symbolic naming scheme for reverse proxy'd static files; it would be nice if the fix were applicable to both.)
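For reference, a minimal sketch of the cache-lock machinery being discussed -- all directives here are real nginx directives, while the zone name, paths, sizes and backend address are illustrative:

    proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=demo:10m max_size=1g;

    server {
        listen 80;

        location / {
            proxy_pass http://127.0.0.1:8080;          # illustrative backend
            proxy_cache demo;
            proxy_cache_key $scheme$host$request_uri;  # identifies the cache node
            proxy_cache_lock on;              # one request populates a given node
            proxy_cache_lock_timeout 5s;      # other requests wait up to this long
            proxy_cache_use_stale updating;   # serve stale while a node is refreshed
        }
    }

Note that proxy_cache_lock applies only to the cache machinery; as the quoted discussion below notes, plain proxy_store'd responses are not covered by it.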
On Jun 24, 2014, at 10:58 PM, Paul Schlie wrote: > Hi, Upon further testing, it appears the problem exists even with proxy_cache'd files with "proxy_cache_lock on". > > (Please consider this a serious bug, which I'm surprised hasn't been detected before; verified on recently released 1.7.2) > > On Jun 24, 2014, at 8:58 PM, Paul Schlie wrote: > >> Again thank you. However ... (below) >> >> On Jun 24, 2014, at 8:30 PM, Maxim Dounin wrote: >> >>> Hello! >>> >>> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote: >>> >>>> Thank you; however it appears to have no effect on reverse proxy_store'd static files? >>> >>> Yes, it's part of the cache machinery. The proxy_store >>> functionality is dumb and just provides a way to store responses >>> received, nothing more. >> >> - There should be no difference between how reverse proxy'd files are accessed and first stored into corresponding temp_files (and below). >> >>> >>>> (Which seems odd, if it actually works for cached files; as both >>>> are first read into temp_files, being the root of the problem.) >>> >>> See above (and below). >>> >>>> Any idea on how to prevent multiple redundant streams and >>>> corresponding temp_files being created when reading/updating a >>>> reverse proxy'd static file from the backend? >>> >>> You may try to do so using limit_conn, and maybe error_page and >>> limit_req to introduce some delay. But unlikely it will be a >>> good / maintainable / easy to write solution. >> >> - Please consider implementing by default that no more streams than may become necessary if a previously opened stream appears to have died (timed out), as otherwise only more bandwidth and thereby delay will most likely result in completing the request.
Further, as there should be no difference between how reverse proxy read-streams and corresponding temp_files are created, regardless of whether they may be subsequently stored as either symbolically-named static files, or hash-named cache files; this behavior should be common to both. >> >>>> (Out of curiosity, why would anyone ever want many multiple >>>> redundant streams/temp_files ever opened by default?) >>> >>> You never know if responses are going to be the same. The part >>> which knows (or, rather, tries to) is called "cache", and has >>> lots of directives to control it. >> >> - If they're not "the same" then the tcp protocol stack has failed, which is nothing to do with nginx. >> (unless a backend server is frequently dropping connections, it's counterproductive to open multiple redundant streams; as doing so by default will only likely result in higher bandwidth and thereby slower response completion.) >> >>> -- >>> Maxim Dounin >>> http://nginx.org/ > From schlie at comcast.net Tue Jul 1 01:14:06 2014 From: schlie at comcast.net (Paul Schlie) Date: Mon, 30 Jun 2014 21:14:06 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> Message-ID: <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> (Seemingly, it may be beneficial to simply replace the sequentially numbered temp_file scheme with a hash-named scheme, where if cached, the file is simply retained for some period of time and/or other condition, and which may be optionally symbolically aliased using their uri path and thereby respectively logically accessed as a local static file, or deleted upon no longer being needed and not being cached; and thereby kill multiple birds with one stone per se?) On Jun 30, 2014, at 8:44 PM, Paul Schlie wrote: > Is there any possible solution for this problem? > > As although proxy_cache_lock may inhibit the creation of multiple proxy_cache files, it has seemingly no effect on the creation of multiple proxy_temp files, being the true root of the problem which the description of proxy_cache_lock claims to solve (as all proxy_cache files are first proxy_temp files, so unless proxy_cache_lock can properly prevent the creation of multiple redundant proxy_temp file streams, it can seemingly not have the effect it claims to)? > > (Further, as temp_files are used to commonly source all reverse proxy'd reads, regardless of whether they're using a cache hashed naming scheme for proxy_cache files, or a symbolic naming scheme for reverse proxy'd static files; it would be nice if the fix were applicable to both.) > > > On Jun 24, 2014, at 10:58 PM, Paul Schlie wrote: > >> Hi, Upon further testing, it appears the problem exists even with proxy_cache'd files with "proxy_cache_lock on". >> >> (Please consider this a serious bug, which I'm surprised hasn't been detected before; verified on recently released 1.7.2) >> >> On Jun 24, 2014, at 8:58 PM, Paul Schlie wrote: >> >>> Again thank you. However ... (below) >>> >>> On Jun 24, 2014, at 8:30 PM, Maxim Dounin wrote: >>> >>>> Hello! >>>> >>>> On Tue, Jun 24, 2014 at 07:51:04PM -0400, Paul Schlie wrote: >>>> >>>>> Thank you; however it appears to have no effect on reverse proxy_store'd static files? >>>> >>>> Yes, it's part of the cache machinery. The proxy_store >>>> functionality is dumb and just provides a way to store responses >>>> received, nothing more. >>> >>> - There should be no difference between how reverse proxy'd files are accessed and first stored into corresponding temp_files (and below). >>> >>>> >>>>> (Which seems odd, if it actually works for cached files; as both >>>>> are first read into temp_files, being the root of the problem.) >>>> >>>> See above (and below). >>>> >>>>> Any idea on how to prevent multiple redundant streams and >>>>> corresponding temp_files being created when reading/updating a >>>>> reverse proxy'd static file from the backend? >>>> >>>> You may try to do so using limit_conn, and maybe error_page and >>>> limit_req to introduce some delay. But unlikely it will be a >>>> good / maintainable / easy to write solution. >>> >>> - Please consider implementing by default that no more streams than may become necessary if a previously opened stream appears to have died (timed out), as otherwise only more bandwidth and thereby delay will most likely result in completing the request.
Further, as there should be no difference between how reverse proxy read-streams and corresponding temp_files are created, regardless of whether they may be subsequently stored as either symbolically-named static files, or hash-named cache files; this behavior should be common to both. >>> >>>>> (Out of curiosity, why would anyone ever want many multiple >>>>> redundant streams/temp_files ever opened by default?) >>>> >>>> You never know if responses are going to be the same. The part >>>> which knows (or, rather, tries to) is called "cache", and has >>>> lots of directives to control it. >>> >>> - If they're not "the same" then the tcp protocol stack has failed, which is nothing to do with nginx. >>> (unless a backend server is frequently dropping connections, it's counterproductive to open multiple redundant streams; as doing so by default will only likely result in higher bandwidth and thereby slower response completion.) >>> >>>> -- >>>> Maxim Dounin >>>> http://nginx.org/ From mdounin at mdounin.ru Tue Jul 1 01:32:35 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Jul 2014 05:32:35 +0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> Message-ID: <20140701013235.GX1849@mdounin.ru> Hello! On Mon, Jun 30, 2014 at 09:14:06PM -0400, Paul Schlie wrote: > (Seemingly, it may be beneficial to simply replace the > sequentially numbered temp_file scheme with a hash-named scheme, > where if cached, the file is simply retained for some period of > time and/or other condition, and which may be optionally > symbolically aliased using their uri path and thereby > respectively logically accessed as a local static file, or > deleted upon no longer being needed and not being cached; and > thereby kill multiple birds with one stone per se?) Sorry for not following your discussion with yourself, but it looks like you didn't understand what was explained earlier: [...] > >>>>> (Out of curiosity, why would anyone ever want many multiple > >>>>> redundant streams/temp_files ever opened by default?) > >>>> > >>>> You never know if responses are going to be the same. The part > >>>> which knows (or, rather, tries to) is called "cache", and has > >>>> lots of directives to control it. > >>> > >>> - If they're not "the same" then the tcp protocol stack has failed, which is nothing to do with nginx. In http, responses are not guaranteed to be the same. Each response can be unique, and you can't assume responses have to be identical even if their URLs match. -- Maxim Dounin http://nginx.org/ From schlie at comcast.net Tue Jul 1 03:10:52 2014 From: schlie at comcast.net (Paul Schlie) Date: Mon, 30 Jun 2014 23:10:52 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited?
In-Reply-To: <20140701013235.GX1849@mdounin.ru> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> Message-ID: <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> Regarding: > In http, responses are not guaranteed to be the same. Each > response can be unique, and you can't assume responses have to be > identical even if their URLs match. Yes, but potentially unique does not imply that upon the first valid ok or valid partial response that it will likely be productive to continue to open further such channels unless no longer responsive, as doing so will most likely be counterproductive, only wasting limited resources by establishing redundant channels; being seemingly why proxy_cache_lock was introduced, as you initially suggested. On Jun 30, 2014, at 9:32 PM, Maxim Dounin wrote: > Hello! > > On Mon, Jun 30, 2014 at 09:14:06PM -0400, Paul Schlie wrote: > >> (Seemingly, it may be beneficial to simply replace the >> sequentially numbered temp_file scheme with a hash-named scheme, >> where if cached, the file is simply retained for some period of >> time and/or other condition, and which may be optionally >> symbolically aliased using their uri path and thereby >> respectively logically accessed as a local static file, or >> deleted upon no longer being needed and not being cached; and >> thereby kill multiple birds with one stone per se?) > > Sorry for not following your discussion with yourself, but it looks like > you didn't understand what was explained earlier: > > [...] > >>>>>>> (Out of curiosity, why would anyone ever want many multiple >>>>>>> redundant streams/temp_files ever opened by default?) >>>>>> >>>>>> You never know if responses are going to be the same. The part >>>>>> which knows (or, rather, tries to) is called "cache", and has >>>>>> lots of directives to control it. >>>>> >>>>> - If they're not "the same" then the tcp protocol stack has failed, which is nothing to do with nginx. > > In http, responses are not guaranteed to be the same. Each > response can be unique, and you can't assume responses have to be > identical even if their URLs match. > > -- > Maxim Dounin > http://nginx.org/ From nginx-forum at nginx.us Tue Jul 1 04:05:18 2014 From: nginx-forum at nginx.us (c0nw0nk) Date: Tue, 01 Jul 2014 00:05:18 -0400 Subject: Nginx Windows High Traffic issues In-Reply-To: <9a1a33b2fe1b1663c6cf2d8752e714b3.NginxMailingListEnglish@forum.nginx.org> References: <20140630175333.GF1849@mdounin.ru> <9a1a33b2fe1b1663c6cf2d8752e714b3.NginxMailingListEnglish@forum.nginx.org> Message-ID: <815105fc16b4ee78087e80227f7346b8.NginxMailingListEnglish@forum.nginx.org> I actually came across a setting in my device manager called write cache buffer flushing. When you disable Write Cache Buffer Flushing, this allows application software to blaze ahead after writing data to disk without waiting for the physical write to complete. http://noel.prodigitalsoftware.com/temp/WriteCacheBufferFlushing.jpg I have enabled it, rebooted my machine, and will post in a few hours how things are going with it.
Now this won't be a solution, but it will definitely help write a lot more data simultaneously. That should help until I find an SSD/SAS to dedicate my money to. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251186,251347#msg-251347 From lucas at slcoding.com Tue Jul 1 04:57:47 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 06:57:47 +0200 Subject: proxy_pass_header not working in 1.6.0 Message-ID: <53B23FCB.50403@slcoding.com> Hi guys, I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4). Sadly I've found out that, after upgrading, proxy_pass_header seems to have stopped working, meaning no headers are passed from the upstream at all. I've tried setting caching headers, expires headers, removing ETag etc., but nothing seems to go through. I then wanted to test it on other machines, because it could be a faulty installation, but I can replicate it on 3 different machines; I'm always getting my releases from https://github.com/nginx/nginx/releases. My config looks as follows: https://gist.github.com/lucasRolff/c4a359d93b5906678a23 Do you guys know what can be wrong, and if there is a fix for it in any newer version of nginx, or if I should downgrade to 1.4.4 again (where I know it's working, at least). Thanks in advance! Best regards, Lucas Rolff From nginx-forum at nginx.us Tue Jul 1 05:35:57 2014 From: nginx-forum at nginx.us (audvare) Date: Tue, 01 Jul 2014 01:35:57 -0400 Subject: dav and dav_ext, mp4 module, PROPFIND not working for files In-Reply-To: References: Message-ID: <98631735e84d56149c25e66bb20b29b0.NginxMailingListEnglish@forum.nginx.org> Roman Arutyunyan Wrote: ------------------------------------------------------- > > Currently nginx does not seem to be able to do what you want. If > you're ready to patch > the source here's the patch fixing the issue. > > diff -r 0dd77ef9f114 src/http/modules/ngx_http_mp4_module.c > --- a/src/http/modules/ngx_http_mp4_module.c Fri Jun 27 13:06:09 > 2014 +0400 > +++ b/src/http/modules/ngx_http_mp4_module.c Mon Jun 30 19:10:59 > 2014 +0400 > @@ -431,7 +431,7 @@ ngx_http_mp4_handler(ngx_http_request_t > ngx_http_core_loc_conf_t *clcf; > > if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > - return NGX_HTTP_NOT_ALLOWED; > + return NGX_DECLINED; > } > > if (r->uri.data[r->uri.len - 1] == '/') { Thanks. This works well. < HTTP/1.1 207 Multi-Status /video/avgn/t_screwattack_avgn_bugsbcc_901_gt.mp4 ... Is there any chance this will make it into upstream so I don't have to keep on patching? Not that I mind that much, because with Gentoo and user patches it is extremely easy, but I guess I would of course be concerned that the code may change drastically such that the patch stops working.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251279,251350#msg-251350 From nginx-forum at nginx.us Tue Jul 1 07:10:07 2014 From: nginx-forum at nginx.us (khav) Date: Tue, 01 Jul 2014 03:10:07 -0400 Subject: SSL slow on nginx In-Reply-To: <20140630224006.GJ1849@mdounin.ru> References: <20140630224006.GJ1849@mdounin.ru> Message-ID: <748a8ef6922a13fa57486297d729fdf2.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim and GreenGecko for the insights The worker process count does match my number of CPU cores (running on 8 cores atm) How can I know the number of handshakes per second occurring on the server? The openssl speed results have been posted at http://pastebin.com/hNeVhJfa for readability You can find my full list of SSL ciphers here ---> http://pastebin.com/7xJRJgJC If you can suggest "faster ciphers" with the same level of compatibility, that would be awesome Will a faster CPU actually solve the issue? My CPU load never reached a value > 0.50 as far as I know, and the average is like 0.30 Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251277,251353#msg-251353 From contact at jpluscplusm.com Tue Jul 1 07:10:45 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 1 Jul 2014 08:10:45 +0100 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B23FCB.50403@slcoding.com> References: <53B23FCB.50403@slcoding.com> Message-ID: On 1 Jul 2014 07:58, "Lucas Rolff" wrote: > > Hi guys, > > I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4). > > Sadly I've found out, after upgrading proxy_pass_header seems to stop working, meaning no headers is passed from the upstream at all You need to read the proxy_pass_header and proxy_hide_header reference documentation. You're using it wrongly, possibly because you've assumed it takes generic parameters instead of very specific ones. From lucas at slcoding.com Tue Jul 1 07:19:56 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 09:19:56 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: References: <53B23FCB.50403@slcoding.com> Message-ID: <53B2611C.4040407@slcoding.com> Well, it used to work before 1.6.0.. For me http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header shows that I should do: proxy_pass_header Cache-Control; So that should be correct. Best regards, Lucas Rolff Jonathan Matthews wrote: > > On 1 Jul 2014 07:58, "Lucas Rolff" > wrote: > > > > Hi guys, > > > > I'm currently running nginx version 1.6.0 (after upgrading from 1.4.4). > > > > Sadly I've found out, after upgrading proxy_pass_header seems to > stop working, meaning no headers is passed from the upstream at all > > You need to read the proxy_pass_header and proxy_hide_header reference > documentation. You're using it wrongly, possibly because you've > assumed it takes generic parameters instead of very specific ones.
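For reference, a sketch of the semantics being debated in this thread, with an illustrative backend address: proxy_pass_header only re-enables the small set of header fields nginx hides by default ("Date", "Server", "X-Pad", "X-Accel-..."), while fields such as Cache-Control and Expires are passed through without any directive at all:

    location / {
        proxy_pass http://127.0.0.1:8081;   # illustrative backend
        proxy_pass_header Server;           # re-enable a header hidden by default
        proxy_hide_header X-Powered-By;     # hide a header that would otherwise pass
        # no directive is needed for Cache-Control or Expires
    }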
From contact at jpluscplusm.com Tue Jul 1 07:28:37 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 1 Jul 2014 08:28:37 +0100 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B2611C.4040407@slcoding.com> References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> Message-ID: On 1 Jul 2014 10:20, "Lucas Rolff" wrote: > > Well, it used to work before 1.6.0.. > > For me http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header shows that I should do: > > proxy_pass_header Cache-Control; > > So that should be correct No. You have misread the documentation. proxy_pass_header accepts a very limited set of headers whereas your use of it assumes it is generic. Please carefully *re*read the _pass_ AND _hide_ documentation as I suggested. From lucas at slcoding.com Tue Jul 1 07:33:57 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 09:33:57 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> Message-ID: <53B26465.1020907@slcoding.com> Do you have a link to documentation that has info about this then? Because in the below link, and in http://wiki.nginx.org/HttpProxyModule#proxy_pass_header there's nothing about what it accepts. Best regards, Lucas Rolff Jonathan Matthews wrote: > > On 1 Jul 2014 10:20, "Lucas Rolff" > wrote: > > > > Well, it used to work before 1.6.0.. > > > > For me > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header > shows that I should do: > > > > proxy_pass_header Cache-Control; > > > > So that should be correct > > No. You have misread the documentation. > > proxy_pass_header accepts a very limited set of headers whereas your > use of it assumes it is generic. > > Please carefully *re*read the _pass_ AND _hide_ documentation as I > suggested. From contact at jpluscplusm.com Tue Jul 1 07:54:02 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 1 Jul 2014 08:54:02 +0100 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B26465.1020907@slcoding.com> References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> <53B26465.1020907@slcoding.com> Message-ID: On 1 Jul 2014 10:34, "Lucas Rolff" wrote: > > Do you have a link to documentation that has info about this then? Because in the below link, and in http://wiki.nginx.org/HttpProxyModule#proxy_pass_header there's nothing about what it accepts. How about the doc you already found, and then the link that it contains: >> On 1 Jul 2014 10:20, "Lucas Rolff" wrote: >> > For me http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header From lucas at slcoding.com Tue Jul 1 08:01:34 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 10:01:34 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> <53B26465.1020907@slcoding.com> Message-ID: <53B26ADE.9000901@slcoding.com> So.. Where is the thing that states I can't use proxy_pass_header cache-control, or expires?
:))) Maybe I'm just stupid Best regards, Lucas Rolff Jonathan Matthews wrote: > > On 1 Jul 2014 10:34, "Lucas Rolff" > wrote: > > > > Do you have a link to documentation that has info about this then? > Because in the below link, and in > http://wiki.nginx.org/HttpProxyModule#proxy_pass_header there's nothing > about what it accepts. > > How about the doc you already found, and then the link that it contains: > > >> On 1 Jul 2014 10:20, "Lucas Rolff" > wrote: > >> > For me > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass_header From contact at jpluscplusm.com Tue Jul 1 08:09:00 2014 From: contact at jpluscplusm.com (Jonathan Matthews) Date: Tue, 1 Jul 2014 09:09:00 +0100 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B26ADE.9000901@slcoding.com> References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> <53B26465.1020907@slcoding.com> <53B26ADE.9000901@slcoding.com> Message-ID: On 1 Jul 2014 11:01, "Lucas Rolff" wrote: > > So.. Where is the thing that states I can't use proxy_pass_header cache-control, or expires? :))) The proxy_hide_header and proxy_pass_header reference docs. From rpaprocki at fearnothingproductions.net Tue Jul 1 08:25:39 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Tue, 01 Jul 2014 01:25:39 -0700 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> <53B26465.1020907@slcoding.com> <53B26ADE.9000901@slcoding.com> Message-ID: <53B27083.4040903@fearnothingproductions.net> Can we move past passive aggressive posting to a public mailing list and actually try to accomplish something? The nginx docs indicate the following about proxy_pass_header "Permits passing otherwise disabled header fields from a proxied server to a client." 'otherwise disabled header fields' are documented as the following (from proxy_hide_header docs): By default, nginx does not pass the header fields "Date", "Server", "X-Pad", and "X-Accel-..." from the response of a proxied server to a client. So I don't know why you would need to have proxy_pass_header Cache-Control in the first place, since this wouldn't seem to be dropped by default from the response of a proxied server to a client. Have you tried downgrading back to 1.4.4 to confirm whatever problem you're having doesn't exist within some other part of your infrastructure that was potentially changed as part of your upgrade? On 07/01/2014 01:09 AM, Jonathan Matthews wrote: > On 1 Jul 2014 11:01, "Lucas Rolff" > wrote: >> >> So.. Where is the thing that states I can't use proxy_pass_header > cache-control, or expires? :))) > > The proxy_hide_header and proxy_pass_header reference docs.
From lucas at slcoding.com Tue Jul 1 08:30:47 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 10:30:47 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B27083.4040903@fearnothingproductions.net> References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> <53B26465.1020907@slcoding.com> <53B26ADE.9000901@slcoding.com> <53B27083.4040903@fearnothingproductions.net> Message-ID: <53B271B7.4070407@slcoding.com> I've verified that 1.4.4 works as it should, I receive the cache-control and expires headers sent from upstream (Apache 2.4 in this case); upgrading to nginx 1.6.0 breaks this, no config changes, nothing. But thanks for the explanation Robert! I'll try to investigate it further to see if I can find the root cause, since for me it is very odd that they're suddenly not sent to the client anymore. Best regards, Lucas Rolff Robert Paprocki wrote: > Can we move past passive aggressive posting to a public mailing list and > actually try to accomplish something? > > The nginx docs indicate the following about proxy_pass_header > > "Permits passing otherwise disabled header fields from a proxied server > to a client." > > 'otherwise disabled header fields' are documented as the following (from > proxy_hide_header docs): > > By default, nginx does not pass the header fields "Date", "Server", > "X-Pad", and "X-Accel-..." from the response of a proxied server to a > client. > > So I don't know why you would need to have proxy_pass_header > Cache-Control in the first place, since this wouldn't seem to be dropped > by default from the response of a proxied server to a client. > > Have you tried downgrading back to 1.4.4 to confirm whatever problem > you're having doesn't exist within some other part of your > infrastructure that was potentially changed as part of your upgrade? > > > On 07/01/2014 01:09 AM, Jonathan Matthews wrote: >> On 1 Jul 2014 11:01, "Lucas Rolff"> > wrote: >>> So.. Where is the thing that states I can't use proxy_pass_header >> cache-control, or expires? :))) >> >> The proxy_hide_header and proxy_pass_header reference docs. From lucas at slcoding.com Tue Jul 1 10:40:30 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 12:40:30 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B27083.4040903@fearnothingproductions.net> References: <53B23FCB.50403@slcoding.com> <53B2611C.4040407@slcoding.com> <53B26465.1020907@slcoding.com> <53B26ADE.9000901@slcoding.com> <53B27083.4040903@fearnothingproductions.net> Message-ID: <53B2901E.9040806@slcoding.com> I've been investigating, and it seems like it's related to 1.6 or so - because 1.4.2 and 1.4.4 work perfectly with the config in the first email. Can anyone possibly reproduce this as well? Best regards, Lucas R Robert Paprocki wrote: > Can we move past passive aggressive posting to a public mailing list and > actually try to accomplish something?
> > The nginx docs indicate the following about proxy_pass_header > > "Permits passing otherwise disabled header fields from a proxied server > to a client." > > 'otherwise disabled header fields' are documented as the following (from > proxy_hide_header docs): > > By default, nginx does not pass the header fields "Date", "Server", > "X-Pad", and "X-Accel-..." from the response of a proxied server to a > client. > > So I don't know why you would need to have proxy_pass_header > Cache-Control in the first place, since this wouldn't seem to be dropped > by default from the response of a proxied server to a client. > > Have you tried downgrading back to 1.4.4 to confirm whatever problem > you're having doesn't exist within some other part of your > infrastructure that was potentially changed as part of your upgrade? > > > On 07/01/2014 01:09 AM, Jonathan Matthews wrote: >> On 1 Jul 2014 11:01, "Lucas Rolff"> > wrote: >>> So.. Where is the thing that states I can't use proxy_pass_header >> cache-control, or expires? :))) >> >> The proxy_hide_header and proxy_pass_header reference docs. From vbart at nginx.com Tue Jul 1 10:40:59 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 01 Jul 2014 14:40:59 +0400 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B271B7.4070407@slcoding.com> References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> Message-ID: <10328024.X5V255Nk3V@vbart-workstation> On Tuesday 01 July 2014 10:30:47 Lucas Rolff wrote: > I've verified that 1.4.4 works as it should, I receive the cache-control > and expires headers sent from upstream (Apache 2.4 in this case), > upgrading to nginx 1.6.0 breaks this, no config changes, nothing. > > But thanks for the explanation Robert! > I'll try investigate it further to see if I can find the root cause, > since for me this is very odd that it's suddenly not sent to the client > anymore. > [..] They may not be sent because your backend stopped returning them for some reason. Try to investigate what happens on the wire between your backend and nginx. wbr, Valentin V. Bartenev
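One nginx-side way to check what the backend actually returned, without a packet capture, is to log the documented $upstream_http_* variables -- a sketch; the format name and log path are illustrative:

    log_format upstream_headers '$remote_addr "$request" $status '
                                'upstream=$upstream_addr '
                                'cc="$upstream_http_cache_control" '
                                'exp="$upstream_http_expires"';
    access_log /var/log/nginx/upstream.log upstream_headers;

Requests served without contacting the backend will log a dash for the upstream values, which by itself distinguishes proxied responses from files served directly from disk.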
From lucas at slcoding.com Tue Jul 1 11:00:05 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 13:00:05 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <10328024.X5V255Nk3V@vbart-workstation> References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> <10328024.X5V255Nk3V@vbart-workstation> Message-ID: <53B294B5.9060209@slcoding.com> nginx: curl -I http://domain.com/wp-content/uploads/2012/05/forside.png HTTP/1.1 200 OK Server: nginx Date: Tue, 01 Jul 2014 10:42:06 GMT Content-Type: image/png Content-Length: 87032 Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT Connection: keep-alive Vary: Accept-Encoding ETag: "51399b28-153f8" Accept-Ranges: bytes Backend: curl -I http://domain.com:8081/wp-content/uploads/2012/05/forside.png HTTP/1.1 200 OK Date: Tue, 01 Jul 2014 10:42:30 GMT Server: Apache Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT Accept-Ranges: bytes Content-Length: 87032 Cache-Control: max-age=2592000 Expires: Thu, 31 Jul 2014 10:42:30 GMT Content-Type: image/png So backend returns the headers just fine. Best regards, Lucas Rolff Valentin V. Bartenev wrote: > On Tuesday 01 July 2014 10:30:47 Lucas Rolff wrote: >> I've verified that 1.4.4 works as it should, I receive the cache-control >> and expires headers sent from upstream (Apache 2.4 in this case), >> upgrading to nginx 1.6.0 breaks this, no config changes, nothing. >> >> But thanks for the explanation Robert! >> I'll try investigate it further to see if I can find the root cause, >> since for me this is very odd that it's suddenly not sent to the client >> anymore. >> > [..] > > They can be not sent because your backend stopped returning them for some > reason. Try to investigate what happens on the wire between your backend > and nginx. > > wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue Jul 1 11:01:59 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Jul 2014 15:01:59 +0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> Message-ID: <20140701110159.GB1849@mdounin.ru> Hello! On Mon, Jun 30, 2014 at 11:10:52PM -0400, Paul Schlie wrote: > Regarding: > > > In http, responses are not guaranteed to be the same. Each > > response can be unique, and you can't assume responses have to be > > identical even if their URLs match. > > Yes, but potentially unique does not imply that upon the first valid ok or valid > partial response that it will likely be productive to continue to open further such > channels unless no longer responsive, as doing so will most likely be counter > productive, only wasting limited resources by establishing redundant channels; > being seemingly why proxy_cache_lock was introduced, as you initially suggested.
Again: responses are not guaranteed to be the same, and unless you are using cache (and hence proxy_cache_key and various header checks to ensure responses are at least interchangeable), the only thing you can do is to proxy requests one by one. If you are using cache, then there is proxy_cache_key to identify a resource requested, and proxy_cache_lock to prevent multiple parallel requests from populating the same cache node (and "proxy_cache_use_stale updating" to prevent multiple requests when updating a cache node). In theory, cache code can be improved (compared to what we currently have) to introduce sending of a response being loaded into a cache to multiple clients. I.e., stop waiting for a cache lock once we've got the response headers, and stream the response body being loaded to all clients waiting for it. This should/can help when loading large files into a cache, when waiting with proxy_cache_lock for a complete response isn't cheap. In practice, introducing such code isn't cheap either, and it's not about using other names for temporary files. -- Maxim Dounin http://nginx.org/ From rpaprocki at fearnothingproductions.net Tue Jul 1 11:02:47 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Tue, 01 Jul 2014 04:02:47 -0700 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B294B5.9060209@slcoding.com> References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> <10328024.X5V255Nk3V@vbart-workstation> <53B294B5.9060209@slcoding.com> Message-ID: <53B29557.1020001@fearnothingproductions.net> You need to examine traffic over the wire between the proxy and the origin as you send a request from an outside client to the proxy. This will allow you to see if the origin is even returning the expected headers to the proxy, or if the proxy is seeing a different response than a direct client is. On 07/01/2014 04:00 AM, Lucas Rolff wrote: > > So backend returns the headers just fine. From lucas at slcoding.com Tue Jul 1 11:31:45 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 1 Jul 2014 13:31:45 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B29557.1020001@fearnothingproductions.net> References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> <10328024.X5V255Nk3V@vbart-workstation> <53B294B5.9060209@slcoding.com> <53B29557.1020001@fearnothingproductions.net> Message-ID: Seems like it's not possible to use try_files together with proxy_pass_header. So if it was a bug before that you could get the headers from the backend but still serve the file using nginx, I don't know. All dynamic files which I send to the backend have cache-control headers set. All static files (which I want to serve using nginx, but inherit the cache-control header from the backend) don't work, as they used to. - lucas R Robert Paprocki wrote: You need to examine traffic over the wire between the proxy and the origin as you send a request from an outside client to the proxy. This will allow you to see if the origin is even returning the expected headers to the proxy, or if the proxy is seeing a different response than a direct client is. On 07/01/2014 04:00 AM, Lucas Rolff wrote: So backend returns the headers just fine.
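A sketch of what Lucas appears to be after -- nginx serving static files from disk with its own caching headers, falling back to the backend otherwise; the paths and backend address are illustrative:

    location ~* \.(png|jpe?g|gif|css|js)$ {
        root /var/www/example;             # illustrative document root
        expires 30d;                       # nginx emits Expires and Cache-Control: max-age
        try_files $uri @backend;           # fall back when the file is absent on disk
    }

    location @backend {
        proxy_pass http://127.0.0.1:8081;  # illustrative backend
    }

A file served from disk never carries the proxied backend's headers, so any caching headers for it have to come from nginx itself.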
From shahzaib.cb at gmail.com Tue Jul 1 11:57:21 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 1 Jul 2014 16:57:21 +0500 Subject: proxy_cache not serving file from edge server !! Message-ID: We've an origin and edge server with nginx-1.6. The origin web server (located in the U.S.) is configured with the nginx geo module, and the edge (local ISP) is configured with proxy_cache in order to cache files from the origin server and serve them from there later. We're using the following method for caching with proxy_cache :- 1. The client (1.1.1.1) sends an mp4 request to the origin web server, and the geo module on the origin checks: if the IP is 1.1.1.1, then that client is passed to the edge server using proxy_pass. 2. The edge checks if the file is in proxy_cache; if so, it should serve the file locally, and if the file is not in proxy_cache, it'll pass the request back to the origin server; the client will be served from the origin server, and the requested file will also be cached on the local server, so next time the edge will not have to pass the request to the origin server again and can serve the same file locally. But it looks like our caching is not working as expected. Our ISP is complaining that, whenever the edge server serves the file, instead of serving that file to the local client (1.1.1.1) it serves the file back to the origin server (U.S.), and all outgoing bandwidth is going back to the U.S. instead of to local clients (so of course bandwidth is not being saved). So I want to ask: if the origin server is passing the request to the edge server, the cached file must be served locally, but the request is going back to the origin server even when the cache status is HIT. Following are my configs :- ORIGIN :-

    geo $TW {
        default 0;
        1.1.1.1 1;
    }

    server {
        listen 80;
        server_name origin.files.com origin.gear.net origin.gear.com;

        location / {
            root /var/www/html/files;
            index index.html index.htm index.php;
        }

        location ~ \.(mp4|jpg)$ {
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            if ($TW) {
                proxy_pass http://tw002.edge.com:80;
            }
            mp4;
            root /var/www/html/files;
            expires 7d;
            valid_referers none blocked video.pk *.video.pk blog.video.pk *.facebook.com *.twitter.com *.files.com *.gear.net video.tv *.video.tv videomedia.tv www.videomedia.tv embed.videomedia.tv;
            if ($invalid_referer) {
                return 403;
            }
        }

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
            root /var/www/html/files;
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }

        location ~ /\.ht {
            deny all;
        }
    }

EDGE :-

    #proxy_ignore_headers "Set-Cookie";
    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m loader_threshold=200 loader_files=500 inactive=1d max_size=62g;

    server {
        listen 80;
        server_name tw002.edge.com;
        root /var/www/html/files;

        location ~ \.(mp4|jpeg|jpg)$ {
            root /var/www/html/files;
            mp4;
            try_files $uri @getfrom_origin;
        }

        location @getfrom_origin {
            proxy_pass http://origin.files.com:80;
            # proxy_cache_valid 200 302 60m;
            proxy_cache_valid 15d;
            proxy_cache static;
            proxy_cache_min_uses 1;
        }
    }

Help will be highly appreciated. From shahzaib.cb at gmail.com Tue Jul 1 12:07:34 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Tue, 1 Jul 2014 17:07:34 +0500 Subject: proxy_cache not serving file from edge server !! In-Reply-To: References: Message-ID: Our caching method is :- client ----> origin ---> edge.
On Tue, Jul 1, 2014 at 4:57 PM, shahzaib shahzaib wrote: > We've an origin and edge server with nginx-1.6. [... full original message and configs quoted verbatim; see the previous post ...] > Help will be highly appreciated.
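For comparison, the arrangement suggested later in the thread -- clients pointed directly at the edge, which answers from its cache and contacts the origin only on a miss -- might be sketched as follows, reusing the host names from the configs above:

    proxy_cache_path /data/nginx/cache levels=1:2 keys_zone=static:100m inactive=1d max_size=62g;

    server {
        listen 80;
        server_name tw002.edge.com;

        location / {
            proxy_pass http://origin.files.com;  # origin is contacted only on a cache miss
            proxy_cache static;
            proxy_cache_valid 200 15d;
            proxy_set_header Host $http_host;
        }
    }

With this shape, cached responses leave the edge toward local clients instead of traveling back toward the origin.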
From mdounin at mdounin.ru Tue Jul 1 12:16:55 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 1 Jul 2014 16:16:55 +0400 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <53B294B5.9060209@slcoding.com> References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> <10328024.X5V255Nk3V@vbart-workstation> <53B294B5.9060209@slcoding.com> Message-ID: <20140701121655.GE1849@mdounin.ru> Hello! On Tue, Jul 01, 2014 at 01:00:05PM +0200, Lucas Rolff wrote: > nginx: > > curl -I http://domain.com/wp-content/uploads/2012/05/forside.png > HTTP/1.1 200 OK > Server: nginx > Date: Tue, 01 Jul 2014 10:42:06 GMT > Content-Type: image/png > Content-Length: 87032 > Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT > Connection: keep-alive > Vary: Accept-Encoding > ETag: "51399b28-153f8" > Accept-Ranges: bytes > > Backend: > > curl -I http://domain.com:8081/wp-content/uploads/2012/05/forside.png > HTTP/1.1 200 OK > Date: Tue, 01 Jul 2014 10:42:30 GMT > Server: Apache > Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT > Accept-Ranges: bytes > Content-Length: 87032 > Cache-Control: max-age=2592000 > Expires: Thu, 31 Jul 2014 10:42:30 GMT > Content-Type: image/png > > So backend returns the headers just fine. The response returned by nginx is a static file served by nginx itself. Note the ETag header returned, and the "location ~*.*\.(3gp|gif|jpg|jpeg|png|..." in your config - it looks like the file exists on the filesystem, and is returned directly as per the configuration. It's no surprise the response doesn't have any headers which are normally returned by your backend. (And yes, all proxy_pass_header directives in your config are meaningless and should be removed.) -- Maxim Dounin http://nginx.org/ From lucas at slcoding.com Tue Jul 1 12:33:54 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 1 Jul 2014 14:33:54 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <20140701121655.GE1849@mdounin.ru> References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> <10328024.X5V255Nk3V@vbart-workstation> <53B294B5.9060209@slcoding.com> <20140701121655.GE1849@mdounin.ru> Message-ID: Hmm, okay.. Then I'll go back to an old buggy version of nginx which gives me the possibility to use the headers from the backend! Best regards, Lucas Rolff On Tuesday, July 1, 2014, Maxim Dounin wrote: > Hello! > > On Tue, Jul 01, 2014 at 01:00:05PM +0200, Lucas Rolff wrote: > > > nginx: > > > > curl -I http://domain.com/wp-content/uploads/2012/05/forside.png > > HTTP/1.1 200 OK > > Server: nginx > > Date: Tue, 01 Jul 2014 10:42:06 GMT > > Content-Type: image/png > > Content-Length: 87032 > > Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT > > Connection: keep-alive > > Vary: Accept-Encoding > > ETag: "51399b28-153f8" > > Accept-Ranges: bytes > > > > Backend: > > > > curl -I http://domain.com:8081/wp-content/uploads/2012/05/forside.png > > HTTP/1.1 200 OK > > Date: Tue, 01 Jul 2014 10:42:30 GMT > > Server: Apache > > Last-Modified: Fri, 08 Mar 2013 08:02:48 GMT > > Accept-Ranges: bytes > > Content-Length: 87032 > > Cache-Control: max-age=2592000 > > Expires: Thu, 31 Jul 2014 10:42:30 GMT > > Content-Type: image/png > > > > So backend returns the headers just fine. > > The response returned by nginx is a static file served by nginx > itself. Note the ETag header returned, and the "location > ~*.*\.(3gp|gif|jpg|jpeg|png|..."
in your config - it looks like > the file exists on the filesystem, and returned directly as per > configuration. There is no surprise the response doesn't have any > headers which are normally returned by your backend. > > (And yes, all proxy_pass_header directives in your config are > meaningless and should be removed.) > > -- > Maxim Dounin > http://nginx.org/ From vbart at nginx.com Tue Jul 1 12:38:50 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 01 Jul 2014 16:38:50 +0400 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: References: <53B23FCB.50403@slcoding.com> <20140701121655.GE1849@mdounin.ru> Message-ID: <6299885.7sneN0OoDe@vbart-workstation> On Tuesday 01 July 2014 14:33:54 Lucas Rolff wrote: > Hmm, okay.. > > Then I'll go back to an old buggy version of nginx which gives me the > possibility to use the headers from Backend! > [..] It doesn't do this either. Probably, it just has a different configuration or permissions, which results in try_files always failing, so all requests are served from your backend. wbr, Valentin V. Bartenev From schlie at comcast.net Tue Jul 1 12:44:47 2014 From: schlie at comcast.net (Paul Schlie) Date: Tue, 1 Jul 2014 08:44:47 -0400 Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited? In-Reply-To: <20140701110159.GB1849@mdounin.ru> References: <3F28F29E-B638-4A85-9FA2-1CCFF0F61C79@comcast.net> <20140624223601.GS1849@mdounin.ru> <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> <20140701110159.GB1849@mdounin.ru> Message-ID: <058EC580-A754-4218-996F-A8177F8E9552@comcast.net> As it appears a downstream response is not cached until first completely read into a temp_file (which for a large file may require 100's if not 1,000's of MB be transferred), there appears to be no "cache node" formed from which to "lock" or serve "stale" responses, and thereby until the first "cache node" is usably created, proxy_cache_lock has nothing to lock requests to? The code does not appear to be forming a "cache node" using the designated cache_key until the requested downstream element has completed transfer, as you've noted? For the scheme to work, a lockable cache_node would need to be formed immediately upon the first unique cache_key request, and not wait until the transfer of the requested item being stored into a temp_file is complete; as otherwise multiple redundant active streams between nginx and a backend server may be formed, each most likely transferring the same information needlessly; being what proxy_cache_lock was seemingly introduced to prevent (but it doesn't)? On Jul 1, 2014, at 7:01 AM, Maxim Dounin wrote: > Hello! > > On Mon, Jun 30, 2014 at 11:10:52PM -0400, Paul Schlie wrote: > >> Regarding: >> >>> In http, responses are not guaranteed to be the same. Each >>> response can be unique, and you can't assume responses have to be >>> identical even if their URLs match.
>> Yes, but potentially unique does not imply that upon the first valid ok or valid >> partial response that it will likely be productive to continue to open further such >> channels unless no longer responsive, as doing so will most likely be counterproductive, >> only wasting limited resources by establishing redundant channels; >> being seemingly why proxy_cache_lock was introduced, as you initially suggested. > > Again: responses are not guaranteed to be the same, and unless > you are using cache (and hence proxy_cache_key and various header > checks to ensure responses are at least interchangeable), the only > thing you can do is to proxy requests one by one. > > If you are using cache, then there is proxy_cache_key to identify > a resource requested, and proxy_cache_lock to prevent multiple > parallel requests from populating the same cache node (and > "proxy_cache_use_stale updating" to prevent multiple requests when > updating a cache node). > > In theory, cache code can be improved (compared to what we > currently have) to introduce sending of a response being loaded > into a cache to multiple clients. I.e., stop waiting for a cache > lock once we've got the response headers, and stream the response > body being loaded to all clients waiting for it. This should/can > help when loading large files into a cache, when waiting with > proxy_cache_lock for a complete response isn't cheap. In > practice, introducing such code isn't cheap either, and it's not > about using other names for temporary files. > > -- > Maxim Dounin > http://nginx.org/ From nginx-forum at nginx.us Tue Jul 1 12:48:39 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 01 Jul 2014 08:48:39 -0400 Subject: proxy_cache not serving file from edge server !! In-Reply-To: References: Message-ID: shahzaib1232 Wrote: ------------------------------------------------------- > Our caching method is :- > > client ----> origin ---> edge. > This is not going to work as expected; you need client ----> edge ---> origin, where the edge proxy-passes to the origin when the file is not in cache. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251374,251384#msg-251384 From lucas at slcoding.com Tue Jul 1 12:50:41 2014 From: lucas at slcoding.com (Lucas Rolff) Date: Tue, 01 Jul 2014 14:50:41 +0200 Subject: proxy_pass_header not working in 1.6.0 In-Reply-To: <6299885.7sneN0OoDe@vbart-workstation> References: <53B23FCB.50403@slcoding.com> <20140701121655.GE1849@mdounin.ru> <6299885.7sneN0OoDe@vbart-workstation> Message-ID: <53B2AEA1.1070702@slcoding.com> But if files were served from the backend, I would assume the $upstream_response_time variable in nginx would return something other than a dash in 1.4.4. Like this, using the log_format '"$request" $status $body_bytes_sent "$http_referer" "$http_user_agent" $request_time $upstream_response_time'; "GET /css/colors.css HTTP/1.1" 304 0 "http://viewabove.dk/?page_id=2" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/34.0.1847.131 Safari/537.36" 0.000 - Again, the config is exactly the same, same operating system, same permissions, same site, so it seems odd to me, especially because nothing has been listed in the change logs about this 'fix' - it was in earlier versions, and was actually served by nginx, even when it did fetch headers from the backend. Best regards, Lucas Rolff
> On Tuesday 01 July 2014 14:33:54 Lucas Rolff wrote:
>> Hmm, okay..
>>
>> Then I'll go back to an old buggy version of nginx which gives me the
>> possibility to use the headers from Backend!
>>
> [..]
>
> It doesn't do this either. Probably, it just has a different configuration
> or permissions, which result in try_files always failing, so all requests
> are served by your backend.
>
> wbr, Valentin V. Bartenev
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Tue Jul 1 12:50:57 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Jul 2014 16:50:57 +0400
Subject: proxy_pass_header not working in 1.6.0
In-Reply-To:
References: <53B23FCB.50403@slcoding.com> <53B27083.4040903@fearnothingproductions.net> <53B271B7.4070407@slcoding.com> <10328024.X5V255Nk3V@vbart-workstation> <53B294B5.9060209@slcoding.com> <20140701121655.GE1849@mdounin.ru>
Message-ID: <20140701125057.GG1849@mdounin.ru>

Hello!

On Tue, Jul 01, 2014 at 02:33:54PM +0200, Lucas Rolff wrote:

> Hmm, okay..
>
> Then I'll go back to an old buggy version of nginx which gives me the
> possibility to use the headers from Backend!

You don't need to go back (and I doubt it will help) - if you don't want
nginx to serve files directly, just don't configure it to do so.  Just
commenting out the location in question will do the trick.

It may also be a good idea to re-read the configuration you are using to
make sure you understand what it does.  It looks like most, if not all, of
your questions are the result of a misunderstanding of what's written in
your nginx.conf.

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Jul 1 13:20:04 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Jul 2014 17:20:04 +0400
Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited?
In-Reply-To: <058EC580-A754-4218-996F-A8177F8E9552@comcast.net>
References: <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> <20140701110159.GB1849@mdounin.ru> <058EC580-A754-4218-996F-A8177F8E9552@comcast.net>
Message-ID: <20140701132004.GI1849@mdounin.ru>

Hello!

On Tue, Jul 01, 2014 at 08:44:47AM -0400, Paul Schlie wrote:

> As it appears a downstream response is not cached until first
> completely read into a temp_file (which for a large file may
> require 100's if not 1,000's of MB to be transferred), there
> appears to be no "cache node" formed from which to "lock" or
> serve "stale" responses, and thereby until the first "cache node"
> is usably created, proxy_cache_lock has nothing to lock
> requests to?
>
> The code does not appear to be forming a "cache node" using the
> designated cache_key until the requested downstream element has
> completed transfer, as you've noted?

Your reading of the code is incorrect.

A node in shared memory is created at request start, and this is
enough for proxy_cache_lock to work.  On request completion, the
temporary file is placed into the cache directory, and the node is
updated to reflect that the cache file exists and can be used.
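For illustration, a minimal configuration exercising this behaviour might
look like the following (the cache path, zone name, and backend address
are placeholders):

    proxy_cache_path /var/cache/nginx keys_zone=one:10m;

    server {
        location / {
            proxy_pass http://127.0.0.1:8080;

            proxy_cache one;
            proxy_cache_key $uri;

            # only one request at a time may populate a new cache
            # node; others wait, up to proxy_cache_lock_timeout
            proxy_cache_lock on;
            proxy_cache_lock_timeout 5s;

            # while a cached response is being updated, serve the
            # stale one instead of opening parallel upstream requests
            proxy_cache_use_stale updating;
        }
    }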
--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Jul 1 13:50:26 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Jul 2014 17:50:26 +0400
Subject: SSL slow on nginx
In-Reply-To: <748a8ef6922a13fa57486297d729fdf2.NginxMailingListEnglish@forum.nginx.org>
References: <20140630224006.GJ1849@mdounin.ru> <748a8ef6922a13fa57486297d729fdf2.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140701135026.GK1849@mdounin.ru>

Hello!

On Tue, Jul 01, 2014 at 03:10:07AM -0400, khav wrote:

> Thanks Maxim and GreenGecko for the insights
>
> The worker process does match my number of cpu cores (running on 8 cores
> atm)

Good.  It may also be a good idea to make sure you don't have multi_accept
enabled, just in case.

> How can i know the number of handshakes per seconds occurring on the
> server

First of all, count the number of connections per second (and requests per
second) - it should be trivial, and may be extracted even with the nginx
stub_status module.  I would generally recommend using logs though.  With
logs, you should also be able to count the number of uncached handshakes -
by using the $ssl_session_reused variable and the $connection_requests
one.  See here:

http://nginx.org/r/$ssl_session_reused
http://nginx.org/r/$connection_requests
http://nginx.org/r/log_format

> The openssl speed result have been posted on http://pastebin.com/hNeVhJfa
> for readability

So, basically, your server is able to do about 800 plain RSA handshakes
per second per core, 6400 handshakes total.  But as previously noted,
things can be very much worse with DH ciphers, especially if you are using
2048 bit dhparams (or larger).

> If you can suggest "faster ciphers" with same level of compatibility , i
> would be awesome

It may be a good idea to disable DH regardless of the level of
compatibility.  It's just too slow.

> Will a faster cpu actually solve the issue ?
> My cpu load never reached a value > 0.50 as far as i know and average is
> like 0.30

You mean - 50% CPU usage across all CPUs?  That looks high enough, though
not critical.  But it may be a good idea to look into per-CPU stats, as
well as per-process CPU usage.

Note well, CPU is a bottleneck I assumed based on a few external tests.
It may not be the CPU, but, e.g., a packet loss somewhere.  And, as I
already said, the numbers shown by Pingdom are close to the theoretical
minimum, and I don't think there is much room for improvement.  The one
extra RTT probably deserves investigation, but I can't say it's an "issue"
- it might even be legitimate.

--
Maxim Dounin
http://nginx.org/

From schlie at comcast.net Tue Jul 1 14:15:47 2014
From: schlie at comcast.net (Paul Schlie)
Date: Tue, 1 Jul 2014 10:15:47 -0400
Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited?
In-Reply-To: <20140701132004.GI1849@mdounin.ru>
References: <4497FBEF-4DF5-43BD-A416-68C73E14E6C8@comcast.net> <20140625003002.GW1849@mdounin.ru> <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> <20140701110159.GB1849@mdounin.ru> <058EC580-A754-4218-996F-A8177F8E9552@comcast.net> <20140701132004.GI1849@mdounin.ru>
Message-ID: <632BE209-F6C5-4DA6-A078-C17878ED784B@comcast.net>

Then how could multiple streams and corresponding temp_files ever be
created upon successive requests for the same $uri with "proxy_cache_key
$uri" and "proxy_cache_lock on", if all subsequent requests are locked to
the same cache_node created by the first request even prior to its
completion?

You've previously noted:

> In theory, cache code can be improved (compared to what we
> currently have) to introduce sending of a response being loaded
> into a cache to multiple clients.  I.e., stop waiting for a cache
> lock once we've got the response headers, and stream the response
> body being loaded to all clients waiting for it.  This should/can
> help when loading large files into a cache, when waiting with
> proxy_cache_lock for a complete response isn't cheap.  In
> practice, introducing such code isn't cheap either, and it's not
> about using other names for temporary files.

That is what I apparently, incorrectly, understood proxy_cache_lock to
actually do. So if not the above, what does proxy_cache_lock actually do
upon receipt of subsequent requests for the same $uri?

On Jul 1, 2014, at 9:20 AM, Maxim Dounin wrote:

> Hello!
>
> On Tue, Jul 01, 2014 at 08:44:47AM -0400, Paul Schlie wrote:
>
>> As it appears a downstream response is not cached until first
>> completely read into a temp_file (which for a large file may
>> require 100's if not 1,000's of MB to be transferred), there
>> appears to be no "cache node" formed from which to "lock" or
>> serve "stale" responses, and thereby until the first "cache node"
>> is usably created, proxy_cache_lock has nothing to lock
>> requests to?
>>
>> The code does not appear to be forming a "cache node" using the
>> designated cache_key until the requested downstream element has
>> completed transfer, as you've noted?
>
> Your reading of the code is incorrect.
>
> A node in shared memory is created at request start, and this is
> enough for proxy_cache_lock to work.  On request completion, the
> temporary file is placed into the cache directory, and the node is
> updated to reflect that the cache file exists and can be used.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Tue Jul 1 15:00:39 2014
From: nginx-forum at nginx.us (khav)
Date: Tue, 01 Jul 2014 11:00:39 -0400
Subject: SSL slow on nginx
In-Reply-To: <20140701135026.GK1849@mdounin.ru>
References: <20140701135026.GK1849@mdounin.ru>
Message-ID: <9a0394de2571788d94ff5510277534f6.NginxMailingListEnglish@forum.nginx.org>

I am currently using 1024 bit dhparams for maximum compatibility.

Here is my ssllabs report:
https://www.ssllabs.com/ssltest/analyze.html?d=filterbypass.me

If I remove DH from my cipher suites, will the handshake simulation still
be a success for all browsers listed in the ssllabs report above?

What is the best cipher suite, in your view, that is both fast and has
maximum compatibility?

Thanks again

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251277,251396#msg-251396

From arut at nginx.com Tue Jul 1 16:28:07 2014
From: arut at nginx.com (Roman Arutyunyan)
Date: Tue, 1 Jul 2014 20:28:07 +0400
Subject: dav and dav_ext, mp4 module, PROPFIND not working for files
In-Reply-To: <98631735e84d56149c25e66bb20b29b0.NginxMailingListEnglish@forum.nginx.org>
References: <98631735e84d56149c25e66bb20b29b0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <447211D5-78D0-4A61-8B0A-DFD03CADBD39@nginx.com>

On 01 Jul 2014, at 09:35, audvare wrote:

> Roman Arutyunyan Wrote:
> -------------------------------------------------------
>>
>> Currently nginx does not seem to be able to do what you want.  If
>> you're ready to patch the source, here's the patch fixing the issue.
>>
>> diff -r 0dd77ef9f114 src/http/modules/ngx_http_mp4_module.c
>> --- a/src/http/modules/ngx_http_mp4_module.c   Fri Jun 27 13:06:09 2014 +0400
>> +++ b/src/http/modules/ngx_http_mp4_module.c   Mon Jun 30 19:10:59 2014 +0400
>> @@ -431,7 +431,7 @@ ngx_http_mp4_handler(ngx_http_request_t
>>      ngx_http_core_loc_conf_t  *clcf;
>>
>>      if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) {
>> -        return NGX_HTTP_NOT_ALLOWED;
>> +        return NGX_DECLINED;
>>      }
>>
>>      if (r->uri.data[r->uri.len - 1] == '/') {
>
> Thanks. This works well.
>
> < HTTP/1.1 207 Multi-Status
> /video/avgn/t_screwattack_avgn_bugsbcc_901_gt.mp4
> ...
>
> Is there any chance this will make it into upstream so I don't have to
> keep on patching?
>
> Not that I mind that much, because with Gentoo and user patches it is
> extremely easy, but I would of course be concerned that the code may
> change drastically such that the patch will stop working.

Committing this into upstream is not planned.

From mdounin at mdounin.ru Tue Jul 1 16:40:01 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 1 Jul 2014 20:40:01 +0400
Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited?
In-Reply-To: <632BE209-F6C5-4DA6-A078-C17878ED784B@comcast.net>
References: <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> <20140701110159.GB1849@mdounin.ru> <058EC580-A754-4218-996F-A8177F8E9552@comcast.net> <20140701132004.GI1849@mdounin.ru> <632BE209-F6C5-4DA6-A078-C17878ED784B@comcast.net>
Message-ID: <20140701164001.GQ1849@mdounin.ru>

Hello!
On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:

> Then how could multiple streams and corresponding temp_files ever be
> created upon successive requests for the same $uri with "proxy_cache_key
> $uri" and "proxy_cache_lock on", if all subsequent requests are locked to
> the same cache_node created by the first request even prior to its
> completion?

Quoting documentation, http://nginx.org/r/proxy_cache_lock:

: When enabled, only one request at a time will be allowed to
: populate a new cache element identified according to the
: proxy_cache_key directive by passing a request to a proxied
: server. Other requests of the same cache element will either wait
: for a response to appear in the cache or the cache lock for this
: element to be released, up to the time set by the
: proxy_cache_lock_timeout directive.

So, there are at least two cases "prior to its completion" which are
explicitly documented:

1. If the cache lock is released - this happens, e.g., if the response
isn't cacheable according to the response headers.

2. If proxy_cache_lock_timeout expires.

--
Maxim Dounin
http://nginx.org/

From eswenson at intertrust.com Tue Jul 1 17:58:33 2014
From: eswenson at intertrust.com (Eric Swenson)
Date: Tue, 1 Jul 2014 17:58:33 +0000
Subject: No CORS Workaround - SSL Proxy
In-Reply-To: <20140622143216.GS1849@mdounin.ru>
References: <20140620224626.GR1849@mdounin.ru> <20140622143216.GS1849@mdounin.ru>
Message-ID:

Hello Maxim,

On 6/22/14, 7:32 AM, "Maxim Dounin" wrote:

>If there is nothing in error logs, and you are getting 502 errors,
>then there are two options:
>
>1. The 502 errors are returned by your backend, not generated by
>   nginx.
>
>2. You did something wrong while configuring error logs and/or you
>   are looking into a wrong log.
>
>In this particular case, I would suggest the latter.

I've verified that my error logs are configured fine - I do get errors
reported in my configured error log - but nothing at the time that nginx
returns 502 errors to the client.

I've checked the upstream server's logs, even when configured with debug
logging, and never see any requests making it to the upstream server when
nginx returns a 502 to the client.

If the issue were with the upstream server, why is it that simply
restarting nginx causes everything to proceed normally? I never have to
touch the upstream server (which, by the way, is serving other requests
successfully from other proxies at the same time as the nginx proxy that
returns 502s is doing so).

- Eric

From schlie at comcast.net Tue Jul 1 20:11:58 2014
From: schlie at comcast.net (Paul Schlie)
Date: Tue, 1 Jul 2014 16:11:58 -0400
Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited?
In-Reply-To: <20140701164001.GQ1849@mdounin.ru>
References: <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> <20140701110159.GB1849@mdounin.ru> <058EC580-A754-4218-996F-A8177F8E9552@comcast.net> <20140701132004.GI1849@mdounin.ru> <632BE209-F6C5-4DA6-A078-C17878ED784B@comcast.net> <20140701164001.GQ1849@mdounin.ru>
Message-ID: <8D080B12-6335-4156-B0E6-248D9E575DF0@comcast.net>

Thank you for your patience.
I mistakenly thought the 5 second default value associated with
proxy_cache_lock_timeout was the maximum delay allowed between successive
responses from the backend server in satisfaction of the reverse proxy
request being cached prior to the cache lock being released, not the
maximum delay for the response to be completely received and cached, as it
appears to actually be.

Now that I understand, please consider setting the default value much
higher, or more ideally set in proportion to the size of the item being
cached and possibly some measure of the activity of the stream; as in most
circumstances, redundant streams should never be opened, as it will tend
only to make matters worse.

Thank you.

On Jul 1, 2014, at 12:40 PM, Maxim Dounin wrote:

> On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote:
>> Then how could multiple streams and corresponding temp_files ever be
>> created upon successive requests for the same $uri with "proxy_cache_key
>> $uri" and "proxy_cache_lock on", if all subsequent requests are locked to
>> the same cache_node created by the first request even prior to its
>> completion?
>
> Quoting documentation, http://nginx.org/r/proxy_cache_lock:
>
> : When enabled, only one request at a time will be allowed to
> : populate a new cache element identified according to the
> : proxy_cache_key directive by passing a request to a proxied
> : server. Other requests of the same cache element will either wait
> : for a response to appear in the cache or the cache lock for this
> : element to be released, up to the time set by the
> : proxy_cache_lock_timeout directive.
>
> So, there are at least two cases "prior to its completion" which are
> explicitly documented:
>
> 1. If the cache lock is released - this happens, e.g., if the response
> isn't cacheable according to the response headers.
>
> 2. If proxy_cache_lock_timeout expires.
>
> --
> Maxim Dounin
> http://nginx.org/

From schlie at comcast.net Tue Jul 1 21:03:47 2014
From: schlie at comcast.net (Paul Schlie)
Date: Tue, 1 Jul 2014 17:03:47 -0400
Subject: How can the number of parallel/redundant open streams/temp_files be controlled/limited?
In-Reply-To: <8D080B12-6335-4156-B0E6-248D9E575DF0@comcast.net>
References: <1DCF2883-2016-4F93-A2F9-86C489C10EB4@comcast.net> <3BC79B6B-7799-4E64-A1CF-BC211B841ADF@comcast.net> <3C94B888-AD58-4F12-A4BF-973ACA342C81@comcast.net> <20140701013235.GX1849@mdounin.ru> <97511D6C-206C-4E3E-B5CC-66E4F985AC8F@comcast.net> <20140701110159.GB1849@mdounin.ru> <058EC580-A754-4218-996F-A8177F8E9552@comcast.net> <20140701132004.GI1849@mdounin.ru> <632BE209-F6C5-4DA6-A078-C17878ED784B@comcast.net> <20140701164001.GQ1849@mdounin.ru> <8D080B12-6335-4156-B0E6-248D9E575DF0@comcast.net>
Message-ID:

Lastly, is there any way to try to get proxy_store to work in combination
with proxy_cache, possibly by enabling the completed temp_file to be saved
as a proxy_store file within its uri logical path hierarchy, and the
cache_file descriptor aliased to it, or vice versa?

(As it's often nice to be able to view/access cached files within their
natural uri hierarchy, which is virtually impossible if they are stored
using their corresponding hashed names alone; and to not lose the benefit
of being able to lock multiple pending requests to the same cache_node
being fetched, so as to minimize otherwise redundant down-stream requests
prior to the file being cached.)

On Jul 1, 2014, at 4:11 PM, Paul Schlie wrote:

> Thank you for your patience.
> > I mistakenly thought the 5 second default value associated with proxy_cache_lock_timeout was the maximum delay allowed between successive responses from the backend server is satisfaction of the reverse proxy request being cached prior to the cache lock being released, not the maximum delay for the response to be completely received and cached as it appears to actually be. > > Now that I understand, please consider setting the default value much higher, or more ideally set in proportion to the size of the item being cached and possibly some measure of the activity of the stream; as in most circumstances, redundant streams should never be opened, as it will tend to only make matters worse. > > Thank you. > > On Jul 1, 2014, at 12:40 PM, Maxim Dounin wrote: >> On Tue, Jul 01, 2014 at 10:15:47AM -0400, Paul Schlie wrote: >>> Then how could multiple streams and corresponding temp_files >>> ever be created upon successive requests for the same $uri with >>> "proxy_cache_key $uri" and "proxy_cache_lock on"; if all >>> subsequent requests are locked to the same cache_node created by >>> the first request even prior to its completion? >> >> Quoting documentation, http://nginx.org/r/proxy_cache_lock: >> >> : When enabled, only one request at a time will be allowed to >> : populate a new cache element identified according to the >> : proxy_cache_key directive by passing a request to a proxied >> : server. Other requests of the same cache element will either wait >> : for a response to appear in the cache or the cache lock for this >> : element to be released, up to the time set by the >> : proxy_cache_lock_timeout directive. >> >> So, there are at least two cases "prior to its completion" which >> are explicitly documented: >> >> 1. If the cache lock is released - this happens, e.g., if the >> response isn't cacheable according to the response headers. >> >> 2. If proxy_cache_lock_timeout expires. >> >> -- >> Maxim Dounin >> http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at nginx.us Tue Jul 1 22:39:37 2014 From: nginx-forum at nginx.us (gp) Date: Tue, 01 Jul 2014 18:39:37 -0400 Subject: peer closed connection in SSL handshake while SSL handshaking Message-ID: <44f8916b083cb5f93180eb84fefcb8de.NginxMailingListEnglish@forum.nginx.org> Hello, I am seeing an odd thing occur in the error logs. We are developing an API, and when our mobile devices first hit the nginx server after waking up, the mobile device is rejecting the ssl cert. In the logs, we see that the ssl handshake is being closed. [info] 1450#0: *16 peer closed connection in SSL handshake while SSL handshaking, client: IP, server: 0.0.0.0:443 Oddly enough, if we hit the API again (or any subsequent time before the device is turned off), this problem does not reoccur - only on the first access. The sites are configured pretty vanilla right now: server_name SERVERNAME; listen 443; ssl on; ssl_certificate ssl/newRSA.crt; ssl_certificate_key ssl/newRSA.key; root /www; index index.html index.htm index.php; If anybody has any pointers, that would be great. 
Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251423,251423#msg-251423

From nginx-forum at nginx.us Tue Jul 1 22:40:31 2014
From: nginx-forum at nginx.us (gp)
Date: Tue, 01 Jul 2014 18:40:31 -0400
Subject: peer closed connection in SSL handshake while SSL handshaking
In-Reply-To: <44f8916b083cb5f93180eb84fefcb8de.NginxMailingListEnglish@forum.nginx.org>
References: <44f8916b083cb5f93180eb84fefcb8de.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <3f50aeeb338e0a2f72eebe0ab823d569.NginxMailingListEnglish@forum.nginx.org>

I forgot to mention that this is running on Ubuntu 12.04 LTS, with nginx
version: nginx/1.6.0.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251423,251424#msg-251424

From nginx-forum at nginx.us Tue Jul 1 23:45:58 2014
From: nginx-forum at nginx.us (jdewald)
Date: Tue, 01 Jul 2014 19:45:58 -0400
Subject: changes to ngx.arg[1] not getting reflected in final response
In-Reply-To: <05805c32b1bb5742ccd2b2eec92c5363.NginxMailingListEnglish@forum.nginx.org>
References: <05805c32b1bb5742ccd2b2eec92c5363.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <2d5b9478041210054e9a8e76c270401e.NginxMailingListEnglish@forum.nginx.org>

vamshi Wrote:
-------------------------------------------------------
> header_filter_by_lua '
>     ngx.header.content_length = nil
>     ngx.header.set_cookie = nil
>
>     if ngx.header.location then
>         local _location = ngx.header.location
>         _location = ngx.escape_uri(_location)
>         _location = "http://10.0.9.44/?_redir_=" .. _location
>         ngx.header.location = _location
>     end
> ';
>
> body_filter_by_lua '
>
>     local escUri = function (m)
>         local _esc = "href=\\"http://10.0.9.44/?_redir_=" .. ngx.escape_uri(m[1]) .. "\\""
>         print(_esc)
>         return _esc
>     end
>
>     local chunk, eof = ngx.arg[1], ngx.arg[2]
>     local buffered = ngx.ctx.buffered
>     if not buffered then
>         buffered = {}
>         ngx.ctx.buffered = buffered
>     end
>
>     if chunk ~= "" then
>         buffered[#buffered + 1] = chunk
>         ngx.arg[1] = nil
>     end
>
>     if eof then
>         local whole = table.concat(buffered)
>         ngx.ctx.buffered = nil
>         local newStr, n, err = ngx.re.gsub(whole, "href=\\"(.*)\\"", escUri, "i")
>         ngx.arg[1] = whole
>         print(whole)
>     end
> ';
> ...
>
> As you can see, print(_esc) show that the URL was successfully
> URLencoded. Yet, the print(whole) line does not reflect the gsub()
>
> What could be issue here ?
>
> -Vamshi

gsub is going to return the result of the substitution, not do it in
place. You should be outputting/assigning newStr, not whole - i.e.
ngx.arg[1] = newStr.

Cheers,
Josh

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251248,251425#msg-251425

From kurt at x64architecture.com Wed Jul 2 00:29:49 2014
From: kurt at x64architecture.com (Kurt Cancemi)
Date: Tue, 1 Jul 2014 20:29:49 -0400
Subject: peer closed connection in SSL handshake while SSL handshaking
In-Reply-To: <3f50aeeb338e0a2f72eebe0ab823d569.NginxMailingListEnglish@forum.nginx.org>
References: <44f8916b083cb5f93180eb84fefcb8de.NginxMailingListEnglish@forum.nginx.org> <3f50aeeb338e0a2f72eebe0ab823d569.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Hello,

Could your issue be caused by this bug? It looks like Ubuntu is not going
to fix this bug in precise. Also see here: in the previous link the person
has the same problem and resolved it by downgrading openssl.

There are a few solutions if you think this is your problem. (This is a
bug in OpenSSL that has been fixed in later versions.)

1. Upgrade your system openssl library. (I wouldn't recommend doing that
though, as it may break other packages.)
2. Compile nginx with the latest openssl library. (The negative is that
you have to maintain your own packages and monitor for openssl security
vulnerabilities.)

3. Upgrade your Linux distribution to 14.04 LTS.

---
Kurt Cancemi
http://www.getwnmp.org

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Wed Jul 2 10:37:17 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Jul 2014 14:37:17 +0400
Subject: No CORS Workaround - SSL Proxy
In-Reply-To:
References: <20140620224626.GR1849@mdounin.ru> <20140622143216.GS1849@mdounin.ru>
Message-ID: <20140702103717.GU1849@mdounin.ru>

Hello!

On Tue, Jul 01, 2014 at 05:58:33PM +0000, Eric Swenson wrote:

> On 6/22/14, 7:32 AM, "Maxim Dounin" wrote:
>
> >If there is nothing in error logs, and you are getting 502 errors,
> >then there are two options:
> >
> >1. The 502 errors are returned by your backend, not generated by
> >   nginx.
> >
> >2. You did something wrong while configuring error logs and/or you
> >   are looking into a wrong log.
> >
> >In this particular case, I would suggest the latter.
>
> I've verified that my error logs are configured fine - I do get errors
> reported in my configured error log - but nothing at the time that nginx
> returns 502 errors to the client.

As nginx has lots of options to control error logging (logging level based
filtering, as well as per-server and per-location error logs), it's not
enough to check that some errors are logged to make sure logging is
configured properly.

The simplest way to configure logs properly is to comment out all
error_log directives, and add error_log at the global level, with the
desired logging level.  In this case, logging at "error" level should be
enough, i.e., the following should do the trick:

error_log /path/to/log error;

(At the global level, i.e., at the top of your nginx.conf file.  And don't
forget to comment out all other error_log directives.)

> I've checked the upstream server's logs, even when configured with debug
> logging, and never see any requests making it to the upstream server when
> nginx returns a 502 to the client.
>
> If the issue were with the upstream server, why is it that simply
> restarting nginx causes everything to proceed normally? I never have to
> touch the upstream server (which, by the way, is serving other requests
> successfully from other proxies at the same time as the nginx proxy that
> returns 502s is doing so).

The fact that restarting nginx fixes things indicates that the problem is
likely caused by connections already established by nginx to the upstream
server in question.  And restarting nginx fixes things by closing these
connections.

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Wed Jul 2 11:26:26 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 2 Jul 2014 15:26:26 +0400
Subject: No CORS Workaround - SSL Proxy
In-Reply-To: <20140702103717.GU1849@mdounin.ru>
References: <20140620224626.GR1849@mdounin.ru> <20140622143216.GS1849@mdounin.ru> <20140702103717.GU1849@mdounin.ru>
Message-ID: <20140702112626.GW1849@mdounin.ru>

Hello!

On Wed, Jul 02, 2014 at 02:37:17PM +0400, Maxim Dounin wrote:

> Hello!
>
> On Tue, Jul 01, 2014 at 05:58:33PM +0000, Eric Swenson wrote:
>
> > On 6/22/14, 7:32 AM, "Maxim Dounin" wrote:
> >
> > >If there is nothing in error logs, and you are getting 502 errors,
> > >then there are two options:
> > >
> > >1. The 502 errors are returned by your backend, not generated by
> > >   nginx.
> > >
> > >2. You did something wrong while configuring error logs and/or you
> > >   are looking into a wrong log.
> > >
> > >In this particular case, I would suggest the latter.
> >
> > I've verified that my error logs are configured fine - I do get errors
> > reported in my configured error log - but nothing at the time that nginx
> > returns 502 errors to the client.
>
> As nginx has lots of options to control error logging (logging level based
> filtering, as well as per-server and per-location error logs), it's not
> enough to check that some errors are logged to make sure logging is
> configured properly.
>
> The simplest way to configure logs properly is to comment out all
> error_log directives, and add error_log at the global level, with the
> desired logging level.  In this case, logging at "error" level should be
> enough, i.e., the following should do the trick:
>
> error_log /path/to/log error;
>
> (At the global level, i.e., at the top of your nginx.conf file.  And don't
> forget to comment out all other error_log directives.)

Correction: in case of proxy to https, there is one case when the error is
reported at "info" level ("peer closed connection in SSL handshake"), so
"info" level logging is needed to see it.

(This looks like a bug and should be fixed: when talking to upstream
servers, the proper logging level for SSL handshake errors is "error".)

> > I've checked the upstream server's logs, even when configured with debug
> > logging, and never see any requests making it to the upstream server when
> > nginx returns a 502 to the client.
> >
> > If the issue were with the upstream server, why is it that simply
> > restarting nginx causes everything to proceed normally? I never have to
> > touch the upstream server (which, by the way, is serving other requests
> > successfully from other proxies at the same time as the nginx proxy that
> > returns 502s is doing so).
>
> The fact that restarting nginx fixes things indicates that the problem is
> likely caused by connections already established by nginx to the upstream
> server in question.  And restarting nginx fixes things by closing these
> connections.

If the problem is indeed during the SSL handshake, it may be caused by a
cached session the peer starts to dislike for some reason.  Switching off
"proxy_ssl_session_reuse" could help.

See also this answer, which may be related:

http://mailman.nginx.org/pipermail/nginx/2014-July/044329.html

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed Jul 2 13:02:59 2014
From: nginx-forum at nginx.us (gp)
Date: Wed, 02 Jul 2014 09:02:59 -0400
Subject: peer closed connection in SSL handshake while SSL handshaking
In-Reply-To:
References:
Message-ID: <8ec1890193f9fbf75ef90a60706aface.NginxMailingListEnglish@forum.nginx.org>

Thanks for the reply. I realized this morning that this server is actually
running Debian Stable, not Ubuntu. I don't think that I can downgrade the
openssl package, because that would open me to heartbleed vulnerabilities.
I will try standing up a dev server on Debian Testing to see if the newer
openssl package fixes this issue.

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251423,251437#msg-251437

From nginx-forum at nginx.us Thu Jul 3 03:26:54 2014
From: nginx-forum at nginx.us (eiji-gravion)
Date: Wed, 02 Jul 2014 23:26:54 -0400
Subject: nginx caching headers
Message-ID: <9c7e1dc2a7196663ea2a9d5f776ed8a6.NginxMailingListEnglish@forum.nginx.org>

Hello,

Are there any specific reasons why nginx sends both the ETag and
Last-Modified headers?
From my understanding, this is a bit redundant for most situations.

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251441,251441#msg-251441

From nginx-forum at nginx.us Thu Jul 3 07:41:11 2014
From: nginx-forum at nginx.us (gwilym)
Date: Thu, 03 Jul 2014 03:41:11 -0400
Subject: URI escaping for X-Accel-Redirect and proxy_pass in 1.4.7 and 1.6.0
In-Reply-To:
References:
Message-ID: <639da6b85203c70f47dfe89eb6b4b96c.NginxMailingListEnglish@forum.nginx.org>

Jonathan Matthews Wrote:
-------------------------------------------------------
> On 17 June 2014 07:49, gwilym wrote:
> > The workaround is to _double_ encode so as to send back
> > "image%2520with%2520spaces.jpg" to Nginx but we can't roll this out
> until
> > Nginx 1.6 because it breaks 1.4... but we can't roll out 1.6 until
> the code
> > is there.
>
> I don't have a nice fix for you I'm afraid! However, as a way to get
> out of your chicken-and-egg upgrade problem, could you pass a static
> header containing the nginx version to your backend, and get it to
> switch its X-Accel-Redirect response based on this value?
>
> J

This was the only clean, cross-version solution I could think of too, and
the one I would recommend to others. We didn't end up going with it,
though, as we had to fast-track the 1.6 rollout for other reasons, so we
took a brief window of errors on the chin and cleaned them up later.

Thanks, though.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250909,251442#msg-251442

From mdounin at mdounin.ru Thu Jul 3 09:45:18 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 3 Jul 2014 13:45:18 +0400
Subject: nginx caching headers
In-Reply-To: <9c7e1dc2a7196663ea2a9d5f776ed8a6.NginxMailingListEnglish@forum.nginx.org>
References: <9c7e1dc2a7196663ea2a9d5f776ed8a6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140703094518.GG1849@mdounin.ru>

Hello!

On Wed, Jul 02, 2014 at 11:26:54PM -0400, eiji-gravion wrote:

> Are there any specific reasons why nginx sends both the ETag and
> Last-Modified headers? From my understanding, this is a bit redundant
> for most situations.

This is required to support both clients using Last-Modified as a cache
validator and clients using ETag as a cache validator.  As nginx doesn't
know what a client will use, it returns both.  (And this is identical to
what other servers out there do.)

Originally ETag support was added to support download resumption in IE9,
which needs strong entity tags to be able to resume downloads using range
requests.  And obviously there are lots of clients which don't support
ETag.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Fri Jul 4 01:31:04 2014
From: nginx-forum at nginx.us (dandv)
Date: Thu, 03 Jul 2014 21:31:04 -0400
Subject: Forward proxy preserving the domain
Message-ID:

Basically, I want to set up a proxy running on example.mydomain.com that
will take any URI, retrieve `example.com$request_uri`, and pass it on to
the client, preserving my example.mydomain.com domain for the client.

So far I have this config:

server {
    server_name example.mydomain.com;
    location / {
        resolver 8.8.8.8;  # why exactly is this necessary?
        proxy_pass http://example.com$request_uri;
    }
}

It works, but what happens is that nginx returns a `HTTP/1.1 301 Moved
Permanently` response, with `Location` set to
http://example.com$request_uri. How is this different from the rewrite
directive, or from `return 301 http://example.com$request_uri`?

I want instead a straight response of the actual contents at
http://example.com$request_uri.
How can I do that?

PS: what in the world are the formatting codes for this forum?
[code]...[/code] doesn't work; not even [b]...[/b] does. Yeah, sorry for
not indenting my code above - no idea how to do it.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251449,251449#msg-251449

From aflexzor at gmail.com Fri Jul 4 03:51:44 2014
From: aflexzor at gmail.com (aflexzor)
Date: Thu, 3 Jul 2014 21:51:44 -0600
Subject: limit_conn_zone applied to Proxy_Pass (outgoing requests)
Message-ID:

Hello!

I have an nginx reverse proxy with a series of filters against DDoS
attacks.

As a last resort I need to make sure that I NEVER send more than x
concurrent requests to the backend server (proxy_pass).

Is it possible to apply limit_conn_zone to outgoing requests? If so, could
I have an example? Thanks.

Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From aflexzor at gmail.com Fri Jul 4 04:09:44 2014
From: aflexzor at gmail.com (aflexzor)
Date: Thu, 3 Jul 2014 22:09:44 -0600
Subject: alternative to fail2ban as complement to nginx
Message-ID:

Hello guys,

I am using several mechanisms to rate limit abusers... but I would like to
ban those that do it repeatedly. Anybody use anything better than
fail2ban? I am worried that it will hog down the server if it gets a big
amount of logs.

Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From rpaprocki at fearnothingproductions.net Fri Jul 4 07:11:27 2014
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Fri, 04 Jul 2014 00:11:27 -0700
Subject: limit_conn_zone applied to Proxy_Pass (outgoing requests)
In-Reply-To:
References:
Message-ID: <53B6539F.8090104@fearnothingproductions.net>

Any reason this needs to be applied specifically to /outgoing/
connections? Is the default behavior applied to the proxy not sufficient?

On 7/3/2014 20:51, aflexzor wrote:
> Hello!
>
> I have an nginx reverse proxy with a series of filters against DDoS
> attacks.
>
> As a last resort I need to make sure that I NEVER send more than x
> concurrent requests to the backend server (proxy_pass).
>
> Is it possible to apply limit_conn_zone to outgoing requests? If so,
> could I have an example? Thanks.
>
> Alex

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vbart at nginx.com Fri Jul 4 09:19:41 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Fri, 04 Jul 2014 13:19:41 +0400
Subject: limit_conn_zone applied to Proxy_Pass (outgoing requests)
In-Reply-To:
References:
Message-ID: <2062874.fu9iojs06E@vbart-laptop>

On Thursday 03 July 2014 21:51:44 aflexzor wrote:
> Hello!
>
> I have an nginx reverse proxy with a series of filters against DDoS
> attacks.
>
> As a last resort I need to make sure that I NEVER send more than x
> concurrent requests to the backend server (proxy_pass).
>
> Is it possible to apply limit_conn_zone to outgoing requests? If so,
> could I have an example? Thanks.
>

Look at the "max_conns" parameter of the "server" directive in the
"upstream" block:

http://nginx.org/en/docs/http/ngx_http_upstream_module.html#server

Note that it's part of the commercial version of nginx.

wbr, Valentin V. Bartenev
From nginx-forum at nginx.us Fri Jul 4 13:48:04 2014
From: nginx-forum at nginx.us (ura)
Date: Fri, 04 Jul 2014 09:48:04 -0400
Subject: difficulty adding headers
In-Reply-To: <20140701003345.GS1849@mdounin.ru>
References: <20140701003345.GS1849@mdounin.ru>
Message-ID:

The config I am using is inherited from the designers of the elgg platform
and I have explored it enough to know most of what it is doing. Perhaps I
need to replace the location block that targets .php files with one that
explicitly lists all the possible locations of php files instead... which
would leave the possibility open for the new 'stream' location block I am
using here.

I was using the internal keyword since I am wanting to do what I can to
ensure that the video files are only available to site visitors once they
have passed a security process (handled by php and sql). The site here is
a social network and all media items have an associated privacy level - so
the files cannot all be public, which is the origin of these issues. The
files are held outside of the nginx site root directory for this reason.

> Streaming videos with php is silly, so recommended approach is
> "Don't do that".

How else can a video be served from a PHP social networking app, where the
file needs to be 'behind' privacy/security checks? Wouldn't there need to
be some kind of PHP processing just to set up the externally accessible
path for the video file (after the viewer's credentials have been
checked)? To be clear, I am not saying that the final path for the video
is a .php file.. the final path would be filename.mp4.. however, the php
location block is triggered along the processing path anyway - as the
config is.

(I am using the web interface for forum.nginx.org.. so this is a mailing
list and also a forum). ;)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251236,251467#msg-251467

From nginx-forum at nginx.us Fri Jul 4 14:55:38 2014
From: nginx-forum at nginx.us (rmajasol)
Date: Fri, 04 Jul 2014 10:55:38 -0400
Subject: Excluding some URIs from SSL traffic
Message-ID:

Hi, I would like to exclude some given URLs from my nginx SSL traffic.

This is my current nginx.conf: http://p.ngx.cc/fa

Thanks

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251468,251468#msg-251468

From mdounin at mdounin.ru Fri Jul 4 16:35:29 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 4 Jul 2014 20:35:29 +0400
Subject: difficulty adding headers
In-Reply-To:
References: <20140701003345.GS1849@mdounin.ru>
Message-ID: <20140704163529.GT1849@mdounin.ru>

Hello!

On Fri, Jul 04, 2014 at 09:48:04AM -0400, ura wrote:

> The config I am using is inherited from the designers of the elgg platform
> and I have explored it enough to know most of what it is doing. Perhaps I
> need to replace the location block that targets .php files with one that
> explicitly lists all the possible locations of php files instead... which
> would leave the possibility open for the new 'stream' location block I am
> using here.
>
> I was using the internal keyword since I am wanting to do what I can to
> ensure that the video files are only available to site visitors once they
> have passed a security process (handled by php and sql). The site here is
> a social network and all media items have an associated privacy level - so
> the files cannot all be public, which is the origin of these issues. The
> files are held outside of the nginx site root directory for this reason.

Sure.
The problem is that you try to "navigate to www.mysite.tld/stream/blah.mp4",
and it won't work if the location in question is internal.

> > Streaming videos with php is silly, so recommended approach is
> > "Don't do that".
>
> How else can a video be served from a PHP social networking app, where the
> file needs to be 'behind' privacy/security checks? Wouldn't there need to
> be some kind of PHP processing just to set up the externally accessible
> path for the video file (after the viewer's credentials have been
> checked)? To be clear, I am not saying that the final path for the video
> is a .php file.. the final path would be filename.mp4.. however, the php
> location block is triggered along the processing path anyway - as the
> config is.

There is more than one way to do security checks.  The most obvious ones
are to use either X-Accel-Redirect, or auth_request, or secure_link.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Sat Jul 5 16:29:18 2014
From: nginx-forum at nginx.us (Mayhem30)
Date: Sat, 05 Jul 2014 12:29:18 -0400
Subject: Setting Allow / Deny Rules + Keep processing other location rules?
Message-ID:

I'm having a heck of a time setting allow / deny rules for certain
directories and files + getting Nginx to keep handling other location
rules.

For example, I want to only allow my IP address access to the WordPress
login page:

location ~* wp-login.php {
    allow 22.131.12.14;
    deny all;
}

# Pass off php requests to Apache
location ~* \.php$ {
    include /usr/local/etc/nginx/proxypass.conf;
    proxy_pass http://127.0.0.1:80;
}

When I visit the WordPress login page, all I see is the raw PHP code.

How can I set up the first location block to allow me to add allow / deny
rules, charset, headers, etc + have PHP files still processed by the
second location block?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251473,251473#msg-251473

From mdounin at mdounin.ru Sat Jul 5 20:43:55 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 6 Jul 2014 00:43:55 +0400
Subject: Setting Allow / Deny Rules + Keep processing other location rules?
In-Reply-To:
References:
Message-ID: <20140705204355.GX1849@mdounin.ru>

Hello!

On Sat, Jul 05, 2014 at 12:29:18PM -0400, Mayhem30 wrote:

> I'm having a heck of a time setting allow / deny rules for certain
> directories and files + getting Nginx to keep handling other location
> rules.
>
> For example, I want to only allow my IP address access to the WordPress
> login page:
>
> location ~* wp-login.php {
>     allow 22.131.12.14;
>     deny all;
> }
>
> # Pass off php requests to Apache
> location ~* \.php$ {
>     include /usr/local/etc/nginx/proxypass.conf;
>     proxy_pass http://127.0.0.1:80;
> }
>
> When I visit the WordPress login page, all I see is the raw PHP code.
>
> How can I set up the first location block to allow me to add allow / deny
> rules, charset, headers, etc + have PHP files still processed by the
> second location block?

To process a request, nginx will select only one location block.
Therefore, the only way is to specify the full configuration in a single
location block.
E.g., in this particular case you have to write something like this:

location ~* wp-login.php {
    allow 22.131.12.14;
    deny all;

    include /usr/local/etc/nginx/proxypass.conf;
    proxy_pass http://127.0.0.1:80;
}

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Sun Jul 6 02:47:43 2014
From: nginx-forum at nginx.us (justink101)
Date: Sat, 05 Jul 2014 22:47:43 -0400
Subject: Send 502 when all php-fpm workers are in use
Message-ID: <6da747b34f719a36d358a9b47fdbcef8.NginxMailingListEnglish@forum.nginx.org>

I have a php-fpm pool of workers which is 6. There are long running
requests being sent, so I have the following fastcgi directives set:

fastcgi_connect_timeout 15;
fastcgi_send_timeout 1200;
fastcgi_read_timeout 1200;

However right now, if the php-fpm pool of workers is full, a request waits
the full 20 minutes. I'd like requests to fail with a 502 status code if
the php-fpm pool of workers is full instead. This change should still
allow long running requests (max 20 minutes) though. I would have thought
that if the php-fpm pool workers are all being used, a request would time
out in 15 seconds according to fastcgi_connect_timeout, but this does not
seem to be the case.

Thanks for the help.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251476,251476#msg-251476

From mdounin at mdounin.ru Sun Jul 6 11:20:07 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 6 Jul 2014 15:20:07 +0400
Subject: Send 502 when all php-fpm workers are in use
In-Reply-To: <6da747b34f719a36d358a9b47fdbcef8.NginxMailingListEnglish@forum.nginx.org>
References: <6da747b34f719a36d358a9b47fdbcef8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140706112007.GY1849@mdounin.ru>

Hello!

On Sat, Jul 05, 2014 at 10:47:43PM -0400, justink101 wrote:

> I have a php-fpm pool of workers which is 6. There are long running
> requests being sent, so I have the following fastcgi directives set:
>
> fastcgi_connect_timeout 15;
> fastcgi_send_timeout 1200;
> fastcgi_read_timeout 1200;
>
> However right now, if the php-fpm pool of workers is full, a request waits
> the full 20 minutes. I'd like requests to fail with a 502 status code if
> the php-fpm pool of workers is full instead. This change should still
> allow long running requests (max 20 minutes) though. I would have thought
> that if the php-fpm pool workers are all being used, a request would time
> out in 15 seconds according to fastcgi_connect_timeout, but this does not
> seem to be the case.

From the nginx side, a connection in a backend's listen queue isn't
distinguishable from an accepted connection, hence fastcgi_connect_timeout
doesn't apply as long as the backend is reachable and its listen queue
isn't full.  (And fastcgi_send_timeout doesn't apply either if a request
is small enough to fit into the socket send buffer.)
To reduce the number of affected requests you may consider using a smaller
backlog in php-fpm, see here:

http://www.php.net/manual/en/install.fpm.configuration.php#listen-backlog

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Sun Jul 6 12:06:15 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sun, 06 Jul 2014 08:06:15 -0400
Subject: Send 502 when all php-fpm workers are in use
In-Reply-To: <6da747b34f719a36d358a9b47fdbcef8.NginxMailingListEnglish@forum.nginx.org>
References: <6da747b34f719a36d358a9b47fdbcef8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5520c26b0947108a57b71a42d87ca633.NginxMailingListEnglish@forum.nginx.org>

The only way around this would be some kind of counter keeping track of
what's available; if the max is reached, create a file which you test for
in the nginx config. Maybe Lua can do the counting part.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251476,251478#msg-251478

From nginx-forum at nginx.us Mon Jul 7 02:14:42 2014
From: nginx-forum at nginx.us (justink101)
Date: Sun, 06 Jul 2014 22:14:42 -0400
Subject: Send 502 when all php-fpm workers are in use
In-Reply-To: <20140706112007.GY1849@mdounin.ru>
References: <20140706112007.GY1849@mdounin.ru>
Message-ID: <8e105de3109e7af64b341c4ae46418ba.NginxMailingListEnglish@forum.nginx.org>

Maxim,

If I set the php-fpm pool listen.backlog to 0, will this accomplish what I
want? I.e., fill up the workers and, once all the workers are used, fail
requests.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251476,251488#msg-251488

From ron at tohuw.net Mon Jul 7 06:39:32 2014
From: ron at tohuw.net (Ron Scott-Adams)
Date: Mon, 7 Jul 2014 02:39:32 -0400
Subject: Hosting a web application from a subsite
Message-ID: <43D424DD-1A33-44E2-8358-0AA3AB7AAA53@tohuw.net>

I'm not having much luck trying to configure this site the way I want. I'm
modifying http://wiki.nginx.org/Piwik to suit a case in which it is served
out of a subsite location, e.g. example.com/stats.

I've created 2 configuration files. One is included outside the server
sections of the main site's configuration file: http://paste2.org/e8HcPkda

The second is included from the server 443 section of the site:
http://paste2.org/ts8j5VWK

I was attempting to deliver this via proxy_pass, but it appears I've
failed: the error log throws:

[error] 30196#0: *6289 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 185.47.241.122, server: tohuw.net, request: "GET /stats/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:8001", host: "tohuw.net"

From mdounin at mdounin.ru Mon Jul 7 11:56:31 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 7 Jul 2014 15:56:31 +0400
Subject: Send 502 when all php-fpm workers are in use
In-Reply-To: <8e105de3109e7af64b341c4ae46418ba.NginxMailingListEnglish@forum.nginx.org>
References: <20140706112007.GY1849@mdounin.ru> <8e105de3109e7af64b341c4ae46418ba.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140707115631.GB1849@mdounin.ru>

Hello!

On Sun, Jul 06, 2014 at 10:14:42PM -0400, justink101 wrote:

> Maxim,
>
> If I set the php-fpm pool listen.backlog to 0, will this accomplish what I
> want? I.e., fill up the workers and, once all the workers are used, fail
> requests.

Note that such a low value will have the downside of not tolerating
connection spikes, so you may want to actually use something slightly
bigger.
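As a sketch, for the six-worker pool described earlier (the pool name and
listen address below are placeholders; only listen.backlog matters here):

[app]
listen = 127.0.0.1:9000
; keep the listen queue short: a few queued connections absorb brief
; spikes, after which new connections are refused and nginx returns
; 502 instead of queueing a request for the full 20 minutes
listen.backlog = 8
pm = static
pm.max_children = 6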
--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Mon Jul 7 13:10:38 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 7 Jul 2014 17:10:38 +0400
Subject: Hosting a web application from a subsite
In-Reply-To: <43D424DD-1A33-44E2-8358-0AA3AB7AAA53@tohuw.net>
References: <43D424DD-1A33-44E2-8358-0AA3AB7AAA53@tohuw.net>
Message-ID: <20140707131038.GD1849@mdounin.ru>

Hello!

On Mon, Jul 07, 2014 at 02:39:32AM -0400, Ron Scott-Adams wrote:

> I'm not having much luck trying to configure this site the way I want. I'm modifying http://wiki.nginx.org/Piwik to suit a case in which it is served out of a subsite location, e.g. example.com/stats.
>
> I've created 2 configuration files. One is included outside the server sections of the main site's configuration file: http://paste2.org/e8HcPkda
>
> The second is included from the server 443 section of the site: http://paste2.org/ts8j5VWK
>
> I was attempting to deliver this via proxy_pass, but it appears I've failed: the error log throws:
> [error] 30196#0: *6289 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 185.47.241.122, server: tohuw.net, request: "GET /stats/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:8001", host: "tohuw.net"

You are connecting with fastcgi_pass (which talks the FastCGI protocol)
to an HTTP backend server; this won't work.  To talk to HTTP backends you
have to use proxy_pass (note "proxy", not "fastcgi").

--
Maxim Dounin
http://nginx.org/

From ron at tohuw.net Mon Jul 7 18:04:13 2014
From: ron at tohuw.net (Ron Scott-Adams)
Date: Mon, 7 Jul 2014 14:04:13 -0400
Subject: Hosting a web application from a subsite
In-Reply-To: <20140707131038.GD1849@mdounin.ru>
References: <43D424DD-1A33-44E2-8358-0AA3AB7AAA53@tohuw.net> <20140707131038.GD1849@mdounin.ru>
Message-ID: <0E049BBC-55E9-49E9-88E8-E469FC686BA4@tohuw.net>

Ah! That makes a good deal of sense. I feel silly; of course FastCGI is
the wrong way to go here. I've rewritten it successfully and it all works
now. Thanks Maxim!

http://paste2.org/NaV3U3YU
http://paste2.org/cLGfIEGV

On Jul 7, 2014, at 9:10 AM, Maxim Dounin wrote:

> Hello!
>
> On Mon, Jul 07, 2014 at 02:39:32AM -0400, Ron Scott-Adams wrote:
>
>> I'm not having much luck trying to configure this site the way I want. I'm modifying http://wiki.nginx.org/Piwik to suit a case in which it is served out of a subsite location, e.g. example.com/stats.
>>
>> I've created 2 configuration files. One is included outside the server sections of the main site's configuration file: http://paste2.org/e8HcPkda
>>
>> The second is included from the server 443 section of the site: http://paste2.org/ts8j5VWK
>>
>> I was attempting to deliver this via proxy_pass, but it appears I've failed: the error log throws:
>> [error] 30196#0: *6289 upstream sent unsupported FastCGI protocol version: 72 while reading response header from upstream, client: 185.47.241.122, server: tohuw.net, request: "GET /stats/ HTTP/1.1", upstream: "fastcgi://127.0.0.1:8001", host: "tohuw.net"
>
> You are connecting with fastcgi_pass (which talks the FastCGI protocol)
> to an HTTP backend server; this won't work.  To talk to HTTP backends you
> have to use proxy_pass (note "proxy", not "fastcgi").
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Tue Jul 8 00:55:46 2014
From: nginx-forum at nginx.us (justink101)
Date: Mon, 07 Jul 2014 20:55:46 -0400
Subject: Send 502 when all php-fpm workers are in use
In-Reply-To: <20140707115631.GB1849@mdounin.ru>
References: <20140707115631.GB1849@mdounin.ru>
Message-ID:

Starting php-fpm:

[07-Jul-2014 17:52:33] WARNING: [pool app-execute] listen.backlog(0) was too low for the ondemand process manager. I updated it for you to 128

Well, that is unfortunate; not sure why using ondemand requires a backlog
of 128. Essentially this php-fpm pool runs jobs, then the workers
automatically exit - they spawn, run, and die.

pm = ondemand
pm.max_children = 100
pm.process_idle_timeout = 3s;

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251476,251519#msg-251519

From maxxer at ufficyo.com Tue Jul 8 08:35:11 2014
From: maxxer at ufficyo.com (Lorenzo Milesi)
Date: Tue, 8 Jul 2014 10:35:11 +0200 (CEST)
Subject: Serve (almost) all index.php request by /index.php
In-Reply-To: <1721550714.136869.1404808054522.JavaMail.zimbra@yetopen.it>
Message-ID: <1768537276.136910.1404808511364.JavaMail.zimbra@yetopen.it>

Hi. Is it possible to have all index.php requests processed by /index.php?

I have a Joomla+VirtueMart website I recently moved from apache to nginx
(I can't compare performance in detail, but nginx is ages faster and has
far, far less impact on the server itself! Congratulations!!). With the
old setup, urls like /something/index.php were forwarded to the file
/index.php. This is apache's htaccess:

Options +FollowSymLinks
RewriteEngine On
RewriteCond %{QUERY_STRING} (^|&)file_id=([^&]*)(&|$)
RewriteCond %{REQUEST_URI} !/oldfiles
RewriteRule .* http://www.supersamastore.it/oldfiles/index.php [L,R=301]
RewriteCond %{QUERY_STRING} base64_encode[^(]*\([^)]*\) [OR]
RewriteCond %{QUERY_STRING} (<|%3C)([^s]*s)+cript.*(>|%3E) [NC,OR]
RewriteCond %{QUERY_STRING} GLOBALS(=|\[|\%[0-9A-Z]{0,2}) [OR]
RewriteCond %{QUERY_STRING} _REQUEST(=|\[|\%[0-9A-Z]{0,2})
RewriteRule .* index.php [F]
RewriteRule .* - [E=HTTP_AUTHORIZATION:%{HTTP:Authorization}]
RewriteCond %{REQUEST_URI} !^/index\.php
RewriteCond %{REQUEST_URI} /component/|(/[^.]*|\.(php|html?|feed|pdf|vcf|raw))$ [NC]
RewriteCond %{REQUEST_FILENAME} !-f
RewriteCond %{REQUEST_FILENAME} !-d
RewriteCond %{REQUEST_URI} !=/server-status
RewriteRule .* index.php [L]

Now with nginx it gives 404 not found. Is it possible to do such a config?
Any request for index.php where the file doesn't exist on disk should be
served by /index.php.
Thanks.

This is my current config:

server {
    listen 81 default_server;
    root /var/www/nginx-default;
    index index.php index.html index.htm;
    client_max_body_size 3M;

    # Make site accessible from http://localhost/
    server_name localhost;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # deny running scripts inside writable directories
    location ~* /(images|cache|media|logs|tmp)/.*\.(php|pl|py|jsp|asp|sh|cgi)$ {
        return 403;
        error_page 403 /403_error.html;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(.*)$;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        fastcgi_intercept_errors on;
        fastcgi_ignore_client_abort off;
        fastcgi_connect_timeout 60;
        fastcgi_send_timeout 180;
        fastcgi_read_timeout 180;
        fastcgi_buffer_size 128k;
        fastcgi_buffers 4 256k;
        fastcgi_busy_buffers_size 256k;
        fastcgi_temp_file_write_size 256k;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    location ~ /\.ht {
        deny all;
    }

    # caching of files
    location ~* \.(ico|pdf|flv)$ {
        expires 30d;
    }
    location ~* \.(js|css|png|jpg|jpeg|gif|swf|xml|txt)$ {
        expires 14d;
    }
}

--
Lorenzo Milesi - lorenzo.milesi at yetopen.it
YetOpen S.r.l. - http://www.yetopen.it/
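One untested sketch of the behaviour asked for above: a try_files fallback inside the PHP location itself, since /something/index.php matches this location before "location /" is ever consulted. The fallback is an assumption about what fits this site, not a confirmed fix:

    location ~ \.php$ {
        # fall back to /index.php when the requested .php file
        # does not exist on disk
        try_files $uri /index.php?$args;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        # ... remaining fastcgi_* settings as posted above ...
    }

The fallback triggers an internal redirect to /index.php, which matches this same location again and is then handed to PHP as a file that does exist.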
From nginx-forum at nginx.us Tue Jul 8 10:34:12 2014
From: nginx-forum at nginx.us (TheBritishGeek)
Date: Tue, 08 Jul 2014 06:34:12 -0400
Subject: proxy_cache using a custom Agent
Message-ID:

Can anyone help me find a way to set the Agent header for accessing the upstream server when using a proxy_cache setup.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251534,251534#msg-251534

From daniel at serverb.co.uk Tue Jul 8 11:09:57 2014
From: daniel at serverb.co.uk (Daniel Lintott)
Date: Tue, 08 Jul 2014 12:09:57 +0100
Subject: Weird issue with relative links
Message-ID: <53BBD185.3020508@serverb.co.uk>

Hi,

I am fairly new to nginx but appear to have it working well... along with php-fpm. Working on a PHP script that uses slash arguments, I'm hitting an odd problem. I am able to retrieve the argument correctly and this works fine in the script. Where my issue lies is with the links that are then displayed.

The script is at:
http://alpha.serverb.co.uk/debian/parser.php/gns-3

The file list links on the page are all relative. Testing on my local Apache server, this works perfectly. The links are like this:

http://webdev.internal.serverb.co.uk/debian/parser.php/gns-3/GNS3-0.8.7-src.zip

This is correct... the link includes the PHP script, the slash argument and the file name.

Now on nginx... it is returned differently: the links are missing the first slash argument, so they appear as:

http://alpha.serverb.co.uk/debian/parser.php/GNS3-0.8.7-src.zip

I have checked the values set by fastcgi, and these all appear to match what Apache returns... so I'm stumped!

I know I can get around this by changing the links... but that isn't an option, as the page is later parsed by other scripts and should be backwards compatible with the previous version.

Any help would be most welcome

Regards

Daniel Lintott

From francis at daoine.org Tue Jul 8 11:36:27 2014
From: francis at daoine.org (Francis Daly)
Date: Tue, 8 Jul 2014 12:36:27 +0100
Subject: Weird issue with relative links
In-Reply-To: <53BBD185.3020508@serverb.co.uk>
References: <53BBD185.3020508@serverb.co.uk>
Message-ID: <20140708113627.GN16942@daoine.org>

On Tue, Jul 08, 2014 at 12:09:57PM +0100, Daniel Lintott wrote:

Hi there,

> The script is at:
> http://alpha.serverb.co.uk/debian/parser.php/gns-3

http://alpha.serverb.co.uk/debian/parser.php/gns-3 and http://alpha.serverb.co.uk/debian/parser.php/gns-3/ are different URLs, especially when it comes to resolving relative links.

What is the response you get to a "curl -v" request for the nginx URL and the equivalent Apache URL?

I suspect that your Apache is configured to issue a redirect and your nginx is not.

Copy-paste the first 20 lines of the responses, if the fix is not clear.

f
--
Francis Daly        francis at daoine.org

From mdounin at mdounin.ru Tue Jul 8 11:38:29 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Jul 2014 15:38:29 +0400
Subject: proxy_cache using a custom Agent
In-Reply-To:
References:
Message-ID: <20140708113829.GO1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 06:34:12AM -0400, TheBritishGeek wrote:

> Can anyone help me find a way to set the Agent header for accessing the
> upstream server when using a proxy_cache setup.

http://nginx.org/r/proxy_set_header

--
Maxim Dounin
http://nginx.org/
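Assuming "Agent" means the User-Agent request header, a minimal sketch of the linked directive in a proxy_cache setup might be the following; the cache zone, agent string and upstream host are invented names, and a matching proxy_cache_path definition is assumed elsewhere:

    location / {
        proxy_cache my_cache;
        # replace whatever the client sent with a fixed value upstream
        proxy_set_header User-Agent "MyCustomAgent/1.0";
        proxy_pass http://upstream.example.com;
    }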
From daniel at serverb.co.uk Tue Jul 8 12:03:17 2014
From: daniel at serverb.co.uk (Daniel Lintott)
Date: Tue, 08 Jul 2014 13:03:17 +0100
Subject: Weird issue with relative links
In-Reply-To: <20140708113627.GN16942@daoine.org>
References: <53BBD185.3020508@serverb.co.uk> <20140708113627.GN16942@daoine.org>
Message-ID: <53BBDE05.9040308@serverb.co.uk>

On 08/07/14 12:36, Francis Daly wrote:
> On Tue, Jul 08, 2014 at 12:09:57PM +0100, Daniel Lintott wrote:
>
> Hi there,
>
>> The script is at:
>> http://alpha.serverb.co.uk/debian/parser.php/gns-3
>
> http://alpha.serverb.co.uk/debian/parser.php/gns-3 and
> http://alpha.serverb.co.uk/debian/parser.php/gns-3/ are different URLs,
> especially when it comes to resolving relative links.
>
> What is the response you get to a "curl -v" request for the nginx URL
> and the equivalent Apache URL?
>
> I suspect that your Apache is configured to issue a redirect and your
> nginx is not.
>
> Copy-paste the first 20 lines of the responses, if the fix is not clear.
>
> f

Hmmm... now I've confused myself!

Both are now returning the same... minus the slash argument! Seems like it may have been my error in copying the files to the server...

A classic case of PEBKAC!

Daniel

From mdounin at mdounin.ru Tue Jul 8 13:45:27 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Jul 2014 17:45:27 +0400
Subject: nginx-1.7.3
Message-ID: <20140708134527.GT1849@mdounin.ru>

Changes with nginx 1.7.3                                   08 Jul 2014

*) Feature: weak entity tags are now preserved on response modifications, and strong ones are changed to weak.

*) Feature: cache revalidation now uses the If-None-Match header if possible.

*) Feature: the "ssl_password_file" directive.

*) Bugfix: the If-None-Match request header line was ignored if there was no Last-Modified header in a response returned from cache.

*) Bugfix: "peer closed connection in SSL handshake" messages were logged at "info" level instead of "error" while connecting to backends.

*) Bugfix: in the ngx_http_dav_module module in nginx/Windows.

*) Bugfix: SPDY connections might be closed prematurely if caching was used.

--
Maxim Dounin
http://nginx.org/en/donation.html

From nginx-forum at nginx.us Tue Jul 8 14:45:30 2014
From: nginx-forum at nginx.us (picanha)
Date: Tue, 08 Jul 2014 10:45:30 -0400
Subject: Reverse proxy SSL subdomain
Message-ID: <41b7d7a77a339dedba18634ec0eead55.NginxMailingListEnglish@forum.nginx.org>

Hi,

We have heterogeneous applications and need to centralize requests on Nginx. I'm trying to use a reverse proxy on a subdomain and redirect requests to Java Glassfish. The problem occurs by default on listening subdomains. For example:

server {
    listen 80;
    server_name subdomainA.domain.com.br;
    charset utf-8;
    passenger_enabled on;
    root /var/www/rails_apps/appA/public;

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location ~ ^/(assets)/ {
        root /var/www/rails_apps/appA/public;
        gzip_static on;
        expires 30d;
        add_header Cache-Control public;
    }
}

server {
    listen 80;
    server_name domain.com.br www.domain.com.br;
    charset utf-8;
    passenger_enabled on;
    root /var/www/rails_apps/domain/public;

    #error_page 404 /404.html;

    # redirect server error pages to the static page /50x.html
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }

    location ~ ^/(assets)/ {
        root /var/www/rails_apps/domain/public;
        gzip_static on;
        expires 30d;
        add_header Cache-Control public;
    }
}

This works fine! Accessing http://subdomainA.domain.com.br serves the app from /var/www/rails_apps/appA/public, and http://www.domain.com.br serves the app from /var/www/rails_apps/domain/public.

But if I try to use the config below:

server {
    ### server port and name ###
    listen 80;
    listen 443 ssl;
    ssl on;
    server_name subdomainB.domain.com.br;

    ### SSL log files ###
    access_log logs/ssl-access.log;
    error_log logs/ssl-error.log;

    ### SSL cert files ###
    ssl_certificate /opt/nginx/ssl/subdomainB.domain.com.br.crt;
    ssl_certificate_key /opt/nginx/ssl/subdomainB.domain.com.br.key;

    ### Add SSL specific settings here ###
    ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2;
    ssl_ciphers RC4:HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;
    keepalive_timeout 60;
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;

    ### We want full access to SSL via backend ###
    location / {
        ### force timeouts if one of backend is died ##
        proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;

        ### Set headers ####
        proxy_set_header Accept-Encoding "";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        ### Most PHP, Python, Rails, Java App can use this header ###
        #proxy_set_header X-Forwarded-Proto https;
        #This is better##
        proxy_set_header X-Forwarded-Proto $scheme;
        add_header Front-End-Https on;

        ### By default we don't want to redirect it ####
        proxy_redirect off;

        proxy_pass http://GLASSFISH_IP;
    }
}

When accessing https://subdomainB.domain.com.br I get a connection timeout. But if I try to access https://domain.com.br, it works fine and I am redirected to the Glassfish root app.

Why doesn't https://subdomainB.domain.com.br work?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251551,251551#msg-251551

From mdounin at mdounin.ru Tue Jul 8 14:55:24 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 8 Jul 2014 18:55:24 +0400
Subject: Reverse proxy SSL subdomain
In-Reply-To: <41b7d7a77a339dedba18634ec0eead55.NginxMailingListEnglish@forum.nginx.org>
References: <41b7d7a77a339dedba18634ec0eead55.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140708145524.GX1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 10:45:30AM -0400, picanha wrote:

[...]

> listen 80;
> listen 443 ssl;
> ssl on;

Note that such a configuration is wrong. Connections to port 80 (plain http) will try to use SSL with such a configuration. The "ssl" directive should be removed if you are configuring a single http/https server. See here for details:

http://nginx.org/en/docs/http/configuring_https_servers.html#single_http_https_server

--
Maxim Dounin
http://nginx.org/
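Following the linked document, the corrected form of that server block would look roughly like this (certificate paths and the upstream placeholder as in the original post):

    server {
        listen 80;
        listen 443 ssl;   # the "ssl" parameter applies to this listener only
        server_name subdomainB.domain.com.br;

        # note: no "ssl on;" here, so plain http on port 80 keeps working

        ssl_certificate     /opt/nginx/ssl/subdomainB.domain.com.br.crt;
        ssl_certificate_key /opt/nginx/ssl/subdomainB.domain.com.br.key;

        location / {
            proxy_pass http://GLASSFISH_IP;
        }
    }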
From nginx-forum at nginx.us Tue Jul 8 15:40:21 2014
From: nginx-forum at nginx.us (picanha)
Date: Tue, 08 Jul 2014 11:40:21 -0400
Subject: Reverse proxy SSL subdomain
In-Reply-To: <20140708145524.GX1849@mdounin.ru>
References: <20140708145524.GX1849@mdounin.ru>
Message-ID: <02f1bc05e3e3aa6b77bb239b01b1ab91.NginxMailingListEnglish@forum.nginx.org>

Hello! I need multiple servers, one for each subdomain, and some subdomains over HTTPS... For subdomainB.domain.com.br I want to proxy to Glassfish over SSL.

Thank you

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251551,251554#msg-251554

From nginx-forum at nginx.us Tue Jul 8 19:27:21 2014
From: nginx-forum at nginx.us (matt_l)
Date: Tue, 08 Jul 2014 15:27:21 -0400
Subject: x-security-header
Message-ID:

Hello

I am new to nginx. This is most likely a beginner question. I apologize in advance.

I have a client that is sending HTTP requests with an x-security-header: a6rb35723926d2c685c2d7ud3034179828blablabla

How can I configure nginx so that if the x-security-header is not present then the request is rejected?

Thank you very much for your help.
-matt

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251559,251559#msg-251559

From nginx-forum at nginx.us Tue Jul 8 19:42:55 2014
From: nginx-forum at nginx.us (matt_l)
Date: Tue, 08 Jul 2014 15:42:55 -0400
Subject: KeepAlive and Connection closed
Message-ID:

Hello
I am new to nginx. I will be taking the nginx training next week. In the meantime I was wondering if I was implementing the following properly.
I have an nginx instance that is sitting between my server and a client.
The client requires that I close the connection when I respond to it.
The server requires that I keep the connection alive for performance reasons.
Between the server and nginx, I have set up the keepalive option. Example:

upstream a-name {
    server XXX.XXX.XXX.XXX:12360;
    keepalive 1024;
}

Between the client and nginx, I have set keepalive_requests to 1. Example:

server {
    listen 12360;
    access_log /var/log/nginx/access-a-name-12360.log;
    keepalive_requests 1;
    location /auctions {
        limit_req zone=one burst=2100;
        limit_req_status 503;
        limit_conn_status 503;
        proxy_pass http://a-name;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
    location / {
        return 403;
    }
    error_page 503 = /empty;
    location /empty {
        return 204;
    }
}

Am I doing the right thing? Or can I have nginx add "Connection: close\r\n" to the header when it sends the response back to the client?

Thank you for your help

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251560,251560#msg-251560

From nginx-forum at nginx.us Tue Jul 8 20:05:26 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 16:05:26 -0400
Subject: i cant use limit_conn_zone
Message-ID: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org>

I want to add some code to my nginx.conf; I'm using the portable nginx, PHP, MySQL dev stack (WT-NMP). Anyway, when I add "limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;" in http { .. }, nginx is not working correctly. Where do I go wrong? I need help.

Thanks in advance

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251561#msg-251561

From nginx-forum at nginx.us Tue Jul 8 20:43:38 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 08 Jul 2014 16:43:38 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org>
References: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0eb7a14a0fa9f7ffd0c7d5539e856112.NginxMailingListEnglish@forum.nginx.org>

This version from here http://nginx-win.ecsds.eu/ works with shared memory.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251565#msg-251565

From nginx-forum at nginx.us Tue Jul 8 20:58:05 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 16:58:05 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <0eb7a14a0fa9f7ffd0c7d5539e856112.NginxMailingListEnglish@forum.nginx.org>
References: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org> <0eb7a14a0fa9f7ffd0c7d5539e856112.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Thank you for your care, but I don't understand which documentation I should read. I visited http://nginx-win.ecsds.eu/ but I could not find the documentation about limiting. Can you give me more information? I need help.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251566#msg-251566

From nginx-forum at nginx.us Tue Jul 8 21:11:06 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Tue, 08 Jul 2014 17:11:06 -0400
Subject: i cant use limit_conn_zone
In-Reply-To:
References: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org> <0eb7a14a0fa9f7ffd0c7d5539e856112.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <79ef678927963496805ba638c17e396c.NginxMailingListEnglish@forum.nginx.org>

This has nothing to do with documentation. What you want to use does not work with the original nginx version (shared memory); the version on the website I've referred you to does work. So either forget about using limit_conn_zone or replace your nginx version with this other version.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251567#msg-251567

From nginx-forum at nginx.us Tue Jul 8 21:27:51 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 17:27:51 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <79ef678927963496805ba638c17e396c.NginxMailingListEnglish@forum.nginx.org>
References: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org> <0eb7a14a0fa9f7ffd0c7d5539e856112.NginxMailingListEnglish@forum.nginx.org> <79ef678927963496805ba638c17e396c.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

First of all I should say, my English is not so good... And now my website is under attack. I have to limit. I'm using WT-NMP to run nginx, PHP and MySQL, so I should use some code in nginx.conf.
E.g. I'm using "limit_rate 700k". But I don't understand what I should do. Please give me more information... or an example. Thank you for your patience :)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251568#msg-251568

From mdounin at mdounin.ru Tue Jul 8 21:40:58 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 01:40:58 +0400
Subject: x-security-header
In-Reply-To:
References:
Message-ID: <20140708214058.GA1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 03:27:21PM -0400, matt_l wrote:

> Hello
>
> I am new to nginx. This is most likely a beginner question. I apologize in
> advance.
>
> I have a client that is sending HTTP requests with an x-security-header:
> a6rb35723926d2c685c2d7ud3034179828blablabla
>
> How can I configure nginx so that if the x-security-header is not present
> then the request is rejected?

Conditional processing like this can be implemented using directives of the rewrite module, "if" and "return" in particular:

    if ($http_x_security_header != "a6rb35723926d2c685c2d7ud3034179828blablabla") {
        return 403;
    }

See here for details:

http://nginx.org/r/if
http://nginx.org/r/return

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Jul 8 21:52:22 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 01:52:22 +0400
Subject: KeepAlive and Connection closed
In-Reply-To:
References:
Message-ID: <20140708215222.GB1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 03:42:55PM -0400, matt_l wrote:

> Hello
> I am new to nginx. I will be taking the nginx training next week. In the
> meantime I was wondering if I was implementing the following properly.
> I have an nginx instance that is sitting between my server and a client.
> The client requires that I close the connection when I respond to it.
> The server requires that I keep the connection alive for performance
> reasons.
> Between the server and nginx, I have set up the keepalive option. Example:
> upstream a-name {
>     server XXX.XXX.XXX.XXX:12360;
>     keepalive 1024;
> }
> Between the client and nginx, I have set keepalive_requests to 1. Example:
> server {
>     listen 12360;
>     access_log /var/log/nginx/access-a-name-12360.log;
>     keepalive_requests 1;

The recommended way to disable keepalive connections is to use keepalive_timeout with a zero value:

    keepalive_timeout 0;

See http://nginx.org/r/keepalive_timeout. Though "keepalive_requests 1" should work too.

[...]

> Am I doing the right thing?
> Or can I have nginx add "Connection: close\r\n" to the header when it sends
> the response back to the client?

The "Connection: close" header will be automatically added to the response if keepalive is disabled.

--
Maxim Dounin
http://nginx.org/
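Applied to the configuration quoted above, that advice amounts to swapping a single directive; everything else stays as posted:

    server {
        listen 12360;
        access_log /var/log/nginx/access-a-name-12360.log;
        keepalive_timeout 0;   # close the client connection after each response,
                               # instead of keepalive_requests 1
        # ... locations as in the original config ...
    }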
From mdounin at mdounin.ru Tue Jul 8 22:01:40 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 02:01:40 +0400
Subject: i cant use limit_conn_zone
In-Reply-To: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org>
References: <0da54f51ac5ae07ef710e6f53f64e00e.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140708220139.GC1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 04:05:26PM -0400, farukest wrote:

> I want to add some code to my nginx.conf; I'm using the portable nginx, PHP, MySQL
> dev stack (WT-NMP). Anyway, when I add "limit_req_zone $binary_remote_addr
> zone=one:10m rate=1r/s;" in http { .. }, nginx is not working correctly.
> Where do I go wrong? I need help.
>
> Thanks in advance

The "limit_conn_zone" creates a shared memory zone, and this doesn't work on Windows Vista and later due to ASLR; see here:

http://nginx.org/en/docs/windows.html#known_issues

One of the possible workarounds is to start nginx with "master_process off", which is mostly a development mode, though it also allows to address this limitation.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Tue Jul 8 22:48:47 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 18:48:47 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <20140708220139.GC1849@mdounin.ru>
References: <20140708220139.GC1849@mdounin.ru>
Message-ID: <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org>

Thank you for your information. I understood what you are explaining. But I heard of the "master_process off" mode for the first time. How can I use it with nginx? (By the way, I'm using the portable nginx, PHP, MySQL Windows app.) I really need help. Thanks in advance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251572#msg-251572

From nginx-forum at nginx.us Tue Jul 8 22:51:46 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 18:51:46 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org>
References: <20140708220139.GC1849@mdounin.ru> <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <308f1a9f7aa44c2cfb1234c3069f0a85.NginxMailingListEnglish@forum.nginx.org>

By the way, I read this document. It says you should never use nginx in "master_process off" mode.

http://nginx.org/en/docs/faq/daemon_master_process_off.html

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251573#msg-251573

From mdounin at mdounin.ru Wed Jul 9 00:52:02 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 04:52:02 +0400
Subject: i cant use limit_conn_zone
In-Reply-To: <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org>
References: <20140708220139.GC1849@mdounin.ru> <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140709005202.GD1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 06:48:47PM -0400, farukest wrote:

> Thank you for your information. I understood what you are explaining.
> But I heard of the "master_process off" mode for the first time.
> How can I use it with nginx? (By the way, I'm using the portable
> nginx, PHP, MySQL Windows app.) I really need help. Thanks in advance.

http://nginx.org/r/master_process

Just add the directive to nginx.conf at the global level. Switching off the master process means that a separate master process won't be used, and the only process started will do all the work. This implies various limitations (e.g., no auto-respawn of dead processes, configuration reload won't work, and so on), but it also simplifies debugging. (And it also happens to work around the problem with shared memory on Windows.)

--
Maxim Dounin
http://nginx.org/
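Put together, the workaround described above would look roughly like this at the top of nginx.conf; the zone line is the one from the original question, and this is a sketch of the Windows workaround, not a recommended production setup, per the caveats in this thread:

    master_process off;   # global level: works around shared memory zones vs. ASLR on Windows

    http {
        limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;
        # ... rest of the existing http block ...
    }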
From mdounin at mdounin.ru Wed Jul 9 00:54:32 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 04:54:32 +0400
Subject: i cant use limit_conn_zone
In-Reply-To: <308f1a9f7aa44c2cfb1234c3069f0a85.NginxMailingListEnglish@forum.nginx.org>
References: <20140708220139.GC1849@mdounin.ru> <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org> <308f1a9f7aa44c2cfb1234c3069f0a85.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140709005432.GE1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 06:51:46PM -0400, farukest wrote:

> By the way, I read this document. It says you should never use nginx in
> "master_process off" mode.
>
> http://nginx.org/en/docs/faq/daemon_master_process_off.html

You should never run production on Windows anyway. If you are going to use it for production, not for development, it's a much better idea to change the OS in the first place.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed Jul 9 01:12:19 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 21:12:19 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <20140709005432.GE1849@mdounin.ru>
References: <20140709005432.GE1849@mdounin.ru>
Message-ID:

Thank you for your care. Yes, I know, but I have to use Windows. I added "master_process on;" at the global level. There is no problem from here. Now how does it allow me to limit connections per IP?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251577#msg-251577

From mdounin at mdounin.ru Wed Jul 9 01:22:21 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 05:22:21 +0400
Subject: i cant use limit_conn_zone
In-Reply-To:
References: <20140709005432.GE1849@mdounin.ru>
Message-ID: <20140709012221.GF1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 09:12:19PM -0400, farukest wrote:

> Thank you for your care. Yes, I know, but I have to use Windows. I added
> "master_process on;" at the global level. There is no problem from here.
> Now how does it allow me to limit connections per IP?

http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed Jul 9 01:56:52 2014
From: nginx-forum at nginx.us (farukest)
Date: Tue, 08 Jul 2014 21:56:52 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <308f1a9f7aa44c2cfb1234c3069f0a85.NginxMailingListEnglish@forum.nginx.org>
References: <20140708220139.GC1849@mdounin.ru> <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org> <308f1a9f7aa44c2cfb1234c3069f0a85.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

Thank you, I did what you said and it is working well. Does this process slow down the website? If it does, what should I do as an alternative for speed? Any idea?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251590#msg-251590

From mdounin at mdounin.ru Wed Jul 9 02:04:04 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 06:04:04 +0400
Subject: i cant use limit_conn_zone
In-Reply-To:
References: <20140708220139.GC1849@mdounin.ru> <5e88057c731603470d42be52be69dfa5.NginxMailingListEnglish@forum.nginx.org> <308f1a9f7aa44c2cfb1234c3069f0a85.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140709020404.GK1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 09:56:52PM -0400, farukest wrote:

> Thank you, I did what you said and it is working well. Does this process slow
> down the website? If it does, what should I do as an alternative for speed?
> Any idea?

You may want to actually read the link provided. It explains how to configure limit_conn, what it does, and what the expected result will be. Quoting the most relevant part of the limit_conn directive docs (http://nginx.org/r/limit_conn):

: Sets the shared memory zone and the maximum allowed number of
: connections for a given key value. When this limit is exceeded,
: the server will return the 503 (Service Temporarily Unavailable)
: error in reply to a request.
: For example, the directives
:
:     limit_conn_zone $binary_remote_addr zone=addr:10m;
:
:     server {
:         location /download/ {
:             limit_conn addr 1;
:         }
:     }
:
: allow only one connection per an IP address at a time.

--
Maxim Dounin
http://nginx.org/

From kworthington at gmail.com Wed Jul 9 02:41:20 2014
From: kworthington at gmail.com (Kevin Worthington)
Date: Tue, 8 Jul 2014 22:41:20 -0400
Subject: nginx-1.7.3
In-Reply-To: <20140708134527.GT1849@mdounin.ru>
References: <20140708134527.GT1849@mdounin.ru>
Message-ID:

Hello Nginx users,

Now available: Nginx 1.7.3 for Windows http://goo.gl/2J5HAA (32-bit and 64-bit versions)

These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org.

Announcements are also available via:
Twitter http://twitter.com/kworthington
Google+ https://plus.google.com/+KevinWorthington/

Thank you,
Kevin
--
Kevin Worthington
kworthington *@* (gmail] [dot} {com)
http://kevinworthington.com/
http://twitter.com/kworthington
https://plus.google.com/+KevinWorthington/

On Tue, Jul 8, 2014 at 9:45 AM, Maxim Dounin wrote:

> Changes with nginx 1.7.3                                   08 Jul 2014
>
> *) Feature: weak entity tags are now preserved on response
>    modifications, and strong ones are changed to weak.
>
> *) Feature: cache revalidation now uses If-None-Match header if
>    possible.
>
> *) Feature: the "ssl_password_file" directive.
>
> *) Bugfix: the If-None-Match request header line was ignored if there
>    was no Last-Modified header in a response returned from cache.
>
> *) Bugfix: "peer closed connection in SSL handshake" messages were
>    logged at "info" level instead of "error" while connecting to
>    backends.
>
> *) Bugfix: in the ngx_http_dav_module module in nginx/Windows.
>
> *) Bugfix: SPDY connections might be closed prematurely if caching was
>    used.
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Wed Jul 9 02:51:18 2014
From: nginx-forum at nginx.us (willy7841)
Date: Tue, 08 Jul 2014 22:51:18 -0400
Subject: Cross-compiling Nginx for SAMA5D3 Xplained board
Message-ID:

Hello,

I would like to run Nginx on my SAMA5D3 Xplained board.
I have a newbie question:

I tried cross-compiling Nginx on my PC running Ubuntu with the SAMA5D3 Xplained cross-compiler, modifying the environment variables of Ubuntu for CC, AR, LD, RANLIB and STRIP:

=====================
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# export CC=/home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-gcc
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# export AR=/home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-ar
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# export LD=/home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-ld
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# export RANLIB=/home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-ranlib
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# export STRIP=/home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-strip
=====================

But it fails from the start:

=====================
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# ./configure --prefix=/home/u12/buildroot-at91/output/target/var/www/nginx-arm
checking for OS
+ Linux 3.5.0-51-generic x86_64
checking for C compiler ... found but is not working
./configure: error: C compiler /home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-gcc is not found
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30# ll /home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-gcc
lrwxrwxrwx 1 u12 u12 21 Jun 3 13:35 /home/u12/buildroot-at91/output/host/usr/bin/arm-linux-gnueabihf-gcc -> ext-toolchain-wrapper*
root at ubuntu:/home/u12/Desktop/JJ/nginx-0.8.30#
=====================

How should I set things up so that I can try cross-compiling Nginx?

Thanks for any help.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251595,251595#msg-251595

From mdounin at mdounin.ru Wed Jul 9 03:03:27 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 07:03:27 +0400
Subject: Cross-compiling Nginx for SAMA5D3 Xplained board
In-Reply-To:
References:
Message-ID: <20140709030327.GM1849@mdounin.ru>

Hello!

On Tue, Jul 08, 2014 at 10:51:18PM -0400, willy7841 wrote:

> Hello,
>
> I would like to run Nginx on my SAMA5D3 Xplained board.
>
> I have a newbie question:
>
> I tried cross-compiling Nginx on my PC running Ubuntu with the SAMA5D3 Xplained
> cross-compiler, modifying the environment variables of Ubuntu for
> CC, AR, LD, RANLIB and STRIP:

[...]

> How should I set things up so that I can try cross-compiling Nginx?

Cross-compilation is not something nginx supports.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Wed Jul 9 03:43:18 2014
From: nginx-forum at nginx.us (willy7841)
Date: Tue, 08 Jul 2014 23:43:18 -0400
Subject: Cross-compiling Nginx for SAMA5D3 Xplained board
In-Reply-To: <20140709030327.GM1849@mdounin.ru>
References: <20140709030327.GM1849@mdounin.ru>
Message-ID: <59a5b190320d3fca4cc6713be73581a6.NginxMailingListEnglish@forum.nginx.org>

Hello Maxim Dounin,

Thank you for your explanation.

If I would like to run Nginx on an embedded board, how could I do it?

Thanks for your help.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251595,251598#msg-251598

From vl at nginx.com Wed Jul 9 04:47:41 2014
From: vl at nginx.com (Homutov Vladimir)
Date: Wed, 9 Jul 2014 08:47:41 +0400
Subject: Cross-compiling Nginx for SAMA5D3 Xplained board
In-Reply-To: <59a5b190320d3fca4cc6713be73581a6.NginxMailingListEnglish@forum.nginx.org>
References: <20140709030327.GM1849@mdounin.ru> <59a5b190320d3fca4cc6713be73581a6.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140709044740.GA378@vl>

On Tue, Jul 08, 2014 at 11:43:18PM -0400, willy7841 wrote:
> Hello Maxim Dounin,
>
> Thank you for your explanation.
>
> If I would like to run Nginx on an embedded board, how could I do it?
>
> Thanks for your help.
>

In short, you have to rework the configure tests that currently depend on the host system to build and execute test binaries.

You are not the first person, though, who needs to cross-compile, and you may want to search the mailing list/Google for similar attempts, for example:

http://mailman.nginx.org/pipermail/nginx-devel/2011-December/001666.html

From nginx-forum at nginx.us Wed Jul 9 05:35:00 2014
From: nginx-forum at nginx.us (willy7841)
Date: Wed, 09 Jul 2014 01:35:00 -0400
Subject: Cross-compiling Nginx for SAMA5D3 Xplained board
In-Reply-To: <20140709044740.GA378@vl>
References: <20140709044740.GA378@vl>
Message-ID: <9f263745a02075070dba5569cdc594ab.NginxMailingListEnglish@forum.nginx.org>

Hello Homutov Vladimir,

Thank you for your explanation.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251595,251601#msg-251601

From nginx-forum at nginx.us Wed Jul 9 10:19:44 2014
From: nginx-forum at nginx.us (farukest)
Date: Wed, 09 Jul 2014 06:19:44 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <20140709020404.GK1849@mdounin.ru>
References: <20140709020404.GK1849@mdounin.ru>
Message-ID: <8871a75eca0f9bfcb39f76dfbffc7e3f.NginxMailingListEnglish@forum.nginx.org>

Thank you for your care. It works like a charm. But sometimes the page is not loading correctly. What is the logic of "zone=addr:10m;"? I also want to block fast (repeated) entering of the website, and a last question: what location should I define for "wp-content/uploads/somevideo.mp4" per IP? Thanks in advance; your help is really important for me.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251605#msg-251605
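That last question is not answered in this thread; as a hedged sketch only, combining the directives already mentioned here (limit_conn from the docs quote above, limit_rate from the earlier message) for a WordPress uploads path might look like:

    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        # [...]
        location ~* ^/wp-content/uploads/.*\.mp4$ {
            limit_conn addr 1;    # one concurrent download per client IP
            limit_rate 700k;      # the bandwidth cap the poster already uses
        }
    }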
From nginx-forum at nginx.us Wed Jul 9 11:27:43 2014
From: nginx-forum at nginx.us (farukest)
Date: Wed, 09 Jul 2014 07:27:43 -0400
Subject: i cant use limit_conn_zone
In-Reply-To: <20140709020404.GK1849@mdounin.ru>
References: <20140709020404.GK1849@mdounin.ru>
Message-ID:

I realized now that master process off is not enough for playing HD movies. It gives an error after a while. But I want to thank you, because blocking for a while, using master process and limit_conn, destroyed the attack. Now I have turned my nginx.conf back to master_process on. Everything is alright. Thank you again.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251561,251606#msg-251606

From mdounin at mdounin.ru Wed Jul 9 12:11:34 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 9 Jul 2014 16:11:34 +0400
Subject: Cross-compiling Nginx for SAMA5D3 Xplained board
In-Reply-To: <20140709044740.GA378@vl>
References: <20140709030327.GM1849@mdounin.ru> <59a5b190320d3fca4cc6713be73581a6.NginxMailingListEnglish@forum.nginx.org> <20140709044740.GA378@vl>
Message-ID: <20140709121134.GO1849@mdounin.ru>

Hello!

On Wed, Jul 09, 2014 at 08:47:41AM +0400, Homutov Vladimir wrote:

> On Tue, Jul 08, 2014 at 11:43:18PM -0400, willy7841 wrote:
> > Hello Maxim Dounin,
> >
> > Thank you for your explanation.
> >
> > If I would like to run Nginx on an embedded board, how could I do it?
> >
> > Thanks for your help.
> >
>
> In short, you have to rework the configure tests that currently depend
> on the host system to build and execute test binaries.
>
> You are not the first person, though, who needs to cross-compile, and
> you may want to search the mailing list/Google for similar attempts, for example:
>
> http://mailman.nginx.org/pipermail/nginx-devel/2011-December/001666.html

A much easier approach is to just compile on the board itself and/or in an emulated environment.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Thu Jul 10 05:43:17 2014
From: nginx-forum at nginx.us (ashishadhav)
Date: Thu, 10 Jul 2014 01:43:17 -0400
Subject: nginx and unix socket set retry time.
Message-ID: <4c7c4e82ab68bbc83f77b2734f308fe5.NginxMailingListEnglish@forum.nginx.org>

Hi,

Is there a way to retry the connection to a unix socket (on which my FastCGI app is running) for some time, instead of immediately responding with 502 Bad Gateway and exiting?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251623,251623#msg-251623

From wubingzheng at 163.com Thu Jul 10 08:18:33 2014
From: wubingzheng at 163.com (WuBingzheng)
Date: Thu, 10 Jul 2014 01:18:33 -0700 (PDT)
Subject: SSL session cache lifetime vs session ticket lifetime
In-Reply-To:
References:
Message-ID: <1404980313518-7590693.post@n2.nabble.com>

Hello,

From http://tools.ietf.org/html/rfc5077#section-3.4, I think Session Tickets and Session IDs do not work for one connection at the same time. If the client supports Tickets, then the Session ID (or the session cache) will not work. Am I right?

In my test, the 2 callbacks ngx_ssl_new_session() and ngx_ssl_get_cached_session() are not called if a ticket is used.

So if we assume that most browsers support Tickets now, the session cache does not work most of the time; why does ngx_slab_alloc() fail in your post?

If I am right, should I just disable the session cache and set the ticket lifetime big enough? Maybe SSL_CTX_set_timeout() should be moved to the beginning of ngx_ssl_session_cache() then.

Thanks
Wu

--
View this message in context: http://nginx.2469901.n2.nabble.com/SSL-session-cache-lifetime-vs-session-ticket-lifetime-tp7588963p7590693.html
Sent from the nginx mailing list archive at Nabble.com.

From nginx-forum at nginx.us Thu Jul 10 16:18:24 2014
From: nginx-forum at nginx.us (T.crowder)
Date: Thu, 10 Jul 2014 12:18:24 -0400
Subject: SSI and proxy_pass
Message-ID: <1a135ee26839fdc7bb7375c130137661.NginxMailingListEnglish@forum.nginx.org>

Hi all,

I'm trying out nginx. I would like to use it to perform the following:

1. Retrieve a page from server1 which includes some SSI commands
2. Process the SSI commands, eventually including content from server2
3. Return the resultant page

I've got SSI working when using a local file, but not when using the page from server1 via proxy_pass.

Here's my config I'm using to try to achieve the above.
events {
    worker_connections 1024;
}
http {
    server {
        listen 80;
        server_name localhost;

        location /hello-world.html {
            ssi on;
            proxy_pass http://tom.office.bla.co.uk:8080/hello-world/;
        }
    }
}

For testing purposes, I'm using a simple SSI command, as shown in the output my browser actually ends up with, which is identical to the content on server1:

Do I need to use something other than proxy_pass, or is it just not possible? Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251630,251630#msg-251630

From vbart at nginx.com Thu Jul 10 18:33:52 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 10 Jul 2014 22:33:52 +0400
Subject: SSI and proxy_pass
In-Reply-To: <1a135ee26839fdc7bb7375c130137661.NginxMailingListEnglish@forum.nginx.org>
References: <1a135ee26839fdc7bb7375c130137661.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <19249653.E0140CBlEo@vbart-laptop>

On Thursday 10 July 2014 12:18:24 T.crowder wrote:
> Hi all,
>
> I'm trying out nginx. I would like to use it to perform the following:
>
> 1. Retrieve a page from server1 which includes some SSI commands
> 2. Process the SSI commands, eventually including content from server2
> 3. Return the resultant page
>
> I've got SSI working when using a local file, but not when using the page
> from server1 via proxy_pass.
>
> Here's my config I'm using to try to achieve the above.
>
> events {
>     worker_connections 1024;
> }
> http {
>     server {
>         listen 80;
>         server_name localhost;
>
>         location /hello-world.html {
>             ssi on;
>             proxy_pass http://tom.office.bla.co.uk:8080/hello-world/;
>         }
>     }
> }
> For testing purposes, I'm using a simple SSI command, as shown in the output
> my browser actually ends up with, which is identical to the content on
> server1:
>
>
> Do I need to use something other than proxy_pass, or is it just not
> possible? Thanks!
[..]

You need to look carefully at what your "server1" actually returns to nginx. It can be a response with an inappropriate content type (see the ssi_types directive). Or, for example, it uses compression.

http://nginx.org/r/ssi_types

wbr, Valentin V. Bartenev
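If either of those turns out to be the cause, a sketch addressing both in the posted config might look like this; the backend URL is from the question, while the content-type handling is an assumption about what server1 sends:

    location /hello-world.html {
        ssi on;
        # text/html is processed by default; list server1's actual
        # content type here if it sends something else
        ssi_types text/html;
        # ask the backend for an uncompressed response, so the SSI
        # filter can see the markup
        proxy_set_header Accept-Encoding "";
        proxy_pass http://tom.office.bla.co.uk:8080/hello-world/;
    }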
From piotr at cloudflare.com Thu Jul 10 22:09:02 2014
From: piotr at cloudflare.com (Piotr Sikora)
Date: Thu, 10 Jul 2014 15:09:02 -0700
Subject: SSL session cache lifetime vs session ticket lifetime
In-Reply-To: <1404980313518-7590693.post@n2.nabble.com>
References: <1404980313518-7590693.post@n2.nabble.com>
Message-ID:

Hey,

> Maybe SSL_CTX_set_timeout() should be moved to the beginning of
> ngx_ssl_session_cache() then.

http://hg.nginx.org/nginx/rev/767aa37f12de

Best regards,
Piotr Sikora

From free4me at gmx.ch Fri Jul 11 08:52:09 2014
From: free4me at gmx.ch (free4me at gmx.ch)
Date: Fri, 11 Jul 2014 10:52:09 +0200
Subject: Centos7/Rhel7 Nginx Repo
Message-ID:

An HTML attachment was scrubbed...

From rvrv7575 at yahoo.com Fri Jul 11 11:50:37 2014
From: rvrv7575 at yahoo.com (Rv Rv)
Date: Fri, 11 Jul 2014 19:50:37 +0800
Subject: Reason for storing duplicate copies of response header value e.g. content_encoding in headers_out
Message-ID: <1405079437.18694.YahooMailNeo@web193506.mail.sg3.yahoo.com>

Nginx stores the response headers in the headers list of ngx_http_headers_out_t:

    ngx_list_t headers;

and also, for certain headers, in a corresponding field in headers_out, e.g.:

    ngx_table_elt_t *content_encoding;

The body filters, e.g. gunzip, operate only on the content_encoding variable.

So after the gunzip filter executes, the content_encoding value in the headers variable will be gzip (because the response was received compressed and the content encoding was gzip), but the value of the content_encoding variable will be blank, because the gunzip filter will have decompressed the content and therefore the content encoding is no longer gzip.

Are all the body filters then expected to look at the variable within the headers_out structure? If so, what is the use case of maintaining (possibly) different values for the same headers in different places?

Thanks for any answers

From mdounin at mdounin.ru Fri Jul 11 12:20:25 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 11 Jul 2014 16:20:25 +0400
Subject: Reason for storing duplicate copies of response header value e.g. content_encoding in headers_out
In-Reply-To: <1405079437.18694.YahooMailNeo@web193506.mail.sg3.yahoo.com>
References: <1405079437.18694.YahooMailNeo@web193506.mail.sg3.yahoo.com>
Message-ID: <20140711122025.GI1849@mdounin.ru>

Hello!

On Fri, Jul 11, 2014 at 07:50:37PM +0800, Rv Rv wrote:

> Nginx stores the response headers in the headers list of ngx_http_headers_out_t:
>
>     ngx_list_t headers;
>
> and also, for certain headers, in a corresponding field in headers_out, e.g.:
>
>     ngx_table_elt_t *content_encoding;
>
> The body filters, e.g. gunzip, operate only on the content_encoding
> variable.
> So after the gunzip filter executes, the content_encoding value
> in the headers variable will be gzip (because the response was
> received compressed and the content encoding was gzip), but
> the value of the content_encoding variable will be blank, because the
> gunzip filter will have decompressed the content and therefore
> the content encoding is no longer gzip.
>
> Are all the body filters then expected to look at the variable
> within the headers_out structure? If so, what is the use case
> of maintaining (possibly) different values for the same headers
> in different places?

In the gunzip filter, you may notice the following:

    r->headers_out.content_encoding->hash = 0;
    r->headers_out.content_encoding = NULL;

This does two things:

1. Clears the "hash" value of the header in the headers list. For response headers this means that the header is to be ignored. See ngx_http_header_filter() to find out how it's handled.

2. Clears the r->headers_out.content_encoding pointer, to make other filters know that there is no Content-Encoding header.

This is basically identical to what the various ngx_http_clear_xxx() macros do, see src/http/ngx_http_core_module.h.

--
Maxim Dounin
http://nginx.org/

From sb at nginx.com Fri Jul 11 13:45:32 2014
From: sb at nginx.com (Sergey Budnevitch)
Date: Fri, 11 Jul 2014 17:45:32 +0400
Subject: Centos7/Rhel7 Nginx Repo
In-Reply-To:
References:
Message-ID:

On 11 Jul 2014, at 12:52, free4me at gmx.ch wrote:

> Hi List
>
> I want to use Nginx under the newly released CentOS 7. I've tried to compile it on CentOS 7 but it failed. I didn't look into it further so far, because I would like to use the version provided by the nginx.org yum repo anyway. Since there is no repo available for CentOS 7/RHEL 7, what's the estimate for the availability of a CentOS 7 repo? Weeks, or more like months?

We built a package for the mainline release 1.7.3.
To install it, please create a file named /etc/yum.repos.d/nginx.repo with the following contents:

[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1

and run:

yum install nginx

From reallfqq-nginx at yahoo.fr Fri Jul 11 16:11:35 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Fri, 11 Jul 2014 18:11:35 +0200
Subject: Centos7/Rhel7 Nginx Repo
In-Reply-To:
References:
Message-ID:

Why should gpgcheck be deactivated? AFAIK, there is a GPG key available for APT... Shouldn't the same one be used for yum?
---
*B. R.*

On Fri, Jul 11, 2014 at 3:45 PM, Sergey Budnevitch wrote:

>
> On 11 Jul 2014, at 12:52, free4me at gmx.ch wrote:
>
> > Hi List
> >
> > I want to use Nginx under the newly released CentOS 7. I've tried to
> compile it on CentOS 7 but it failed. I didn't look into it further so
> far, because I would like to use the version provided by the
> nginx.org yum repo anyway. Since there is no repo available for CentOS 7/RHEL 7,
> what's the estimate for the availability of a CentOS 7 repo? Weeks, or
> more like months?
>
> We built a package for the mainline release 1.7.3. To install it, please create
> a file named /etc/yum.repos.d/nginx.repo with the following contents:
>
> [nginx]
> name=nginx repo
> baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
> gpgcheck=0
> enabled=1
>
> and run:
>
> yum install nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From nginx-forum at nginx.us Fri Jul 11 19:36:19 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 11 Jul 2014 15:36:19 -0400
Subject: Strange try_files behaviour
Message-ID:

Simple php config (nginx 1.7.4 development);

server {
[...]
    location ~ \.php$ {
        try_files $uri $uri/ =404;
        index index.html index.htm index.php;
        fastcgi_ignore_client_abort on;
        fastcgi_pass myLoadBalancer;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
    location / {
        try_files $uri $uri/ =404;
        index index.html index.htm;
    }
}

None existing file:
"GET /viewforum.pp?f=3 HTTP/1.1" 404 180 "-"
a proper 404 from nginx.
However, a different approach, also with a non-existing file:
"GET /viewforu.php?f=3 HTTP/1.1" 404 56 "-"
returns a 404 from the backend and a 'no input file specified'.

Neither file exists, yet the second test is going past try_files.

Anyone any idea if this is a config issue or a bug?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251650#msg-251650

From mdounin at mdounin.ru Fri Jul 11 20:11:39 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sat, 12 Jul 2014 00:11:39 +0400
Subject: Strange try_files behaviour
In-Reply-To:
References:
Message-ID: <20140711201139.GP1849@mdounin.ru>

Hello!

On Fri, Jul 11, 2014 at 03:36:19PM -0400, itpp2012 wrote:

> Simple php config (nginx 1.7.4 development);
>
> server {
> [...]
>     location ~ \.php$ {
>         try_files $uri $uri/ =404;
>         index index.html index.htm index.php;
>         fastcgi_ignore_client_abort on;
>         fastcgi_pass myLoadBalancer;
>         fastcgi_index index.php;
>         fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
>         include fastcgi_params;
>     }
>     location / {
>         try_files $uri $uri/ =404;
>         index index.html index.htm;
>     }
> }
>
> None existing file:
> "GET /viewforum.pp?f=3 HTTP/1.1" 404 180 "-"
> a proper 404 from nginx.
>
> However, a different approach, also with a non-existing file:
> "GET /viewforu.php?f=3 HTTP/1.1" 404 56 "-"
> returns a 404 from the backend and a 'no input file specified'.
>
> Neither file exists, yet the second test is going past try_files.
>
> Anyone any idea if this is a config issue or a bug?

I would suggest it's a config issue. Note though that the config snippet provided isn't enough to conclude anything. If in doubt, try the debug log. See http://nginx.org/en/docs/debugging_log.html for details.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Fri Jul 11 21:42:41 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 11 Jul 2014 17:42:41 -0400
Subject: Strange try_files behaviour
In-Reply-To: <20140711201139.GP1849@mdounin.ru>
References: <20140711201139.GP1849@mdounin.ru>
Message-ID:

Ok, debug session here http://pastebin.com/DQ6WBYXU

I see one try_files phase; maybe a script is processed differently than a static file.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251652#msg-251652

From vbart at nginx.com Fri Jul 11 22:06:49 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Sat, 12 Jul 2014 02:06:49 +0400
Subject: Strange try_files behaviour
In-Reply-To:
References: <20140711201139.GP1849@mdounin.ru>
Message-ID: <2240345.Gq6mm2nolO@vbart-laptop>

On Friday 11 July 2014 17:42:41 itpp2012 wrote:
> Ok, debug session here http://pastebin.com/DQ6WBYXU
[..]

It looks like one case of http://wiki.nginx.org/IfIsEvil

wbr, Valentin V. Bartenev

From nginx-forum at nginx.us Fri Jul 11 22:21:18 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 11 Jul 2014 18:21:18 -0400
Subject: Strange try_files behaviour
In-Reply-To: <2240345.Gq6mm2nolO@vbart-laptop>
References: <2240345.Gq6mm2nolO@vbart-laptop>
Message-ID:

Valentin V. Bartenev Wrote:
-------------------------------------------------------
> On Friday 11 July 2014 17:42:41 itpp2012 wrote:
> > Ok, debug session here http://pastebin.com/DQ6WBYXU
> [..]
>
> It looks like one case of http://wiki.nginx.org/IfIsEvil

Maybe, but should an if bypass try_files? In a non-PHP environment this works as it should, with lots of if's and other stuff, where try_files does what it's supposed to do — so it should for a PHP location as well.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251655#msg-251655

From nginx-forum at nginx.us Sat Jul 12 10:23:41 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sat, 12 Jul 2014 06:23:41 -0400
Subject: Strange try_files behaviour
In-Reply-To:
References: <2240345.Gq6mm2nolO@vbart-laptop>
Message-ID: <0d24b03b975f830e5bbbd36733340b7e.NginxMailingListEnglish@forum.nginx.org>

Hmm, more debugging. This config returns a 404 from the backend (which it shouldn't):

try_files $uri $uri/ =404;
set $maintmode S;
if ($remote_addr ~ "^(192.168.*.*)$") { set $maintmode L; }
if (-f $document_root/maintenance_mode.html) { set $maintmode "${maintmode}M"; }
if ($maintmode = SM) { return 503; }

This config returns a 404 from nginx, like it should:

try_files $uri $uri/ =404;
set $maintmode S;
# if ($remote_addr ~ "^(192.168.*.*)$") { set $maintmode L; }
if (-f $document_root/maintenance_mode.html) { set $maintmode "${maintmode}M"; }
if ($maintmode = SM) { return 503; }

So yes, it is an IF issue, but in my opinion this should not happen.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251668#msg-251668
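One way to restructure that maintenance check so that only a single, return-only "if" remains in the location (the pattern the IfIsEvil page lists as safe) is to fold the client classification into the file name being tested: geo runs at the http level, and the exempt class then probes a file that is simply never created. A sketch only, file names hypothetical, and whether it also restores the expected try_files behaviour would need the same debug-log verification as above:

geo $maint_class {
    default         0;
    192.168.0.0/16  1;   # exempt internal clients
}

server {
[...]
    location ~ \.php$ {
        try_files $uri $uri/ =404;
        # non-exempt clients test maintenance_0.html; exempt clients
        # test maintenance_1.html, which never exists
        if (-f $document_root/maintenance_$maint_class.html) {
            return 503;
        }
        # fastcgi settings as posted above
    }
}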
From jags.gediya at gmail.com Sat Jul 12 13:00:30 2014
From: jags.gediya at gmail.com (jags gediya)
Date: Sat, 12 Jul 2014 18:30:30 +0530
Subject: nginx for ARM
Message-ID:

I want to use the nginx web server for an ARM-based development board. Its end application: the board will work as an IOTG for home automation. For this purpose, I want to cross-compile nginx for ARM and port it to the Linux running on my board. Is it possible to cross-compile nginx for ARM?

From vbart at nginx.com Sat Jul 12 13:06:07 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Sat, 12 Jul 2014 17:06:07 +0400
Subject: nginx for ARM
In-Reply-To:
References:
Message-ID: <4370566.exVaeLpLHt@vbart-laptop>

On Saturday 12 July 2014 18:30:30 jags gediya wrote:
> I want to use the nginx web server for an ARM-based development board. Its
> end application: the board will work as an IOTG for home automation. For
> this purpose, I want to cross-compile nginx for ARM and port it to the
> Linux running on my board. Is it possible to cross-compile nginx for
> ARM?
>

http://mailman.nginx.org/pipermail/nginx/2014-July/044385.html

You should compile on the board itself or in an emulated environment.

And please, do not cross-post, thanks.

wbr, Valentin V. Bartenev

From djczaski at gmail.com Sat Jul 12 14:08:43 2014
From: djczaski at gmail.com (djczaski at gmail.com)
Date: Sat, 12 Jul 2014 10:08:43 -0400
Subject: nginx for ARM
In-Reply-To: <4370566.exVaeLpLHt@vbart-laptop>
References: <4370566.exVaeLpLHt@vbart-laptop>
Message-ID: <6E79CB16-89AB-40BD-A89F-4B4C4CD80B16@gmail.com>

> On Jul 12, 2014, at 9:06 AM, "Valentin V. Bartenev" wrote:
>
>> On Saturday 12 July 2014 18:30:30 jags gediya wrote:
>> I want to use the nginx web server for an ARM-based development board. Its
>> end application: the board will work as an IOTG for home automation. For
>> this purpose, I want to cross-compile nginx for ARM and port it to the
>> Linux running on my board. Is it possible to cross-compile nginx for
>> ARM?
>
> http://mailman.nginx.org/pipermail/nginx/2014-July/044385.html
> You should compile on the board itself or in an emulated environment.
>
> And please, do not cross-post, thanks.
>

No support for cross compiling is such a shame.

From agentzh at gmail.com Sun Jul 13 03:28:51 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Sat, 12 Jul 2014 20:28:51 -0700
Subject: [ANN] OpenResty 1.7.2.1 released
Message-ID:

Hi folks!

I am happy to announce the new formal release, 1.7.2.1, of the OpenResty bundle:

http://openresty.org/#Download

Special thanks go to all our contributors for making this happen!

Below is the complete change log for this release, as compared to the last formal release, 1.7.0.1:

* upgraded the Nginx core to 1.7.2.

* see the changes here:

* upgraded LuaJIT to v2.1-20140707: https://github.com/openresty/luajit2/tags

* imported Mike Pall's latest bug fixes and features:

* feature: compile debug.getmetatable(). Thanks to Karel Tuma.

* bugfix: Fix ABC elimination (for negative table indexes, for example).

* bugfix: FFI: Fix compilation of reference field access.

* bugfix: FFI: fixed frame traversal for backtraces with FFI callbacks.

* bugfix: x86: lj_math_random_step() clobbers XMM regs on OSX Clang.

* bugfix: fixed debug info for main chunk of stripped bytecode.

* upgraded the lua-resty-core library to 0.0.8.
* feature: resty.core.regex: use "resty.lrucache" for the compiled regex cache for ngx.re.find and ngx.re.match in order to prevent pathological performance when the number of regexes has exceeded lua_regex_cache_max_entries.

* optimize: resty.core.regex: removed one obsolete assertion that was for a LuaJIT bug (already fixed).

* upgraded the lua-resty-dns library to 0.12.

* feature: added support for the SRV resource record type (see RFC 2782). thanks Torbjörn Norinder for the patch.

* upgraded the lua-resty-upstream-healthcheck library to 0.02.

* bugfix: for bad status lines, we could throw out the "bad argument #2 to 'sub'" error, reported by George Bashi.

* doc: avoided using the "\r\n" sequence in Lua long brackets because Lua would squeeze it to "\n", unfortunately. thanks George Bashi for the report.

* doc: made it clear that multiple "upstream {}" blocks' checkers can share a single shm zone. thanks Robert Paprocki for asking.

* doc: now we need to turn off lua_socket_log_errors explicitly in code examples.

* upgraded the lua-resty-lrucache library to 0.02.

* feature: added an alternative implementation using an FFI-based hash table in the form of the new class "resty.lrucache.pureffi", which is much faster than the default "resty.lrucache" class when there are a lot of key variations. thanks Shuxin Yang for the patch.

* upgraded the ngx_lua module to 0.9.10.

* feature: stream-typed cosockets are now full-duplex: a reader "light thread" and a writer "light thread" can operate on the same cosocket simultaneously. thanks shun zhang and aviramc for the original patches.

* feature: added the new API function ngx.thread.kill() for killing a user "light thread". thanks aviramc for the original patch.

* bugfix: the "coroutine" module table introduced by "require('coroutine')" was not working in our Lua context. thanks Paul K and Pierre-Yves Gérardy for the report.

* bugfix: fixed the initial size of the ngx.worker table and the misleading comment due to a copy&paste mistake. thanks Suraj Jaiswal for the report.

* bugfix: the "coctx cleanup" handler might not be called before being overridden by other operations. this could happen when failing to yield in an error handler (for xpcall).

* bugfix: fixed an incorrect error message. thanks doujiang for the patch.

* bugfix: fixed a compilation error regression when using the Microsoft Visual C/C++ compiler. thanks itpp16 for the patch.

* bugfix: we should use "c->buffered & NGX_HTTP_LOWLEVEL_BUFFERED" instead of "c->buffered" for testing if the downstream connection is busy writing.

* bugfix: we did not handle an out-of-memory case in ngx.req.set_body_data().

* bugfix: ngx_http_lua_chain_get_free_buf(): avoided returning zero-sized memory bufs.

* bugfix: body_filter_by_lua*: we might incorrectly pass zero-size bufs (in the form of "special sync bufs") at the beginning of a chain, which could get stuck in the buffer of "ngx_http_writer_filter_module" (or, in other words, be "busy") while it could still get recycled in the content handler (like content_by_lua), leading to buffer corruptions. thanks westhood for the report and patch.

* bugfix: we did not clear all the fields in the "ngx_buf_t" C struct when recycling chain link buffers.

* bugfix: the *_by_lua_file directives failed to load .lua files of exactly the size "n*LUAL_BUFFERSIZE" bytes with the error "'end' expected (to close 'function' at line 1) near ''". thanks kworr for the report.
    * change: now we always iterate through all the user light threads to ensure all threads are de-anchored, even when the "uthreads" counter gets out of sync. Also added an assertion on the "uthreads" counter.
    * change: now we turn off our C-land assertions by default unless the user explicitly specifies the C compiler option "-DNGX_LUA_USE_ASSERT".
    * change: throw the "no memory" Lua error consistently (instead of "out of memory") when failing to allocate on the nginx side.
    * change: we now still call "ngx_pfree()" in our own "pcre_free" hook.
    * doc: documented the "NGX_LUA_USE_ASSERT" and "NGX_LUA_ABORT_AT_PANIC" C macros.
    * doc: added performance notes to the sections for the ngx.var and ngx.ctx API.
    * doc: documented the types of Lua values that can be passed to the ngx.timer callback functions.
* upgraded the ngx_form_input module to 0.09.
    * bugfix: fixed warnings from the Microsoft Visual C/C++ compiler. Thanks itpp16 for the report.
* upgraded the ngx_echo module to 0.54.
    * bugfix: the "unknown option for echo_subrequest_async" error was thrown when nginx variables were used in both the "method" argument and the URI argument of the echo_subrequest directive (etc.). Thanks Utkarsh Upadhyay for the report.
    * bugfix: fixed a misleading error message.
* upgraded the ngx_srcache module to 0.28.
    * feature: log an error message when the srcache_store subrequest has an error or returns a bad HTTP status code. Thanks Yann Coleu for the report.
    * doc: typo fix from javasboy.
* upgraded the ngx_memc module to 0.15.
    * bugfix: we did not log error messages for invalid values of $memc_flags, $memc_exptime, and $memc_value, leading to hard-to-debug HTTP 400 status errors. Thanks Yann Coleu for the report.
* bugfix: "./configure --without-lua_resty_dns" did not work as declared. Thanks Vitaly for the report.
* bugfix: use "cc" as the default C compiler for LuaJIT and the Lua C libraries, because modern FreeBSD 10 has no gcc by default and its clang is already featureful enough to compile everything. Thanks Stefan Parvu for the suggestion.
* change: "./configure --with-debug" now also passes the extra C compiler options "-DNGX_LUA_USE_ASSERT -DNGX_LUA_ABORT_AT_PANIC" to the ngx_lua module build.

The HTML version of the change log, with lots of helpful hyperlinks, can be browsed here: http://openresty.org/#ChangeLog1007002

OpenResty (aka ngx_openresty) is a full-fledged web application server built by bundling the standard nginx core, lots of third-party nginx modules and Lua libraries, as well as most of their external dependencies. See OpenResty's homepage for details: http://openresty.org/

We have run extensive testing on our Amazon EC2 test cluster and ensured that all the components (including the nginx core) play well together. The latest test report can always be found here: http://qa.openresty.org

Have fun!
-agentzh
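The lua-resty-upstream-healthcheck doc note above ("turn off lua_socket_log_errors explicitly") is easy to miss; a minimal sketch of the setting the new examples assume (any build with ngx_lua; nothing else is assumed):

    http {
        # keep cosocket connect/read failures out of error.log;
        # the Lua code is expected to check the "err" return values itself
        lua_socket_log_errors off;
    }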
From martin.grotzke at googlemail.com Sun Jul 13 12:49:18 2014 From: martin.grotzke at googlemail.com (Martin Grotzke) Date: Sun, 13 Jul 2014 14:49:18 +0200 Subject: Is it possible to send html HEAD early (chunked)? Message-ID:

Hi all,

Inspired by the bigpipe pattern, I'm wondering if it's possible to send the full html head early so that the browser can start downloading CSS and javascript files.

An idea would be that the proxied backend uses chunked encoding and sends the html head as the first chunk. The body would be sent as a separate chunk as soon as all its data is collected.

Not sure if this is relevant: in our particular case we're using SSI in the body to assemble the whole page, and some of the includes might take some time to load. The html head contains an include as well, but this one should always be loaded from the cache or served really fast by the backend.

What do you think about this? Has anybody tried this already?

Cheers,
Martin

From nginx-forum at nginx.us Sun Jul 13 13:22:39 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 13 Jul 2014 09:22:39 -0400 Subject: Nginx + LibreSSL - a first test Message-ID:

https://www.mare-system.de/blog/page/1405201517/

# Summary

It works.

While it is not recommended to substitute OpenSSL with LibreSSL at this early stage, I wanted to test whether it is possible. And it is. There are no functional or performance issues, as far as I can test, and building nginx + LibreSSL is easy, once you figure out how to do it. The advantages of using LibreSSL in the long run, from my point of view:

- cleaner code
- fewer bugs
- more people involved

P.S.: please forgive the typos and bad English; I wanted to get this out before the final today, QA has to wait :D

regards,
mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251718,251718#msg-251718

From nginx-forum at nginx.us Sun Jul 13 13:40:35 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 13 Jul 2014 09:40:35 -0400 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID:

Sounds more like a custom solution that might be achieved using Lua + nginx; from what I understand, you have a "static" part that should get sent early/from cache and a "dynamic" part that must wait for the backend?

The only solution I could think of for such an asynchronous delivery is using nginx + Lua, or maybe Varnish (iirc you could mark parts of a page cacheable, but I don't know whether you can deliver asynchronously, though).

regards,
mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251717,251719#msg-251719

From nginx-forum at nginx.us Sun Jul 13 14:16:57 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 13 Jul 2014 10:16:57 -0400 Subject: [ANN] Windows nginx 1.7.4.1 RedKnight Message-ID: <3e4cf61f4ef70733d366ba0ee6faab84.NginxMailingListEnglish@forum.nginx.org>

15:54 13-7-2014 nginx 1.7.4.1 RedKnight
Based on nginx 1.7.4 (11-7-2014, last changeset 5767:abd460ece11e) with:
+ lua-nginx-module v0.9.11 (upgraded 12-7-2014)
+ echo-nginx-module v0.54 (upgraded 3-7-2014)
+ form-input-nginx-module v0.09 (upgraded 3-7-2014)
+ Source changes back-ported
+ Source changes add-ons back-ported
+ Changes for nginx_basic: source changes back-ported
* Scheduled release: yes
* Additional specifications: see 'Feature list'
* This is the last of the RedKnight series; watch out for the new release name

Builds can be found here: http://nginx-win.ecsds.eu/

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251720,251720#msg-251720
From martin.grotzke at googlemail.com Sun Jul 13 15:35:47 2014 From: martin.grotzke at googlemail.com (Martin Grotzke) Date: Sun, 13 Jul 2014 17:35:47 +0200 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID:

On 13.07.2014 at 15:40, "mex" wrote:
> Sounds more like a custom solution that might be achieved using Lua + nginx;

Ok, I haven't done anything with nginx + Lua so far; I need to check out what can be done with Lua. Can you give some direction on how Lua can be helpful here?

> From what I understand, you have a "static" part that should get sent
> early/from cache and a "dynamic" part that must wait for the backend?

Exactly.

Cheers,
Martin

From nginx-forum at nginx.us Sun Jul 13 16:37:43 2014 From: nginx-forum at nginx.us (mex) Date: Sun, 13 Jul 2014 12:37:43 -0400 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID:

> Ok, I haven't done anything with nginx + Lua so far; I need to check
> out what can be done with Lua. Can you give some direction on how Lua
> can be helpful here?

Oh... Lua can be used to manipulate every single phase of a request coming to and processed by nginx; a swiss-army-knife super-extended version :)

Some stuff to skim through to get an impression:
- https://github.com/openresty/lua-nginx-module#typical-uses
- http://wiki.nginx.org/HttpLuaModule

In your case I'd say the cleanest way would be a reengineering of your application; the other way would imply a full regex pass over every response coming back from your app servers to filter out the stuff that has already been sent. The problem: appservers like Tomcat/JBoss/Rails and so on usually send full html pages. If you find a way to just send the <body> itself, the rest, like sending html headers early from cache, seems easy:

    location /blah {
        content_by_lua '
            ngx.say(html_header)
            local res = ngx.location.capture("/get_stuff_from_backend")
            if res.status == 200 then
                ngx.say(res.body)
            end
            ngx.say(html_footer)
        ';
    }

Do you refer to something similar to this? https://github.com/bigpipe/bigpipe

regards,
mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251717,251722#msg-251722
From martin.grotzke at googlemail.com Sun Jul 13 18:53:06 2014 From: martin.grotzke at googlemail.com (Martin Grotzke) Date: Sun, 13 Jul 2014 20:53:06 +0200 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID:

On 13.07.2014 at 18:37, "mex" wrote:
> In your case I'd say the cleanest way would be a reengineering of your
> application; the other way would imply a full regex pass over every
> response coming back from your app servers to filter out the stuff that
> has already been sent. The problem: appservers like Tomcat/JBoss/Rails
> and so on usually send full html pages.

We're using the Play framework; we can easily send partial content using chunked encoding.

> If you find a way to just send the <body> itself, the rest, like
> sending html headers early from cache, seems easy:
> [...]

The html head, page header and page footer are dynamic as well and depend on the current request (but are easy to calculate - sorry if my previous answer was misleading here). I think the cleanest solution would be if the backend could receive one request and just split the response into chunks, sending what's immediately available (the html head and perhaps the page header as well) as the first chunk and the rest afterwards.

> Do you refer to something similar to this?
> https://github.com/bigpipe/bigpipe

Not exactly this framework, but the bigpipe concept. The idea I really like is that the browser can start to download JS + CSS, and the user can already see the page header with navigation while the backend is still working - hence a much better perceived performance.

Cheers,
Martin
From vbart at nginx.com Sun Jul 13 20:01:07 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Mon, 14 Jul 2014 00:01:07 +0400 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID: <2245052.eT4UhJb8z0@vbart-laptop>

On Sunday 13 July 2014 14:49:18 Martin Grotzke wrote:
> Inspired by the bigpipe pattern, I'm wondering if it's possible to send
> the full html head early so that the browser can start downloading CSS
> and javascript files.
> [...]
> What do you think about this? Has anybody tried this already?

Have you tried the nginx SSI module? http://nginx.org/en/docs/http/ngx_http_ssi_module.html

wbr, Valentin V. Bartenev

From lists at ruby-forum.com Sun Jul 13 20:13:54 2014 From: lists at ruby-forum.com (Peter Vandenberghe) Date: Sun, 13 Jul 2014 22:13:54 +0200 Subject: nginx perl cgi In-Reply-To: <200803131323.20909.den.lists@gmail.com> References: <47D91AE6.7070000@llorien.org> <200803131323.20909.den.lists@gmail.com> Message-ID:

Denis S. Filimonov wrote in post #645832:
> I had the same problem some time ago. The problem with this script is
> that it violates the CGI specification regarding POST requests. POSTed
> data must come to a CGI app from standard input, while this script
> parses the input and sets variables as if they came via a GET request,
> passing nothing to the app's stdin. Thus, applications that follow the
> spec strictly and expect POST data on stdin fail.
>
> I've modified the script to fix the problem, feel free to use it.

Hi Denis,

Do you have an alternative solution for Windows? Syscall and setsid do not work in Windows environments.

Regards,
Peter

--
Posted via http://www.ruby-forum.com/.

From mdounin at mdounin.ru Sun Jul 13 21:13:50 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jul 2014 01:13:50 +0400 Subject: Strange try_files behaviour In-Reply-To: <0d24b03b975f830e5bbbd36733340b7e.NginxMailingListEnglish@forum.nginx.org> References: <2240345.Gq6mm2nolO@vbart-laptop> <0d24b03b975f830e5bbbd36733340b7e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140713211349.GR1849@mdounin.ru>

Hello!

On Sat, Jul 12, 2014 at 06:23:41AM -0400, itpp2012 wrote:

> Hmm, more debugging. This config returns a 404 from the backend (which it shouldn't):
>
>     try_files $uri $uri/ =404;
>     set $maintmode S;
>     if ($remote_addr ~ "^(192.168.*.*)$") { set $maintmode L; }
>     if (-f $document_root/maintenance_mode.html) { set $maintmode "${maintmode}M"; }
>     if ($maintmode = SM) { return 503; }
>
> This config returns a 404 from nginx, as it should:
>
>     try_files $uri $uri/ =404;
>     set $maintmode S;
>     # if ($remote_addr ~ "^(192.168.*.*)$") { set $maintmode L; }
>     if (-f $document_root/maintenance_mode.html) { set $maintmode "${maintmode}M"; }
>     if ($maintmode = SM) { return 503; }
>
> So yes, it is an "if" issue, but in my opinion this should not happen.

Please re-read the link provided by Valentin: http://wiki.nginx.org/IfIsEvil

One of the examples there is exactly your case - if any of the if's (without "return" inside) in your configuration is matched, the "try_files" won't work.

--
Maxim Dounin
http://nginx.org/
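For readers hitting the same wall: the one pattern the "If is Evil" page documents as safe in location context is an "if" whose body contains nothing but "return" (or "rewrite ... last"). A minimal sketch of the maintenance check in that safe form (it covers only the file test, not the address exemption discussed above):

    location / {
        try_files $uri $uri/ =404;

        # safe per the IfIsEvil page: the block consists solely of "return"
        if (-f $document_root/maintenance_mode.html) {
            return 503;
        }
    }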
From mdounin at mdounin.ru Sun Jul 13 21:30:12 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jul 2014 01:30:12 +0400 Subject: Nginx + LibreSSL - a first test In-Reply-To: References: Message-ID: <20140713213012.GS1849@mdounin.ru>

Hello!

On Sun, Jul 13, 2014 at 09:22:39AM -0400, mex wrote:

> https://www.mare-system.de/blog/page/1405201517/

Just a quick comment: OpenSSL's libs ending up under ".openssl/" isn't a result of OpenSSL's behaviour, but rather a result of the "make install" nginx calls (and the ".openssl" install prefix it instructs OpenSSL to use).

> # Summary
>
> It works.
>
> While it is not recommended to substitute OpenSSL with LibreSSL at this
> early stage, I wanted to test whether it is possible. And it is. There
> are no functional or performance issues, as far as I can test, and
> building nginx + LibreSSL is easy, once you figure out how to do it.
> The advantages of using LibreSSL in the long run, from my point of view:
>
> - cleaner code
> - fewer bugs
> - more people involved

Cool. I personally think that LibreSSL has at least one major advantage: the coding style looks much better/more readable. :)

> P.S.: please forgive the typos and bad English; I wanted to get this out
> before the final today, QA has to wait :D

Good luck! :)

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Sun Jul 13 21:46:50 2014 From: nginx-forum at nginx.us (itpp2012) Date: Sun, 13 Jul 2014 17:46:50 -0400 Subject: Strange try_files behaviour In-Reply-To: <20140713211349.GR1849@mdounin.ru> References: <20140713211349.GR1849@mdounin.ru> Message-ID: <40f4583c90fcb45da4b18bc526d8a9d8.NginxMailingListEnglish@forum.nginx.org>

Ok, clear enough. I'd still consider it some kind of bug (it makes no sense for try_files to be disabled when an "if" matches); for example, using map and a single "if" does this as well, and that is more or less nginx's recommended way of doing an "if" with map.

Funny thing is, when you have a .php location inside a / location, try_files does work with multiple if's and an "if" match.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251736#msg-251736

From mdounin at mdounin.ru Sun Jul 13 22:20:40 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jul 2014 02:20:40 +0400 Subject: Strange try_files behaviour In-Reply-To: <40f4583c90fcb45da4b18bc526d8a9d8.NginxMailingListEnglish@forum.nginx.org> References: <20140713211349.GR1849@mdounin.ru> <40f4583c90fcb45da4b18bc526d8a9d8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140713222040.GT1849@mdounin.ru>

Hello!

On Sun, Jul 13, 2014 at 05:46:50PM -0400, itpp2012 wrote:

> Ok, clear enough. I'd still consider it some kind of bug (it makes no
> sense for try_files to be disabled when an "if" matches); for example,
> using map and a single "if" does this as well, and that is more or less
> nginx's recommended way of doing an "if" with map.

Sure. The page in question lists various bugs related to "if" (except maybe the first one, which is explicitly marked as "not really a bug, just how it works"). And we even have a trac ticket for this: http://trac.nginx.org/nginx/ticket/86

> Funny thing is, when you have a .php location inside a / location,
> try_files does work with multiple if's and an "if" match.

I suspect you've tested it wrong.

--
Maxim Dounin
http://nginx.org/

From martin.grotzke at googlemail.com Sun Jul 13 22:27:19 2014 From: martin.grotzke at googlemail.com (Martin Grotzke) Date: Mon, 14 Jul 2014 00:27:19 +0200 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: <2245052.eT4UhJb8z0@vbart-laptop> References: <2245052.eT4UhJb8z0@vbart-laptop> Message-ID:

On 13.07.2014 at 22:01, "Valentin V. Bartenev" wrote:
> Have you tried the nginx SSI module?
> http://nginx.org/en/docs/http/ngx_http_ssi_module.html

We're already using the SSI module to assemble the page from various backends, but how could SSI help to send the head or page header early to the client?

Cheers,
Martin
From badalex at gmail.com Mon Jul 14 01:58:23 2014 From: badalex at gmail.com (Alex Hunsaker) Date: Sun, 13 Jul 2014 19:58:23 -0600 Subject: Nginx + boringSSL Message-ID:

I've started playing around with BoringSSL with nginx. Mostly everything works except OCSP. It seems that either OpenSSL 1.0.2, which BoringSSL was forked from, does not have it, or the BoringSSL folks ripped it out. I have not investigated.

Anyway, I'm pleased to report everything seems to work!

--

    # first, boringssl
    git clone https://boringssl.googlesource.com/boringssl
    cd boringssl
    # for building on OpenBSD; also enables -O2 (boringssl is a debug build by default)
    cat boringssl_openbsd.patch | patch -p1 -N -s
    mkdir build && cd build && cmake ../ && cd ..
    # set up stuff for nginx
    mkdir -p .openssl/lib
    ln -s include .openssl/
    cp build/crypto/libcrypto.a build/ssl/libssl.a .openssl/lib
    # now for nginx
    tar xvzf nginx-1.6.0.tar.gz
    cd nginx-1.6.0
    cat ../boringssl_nginx.patch | patch -p1 -N -s
    ./configure --with-openssl=../boringssl ...
    # update the timestamp so nginx won't try to build openssl
    touch ../boringssl/.openssl/include/ssl.h
    make

-------------- next part --------------
A non-text attachment was scrubbed... Name: boringssl_nginx.patch Type: application/octet-stream Size: 3157 bytes
-------------- next part --------------
A non-text attachment was scrubbed... Name: boringssl_openbsd.patch Type: application/octet-stream Size: 1133 bytes

From nginx-forum at nginx.us Mon Jul 14 10:47:55 2014 From: nginx-forum at nginx.us (George) Date: Mon, 14 Jul 2014 06:47:55 -0400 Subject: Nginx + boringSSL In-Reply-To: References: Message-ID: <2f6a6bea2797db5dc1700d1e5be8ac8d.NginxMailingListEnglish@forum.nginx.org>

Thanks for sharing :) So SPDY/3.1 SSL works?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251740,251748#msg-251748

From nginx-forum at nginx.us Mon Jul 14 12:23:59 2014 From: nginx-forum at nginx.us (mex) Date: Mon, 14 Jul 2014 08:23:59 -0400 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID:

> I think the cleanest solution would be if the backend could receive one
> request and just split the response into chunks, sending what's
> immediately available (the html head and perhaps the page header as
> well) as the first chunk and the rest afterwards.

Sounds tricky... I must admit I am not **that** deep into nginx internals to say whether nginx does this already (send chunks as they arrive) or whether it is possible via an additional nginx module; maybe some of the nginx guys can answer this?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251717,251751#msg-251751
:) mission accomplished, hehe :) regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251718,251752#msg-251752 From mdounin at mdounin.ru Mon Jul 14 12:54:10 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jul 2014 16:54:10 +0400 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: Message-ID: <20140714125410.GV1849@mdounin.ru> Hello! On Sun, Jul 13, 2014 at 02:49:18PM +0200, Martin Grotzke wrote: > Hi all, > > inspired by the bigpipe pattern I'm wondering if it's possible to send the > full html head so that the browser can start downloading CSS and javascript > files. > > An idea would be that the proxied backend uses a chunked encoding and sends > the html head as first chunk. The body would be sent as a separate chunk as > soon as all data is collected. > > Not sure if this is relevant: In our particular case we're using ssi in the > body to assemble the whole page, and some of the includes might take some > time to be loaded. The html head contains an include as well, but this > should always be loaded from the cache or should be served really fast by > the backend. > > What do you think about this? Has anybody tried this already? By default, nginx just sends what's already available. And for SSI, it uses chunked encoding. That is, if a html head is immediately available in your case, it will be just sent to a client. There is a caveat though: the above might not happen due to buffering in various places. Notably, this includes postpone_output and gzip filter. To ensure buffering will not happen you should either disable appropriate filters, or use flushes. Latter is automatically done on each buffer sent when using "proxy_buffering off" ("fastcgi_buffering off" and so on). Flush can be also done explicitly via $r->flush() when when using the embedded perl module. -- Maxim Dounin http://nginx.org/ From mdounin at mdounin.ru Mon Jul 14 13:20:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jul 2014 17:20:38 +0400 Subject: Nginx + LibreSSL - a first test In-Reply-To: References: <20140713213012.GS1849@mdounin.ru> Message-ID: <20140714132038.GW1849@mdounin.ru> Hello! On Mon, Jul 14, 2014 at 08:30:00AM -0400, mex wrote: > > > > Just a quick comment: OpenSSL's libs under ".openssl/" isn't a > > result of OpenSSL's behaviour, but rather a result of "make > > install" nginx calls (and the ".openssl" install prefix it > > instructs OpenSSL to use). > > > > maybe we can have a --with-libressl=/path/to/libressl > or something more generic soon? i think > libressl/boringssl are here to stay May be, but it's not something required - it's just an interface to simplify builds. And in any case we should give them some time to stabilize. [...] > > Good luck! :) > > mission accomplished, hehe :) Congratulations! 
From mdounin at mdounin.ru Mon Jul 14 13:20:38 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 14 Jul 2014 17:20:38 +0400 Subject: Nginx + LibreSSL - a first test In-Reply-To: References: <20140713213012.GS1849@mdounin.ru> Message-ID: <20140714132038.GW1849@mdounin.ru>

Hello!

On Mon, Jul 14, 2014 at 08:30:00AM -0400, mex wrote:

> Maybe we can have a --with-libressl=/path/to/libressl or something more
> generic soon? I think LibreSSL/BoringSSL are here to stay.

Maybe, but it's not something required - it's just an interface to simplify builds. And in any case, we should give them some time to stabilize.

[...]

> > Good luck! :)
>
> Mission accomplished, hehe :)

Congratulations! :)

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Mon Jul 14 16:44:17 2014 From: nginx-forum at nginx.us (itpp2012) Date: Mon, 14 Jul 2014 12:44:17 -0400 Subject: Strange try_files behaviour In-Reply-To: <20140713222040.GT1849@mdounin.ru> References: <20140713222040.GT1849@mdounin.ru> Message-ID: <2049eb944e7ec73b801ff330122aaa58.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
> And we even have a trac ticket for this:
> http://trac.nginx.org/nginx/ticket/86

A tested workaround with Lua and a single "if" with a return, then:

    location ~ \.php$ {
        try_files $uri $uri/ =404;
        set $mmode 0;
        set_by_lua $notused '
            local s = 0;
            local source_fname = ngx.var.document_root .. "/maintenance_mode.html";
            local file = io.open(source_fname);
            if file then ngx.var.mmode = 1; file:close(); end;
            s = string.find(ngx.var.remote_addr, "^10.10.20.");
            if s then ngx.var.mmode = 0; end;
        ';
        if ($mmode) { return 503; }
        index index.html index.htm index.php;
        fastcgi_ignore_client_abort on;
        fastcgi_pass myLoadBalancer;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251756#msg-251756

From martin.grotzke at googlemail.com Mon Jul 14 18:35:40 2014 From: martin.grotzke at googlemail.com (Martin Grotzke) Date: Mon, 14 Jul 2014 20:35:40 +0200 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: <20140714125410.GV1849@mdounin.ru> References: <20140714125410.GV1849@mdounin.ru> Message-ID:

On 14.07.2014 at 14:54, "Maxim Dounin" wrote:
> By default, nginx just sends what's already available. And for
> SSI, it uses chunked encoding.

I don't understand this. In my understanding, SSI (the virtual include directive) goes downstream (e.g. gets some data from a backend), so the backend defines how to respond to nginx. What does it mean that nginx uses chunked encoding?

> That is, if an html head is immediately available in your case, it
> will just be sent to the client.

Does it matter whether the html head is pulled into the page via SSI or not?

> There is a caveat though: the above might not happen due to buffering
> in various places. Notably, this includes postpone_output and the gzip
> filter. To ensure buffering will not happen, you should either disable
> the appropriate filters or use flushes. The latter is done
> automatically on each buffer sent when using "proxy_buffering off"
> ("fastcgi_buffering off", and so on).

Ok. Might this have a negative impact on my backend when there are slow clients? So that when a client consumes the response very slowly, my backend is kept "busy" (delivering the response as slowly as the client consumes it) and cannot just hand off the data/response to nginx?

Thanks && cheers,
Martin
From badalex at gmail.com Mon Jul 14 20:26:35 2014 From: badalex at gmail.com (Alex Hunsaker) Date: Mon, 14 Jul 2014 14:26:35 -0600 Subject: Nginx + boringSSL In-Reply-To: <2f6a6bea2797db5dc1700d1e5be8ac8d.NginxMailingListEnglish@forum.nginx.org> References: <2f6a6bea2797db5dc1700d1e5be8ac8d.NginxMailingListEnglish@forum.nginx.org> Message-ID:

On Mon, Jul 14, 2014 at 4:47 AM, George wrote:
> Thanks for sharing :)
> So SPDY/3.1 SSL works?

Yep, and so does CHACHA20_POLY1305 :D

From nginx-forum at nginx.us Mon Jul 14 23:58:45 2014 From: nginx-forum at nginx.us (mex) Date: Mon, 14 Jul 2014 19:58:45 -0400 Subject: Nginx + LibreSSL - a first test (update) In-Reply-To: References: Message-ID: <106fba0552c2b8839e23df824762593d.NginxMailingListEnglish@forum.nginx.org>

Updated: static version and new perf tests included. https://www.mare-system.de/blog/page/1405201517/

regards,
mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251718,251760#msg-251760

From piotr at cloudflare.com Tue Jul 15 09:17:02 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 15 Jul 2014 02:17:02 -0700 Subject: Nginx + LibreSSL - a first test In-Reply-To: References: Message-ID:

Hey,

> # Summary
>
> It works.

...only with versions older than nginx-1.7.0; you need a small patch (attached) in order to compile nginx-mainline against LibreSSL, because the LibreSSL developers decided that LibreSSL is OpenSSL-2.0.0... I didn't send this patch to nginx-devel@ yet, because I'm still trying to convince them that LibreSSL should present itself as OpenSSL-1.0.1, in which case no changes to nginx would be necessary.

Best regards,
Piotr Sikora

-------------- next part --------------
A non-text attachment was scrubbed... Name: nginx__libressl.patch Type: application/octet-stream Size: 1551 bytes

From nginx-forum at nginx.us Tue Jul 15 09:34:23 2014 From: nginx-forum at nginx.us (itpp2012) Date: Tue, 15 Jul 2014 05:34:23 -0400 Subject: Strange try_files behaviour In-Reply-To: <2049eb944e7ec73b801ff330122aaa58.NginxMailingListEnglish@forum.nginx.org> References: <20140713222040.GT1849@mdounin.ru> <2049eb944e7ec73b801ff330122aaa58.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9aa1478c57bab35a630649a13ce9b43b.NginxMailingListEnglish@forum.nginx.org>

With pure Lua and no "if" (crossposted to the openresty group):

    location / {
        try_files $uri $uri/ =404;
        index index.html index.htm;
    }

    location ~ \.php$ {
        try_files $uri $uri/ =404;
        rewrite_by_lua '
            local s = 0;
            local v = 0;
            local source_fname = ngx.var.document_root .. "/maintenance_mode.html";
            local file = io.open(source_fname);
            if file then v = 1; file:close(); end;
            if string.find(ngx.var.remote_addr, "^10.10.30.") then v = 0; end;
            if v > 0 then return ngx.exit(503); end;
        ';
        index index.html index.htm index.php;
        fastcgi_ignore_client_abort on;
        fastcgi_pass myLoadBalancer;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

With nginx for Windows you can use 513 to keep your 503 handler intact.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251650,251777#msg-251777
From pkorathota at atlassian.com Tue Jul 15 10:04:44 2014 From: pkorathota at atlassian.com (Pramod Korathota) Date: Tue, 15 Jul 2014 20:04:44 +1000 Subject: proxied requests hang when DNS response has wrong ident Message-ID:

We have recently discovered a very rare occurrence where requests through nginx will hang if the resolver sends a response with a mismatching ident. We are seeing this in production with 1.7.1, and I have been able to reproduce it with 1.7.3. The relevant parts of the config are:

    resolver 10.65.255.4;

    location / {
        proxy_pass http://$host.internal$request_uri;
    }

So we basically proxy .atlassian.net to .atlassian.net.internal. The resolver is a pdns recursor running on the same machine. The error we see in the logs is:

    2014/06/19 20:22:29 [error] 28235#0: wrong ident 57716 response for customer.atlassian.net.internal, expect 39916
    2014/06/19 20:22:29 [error] 28235#0: unexpected response for customer.atlassian.net.internal
    2014/06/19 20:22:59 [error] 28235#0: *23776286 customer.atlassian.net.internal could not be resolved (110: Operation timed out), client: 83.244.247.165, server: *.atlassian.net, request: "GET /plugins/ HTTP/1.1", host: "customer.atlassian.net", referrer: "https://customer.atlassian.net/secure/Dashboard.jspa"

I have been able to reproduce this error in a test environment. This is what I used:

- a basic python script pretending to be a recursive resolver, which can mangle the ident of a response. The resolver directive of nginx is pointed at this recursor. I added a delay of 100ms before sending a reply (based on http://code.activestate.com/recipes/491264-mini-fake-dns-server/).
- a proxy configuration same as above - only the resolver and location/proxy_pass lines were added to a default nginx config
- a static webserver as the backend
- GNU parallel + curl to issue concurrent requests

When the ident is correct, the system behaves as expected. However, if an ident is incorrect AND nginx gets multiple concurrent (5) requests for the same backend, we see all the requests hang. Doing a tcpdump of DNS traffic shows the first request go out and the response coming back with the wrong ident, but no subsequent DNS requests. The critical factor seems to be multiple incoming requests to nginx while a DNS request is in flight.

If needed I can provide all the scripts and config I used to reproduce the error. Thanks!

Pramod Korathota

From ru at nginx.com Tue Jul 15 11:41:45 2014 From: ru at nginx.com (Ruslan Ermilov) Date: Tue, 15 Jul 2014 15:41:45 +0400 Subject: proxied requests hang when DNS response has wrong ident In-Reply-To: References: Message-ID: <20140715114145.GA12772@lo0.su>

On Tue, Jul 15, 2014 at 08:04:44PM +1000, Pramod Korathota wrote:
> We have recently discovered a very rare occurrence where requests
> through nginx will hang if the resolver sends a response with a
> mismatching ident. We are seeing this in production with 1.7.1, and I
> have been able to reproduce it with 1.7.3. The relevant parts of the
> config are:
>
>     resolver 10.65.255.4;
>
>     location / {
>         proxy_pass http://$host.internal$request_uri;
>     }

[In Russian in the original:] Thanks for the report. Does this patch help?

# HG changeset patch
# User Ruslan Ermilov
# Date 1405424486 -14400
#      Tue Jul 15 15:41:26 2014 +0400
# Node ID 8a16ec3871efad5990604a21c6bc00c0c9347446
# Parent  abd460ece11e9c85d4c0c4a8e6ac46cfb5fa62b5
Resolver: fixed resend on malformed responses.

DNS request resend on malformed responses was broken in 98876ce2a7fd.

Reported by Pramod Korathota.
diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c
--- a/src/core/ngx_resolver.c
+++ b/src/core/ngx_resolver.c
@@ -1467,7 +1467,6 @@ ngx_resolver_process_a(ngx_resolver_t *r
             goto failed;
         }

-        rn->naddrs6 = 0;
         qident = (rn->query6[0] << 8) + rn->query6[1];

         break;
@@ -1482,7 +1481,6 @@ ngx_resolver_process_a(ngx_resolver_t *r
             goto failed;
         }

-        rn->naddrs = 0;
         qident = (rn->query[0] << 8) + rn->query[1];
     }

@@ -1507,6 +1505,8 @@ ngx_resolver_process_a(ngx_resolver_t *r

         case NGX_RESOLVE_AAAA:

+            rn->naddrs6 = 0;
+
             if (rn->naddrs == (u_short) -1) {
                 goto next;
             }
@@ -1519,6 +1519,8 @@

         default: /* NGX_RESOLVE_A */

+            rn->naddrs = 0;
+
             if (rn->naddrs6 == (u_short) -1) {
                 goto next;
             }
@@ -1539,6 +1541,8 @@

         case NGX_RESOLVE_AAAA:

+            rn->naddrs6 = 0;
+
             if (rn->naddrs == (u_short) -1) {
                 rn->code = (u_char) code;
                 goto next;
             }
@@ -1548,6 +1552,8 @@

         default: /* NGX_RESOLVE_A */

+            rn->naddrs = 0;
+
             if (rn->naddrs6 == (u_short) -1) {
                 rn->code = (u_char) code;
                 goto next;
             }
@@ -1817,6 +1823,25 @@ ngx_resolver_process_a(ngx_resolver_t *r
         }
     }

+    switch (qtype) {
+
+#if (NGX_HAVE_INET6)
+    case NGX_RESOLVE_AAAA:
+
+        if (rn->naddrs6 == (u_short) -1) {
+            rn->naddrs6 = 0;
+        }
+
+        break;
+#endif
+
+    default: /* NGX_RESOLVE_A */
+
+        if (rn->naddrs == (u_short) -1) {
+            rn->naddrs = 0;
+        }
+    }
+
     if (rn->naddrs != (u_short) -1
 #if (NGX_HAVE_INET6)
         && rn->naddrs6 != (u_short) -1

From nginx-forum at nginx.us Tue Jul 15 11:46:49 2014 From: nginx-forum at nginx.us (cobain86) Date: Tue, 15 Jul 2014 07:46:49 -0400 Subject: change stub status layout Message-ID: <470d2acf95829113dd71949be1f6b2a3.NginxMailingListEnglish@forum.nginx.org>

Hi, is there any way to change the stub_status page layout? I need the values in [value] form (i.e. with the [] on the page).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251791,251791#msg-251791
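One possible answer-sketch for the [value] layout question: since nginx 1.3.14 the stub_status module also exposes its counters as variables, so a custom format can be produced with a plain "return" (this assumes nginx was built with ngx_http_stub_status_module; the location name is made up):

    location = /status_brackets {
        default_type text/plain;
        # $connections_* are provided by the stub_status module
        return 200 "[$connections_active] [$connections_reading] [$connections_writing] [$connections_waiting]\n";
    }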
From nginx-forum at nginx.us Tue Jul 15 11:52:49 2014 From: nginx-forum at nginx.us (mex) Date: Tue, 15 Jul 2014 07:52:49 -0400 Subject: Nginx + LibreSSL - a first test In-Reply-To: References: Message-ID: <0d380e3a164a9d66f16401d2865de09a.NginxMailingListEnglish@forum.nginx.org>

Piotr Sikora Wrote:
> ...only with versions older than nginx-1.7.0; you need a small patch
> (attached) in order to compile nginx-mainline against LibreSSL,
> because the LibreSSL developers decided that LibreSSL is OpenSSL-2.0.0...
> I didn't send this patch to nginx-devel@ yet, because I'm still trying
> to convince them that LibreSSL should present itself as OpenSSL-1.0.1,
> in which case no changes to nginx would be necessary.

Not just nginx, but maybe other software too that got used to that versioning scheme. I just checked opensslv.h, and it is different in 2.0.1 from 2.0.0:

    #define LIBRESSL_VERSION_NUMBER 0x20000000L
    #define OPENSSL_VERSION_NUMBER 0x20000000L

I don't know what happens when changing

    #define OPENSSL_VERSION_NUMBER 0x10002002L

The openssl binary compiles fine, but I cannot check with nginx-mainline right now; maybe later. Thanks for the patch!

regards,
mex

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251718,251793#msg-251793

From mdounin at mdounin.ru Tue Jul 15 12:38:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Jul 2014 16:38:04 +0400 Subject: Nginx + LibreSSL - a first test In-Reply-To: References: Message-ID: <20140715123804.GA1849@mdounin.ru>

Hello!

On Tue, Jul 15, 2014 at 02:17:02AM -0700, Piotr Sikora wrote:

> ...only with versions older than nginx-1.7.0; you need a small patch
> (attached) in order to compile nginx-mainline against LibreSSL,
> because the LibreSSL developers decided that LibreSSL is OpenSSL-2.0.0...
> [...]

BTW, this is what was done in the FreeBSD port of LibreSSL: http://svnweb.freebsd.org/ports/head/security/libressl/files/patch-include-openssl-opensslv.h?view=log

It looks like the proper way to go.

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru Tue Jul 15 12:45:04 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 15 Jul 2014 16:45:04 +0400 Subject: Is it possible to send html HEAD early (chunked)? In-Reply-To: References: <20140714125410.GV1849@mdounin.ru> Message-ID: <20140715124504.GB1849@mdounin.ru>

Hello!

On Mon, Jul 14, 2014 at 08:35:40PM +0200, Martin Grotzke wrote:

> > By default, nginx just sends what's already available. And for
> > SSI, it uses chunked encoding.
>
> I don't understand this. In my understanding, SSI (the virtual include
> directive) goes downstream (e.g. gets some data from a backend), so the
> backend defines how to respond to nginx. What does it mean that nginx
> uses chunked encoding?

The transfer encoding is something that happens on a hop-by-hop basis, and a backend can't define the transfer encoding used between nginx and the client. The transfer encoding is selected by nginx as appropriate: if Content-Length is known, it will be identity (or rather no transfer encoding at all); if it's not known (and the client uses HTTP/1.1), chunked will be used. In the case of SSI, the content length isn't known in advance, due to the SSI processing yet to happen, and hence chunked transfer encoding will be used.

> Does it matter whether the html head is pulled into the page via SSI or
> not?

It doesn't matter.

> Ok. Might this have a negative impact on my backend when there are slow
> clients? So that when a client consumes the response very slowly, my
> backend is kept "busy" (delivering the response as slowly as the client
> consumes it) and cannot just hand off the data/response to nginx?

Yes, switching off proxy buffering may have negative effects on some workloads, and it is not generally recommended.

--
Maxim Dounin
http://nginx.org/
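To make that trade-off concrete: buffering is a per-location switch, so only the paths that need early delivery have to pay for it (a sketch; the upstream name "app" is made up):

    location / {
        proxy_pass http://app;    # default: proxy_buffering on, nginx absorbs slow clients
    }

    location /stream/ {
        proxy_buffering off;      # each buffer is flushed to the client, but a slow
        proxy_pass http://app;    # client now occupies a backend connection for the whole response
    }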
From nginx-forum at nginx.us Tue Jul 15 13:15:51 2014 From: nginx-forum at nginx.us (devsathish) Date: Tue, 15 Jul 2014 09:15:51 -0400 Subject: Nginx multiple php sites Message-ID: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org>

Hi,

I've been searching for the last two days for how to set up multiple PHP sites using nginx. I couldn't find any documentation about it, hence writing here. I have two CodeIgniter (PHP framework) apps in two different folders, served from the same domain. For example:

    example.com      => /var/www/example/index.php
    example.com/blog => /var/www/example/blog/index.php

I made the first one work using the following code. Can someone help me modify this config to fit the second requirement? (The configurations I tried don't pass the query strings, hence pasting the working version.)

    server {
        listen 80;
        server_name example.com;

        location / {
            root /var/www/example/;
            try_files $uri $uri/ /index.php?$args;
            index index.php index.html index.htm;
        }

        location ~ \.php$ {
            root /var/www/example/;
            fastcgi_pass 127.0.0.1:9000;
            index index.php;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }

Thanks in advance.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251802,251802#msg-251802

From anoopalias01 at gmail.com Tue Jul 15 14:28:12 2014 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 15 Jul 2014 19:58:12 +0530 Subject: Nginx multiple php sites In-Reply-To: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org> References: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Hi,

Simply changing server_name and root should work. If you are using a different port for a different FastCGI process pool, you have to change that too.

--
Anoop P Alias
GNUSYS

From anoopalias01 at gmail.com Tue Jul 15 14:31:03 2014 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 15 Jul 2014 20:01:03 +0530 Subject: Nginx multiple php sites In-Reply-To: References: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org> Message-ID:

Perhaps this didn't work for you because you have

    root /var/www/example/;

in the php location. Move the root in "location /" up to the server {} level and delete the root from the php location. This is also the best approach as per the nginx docs.

--
Anoop P Alias
GNUSYS
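A sketch of that advice applied to the original two-app question (assuming the blog really lives in /var/www/example/blog and both apps share the FastCGI pool on 127.0.0.1:9000):

    server {
        listen 80;
        server_name example.com;
        root /var/www/example;              # root once, at server level
        index index.php index.html index.htm;

        location / {
            try_files $uri $uri/ /index.php?$args;
        }

        location /blog/ {
            # the second CodeIgniter instance gets its own front controller
            try_files $uri $uri/ /blog/index.php?$args;
        }

        location ~ \.php$ {
            try_files $uri =404;            # do not pass non-existent scripts to PHP
            fastcgi_pass 127.0.0.1:9000;
            fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
            include fastcgi_params;
        }
    }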
From reallfqq-nginx at yahoo.fr Tue Jul 15 16:15:22 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 15 Jul 2014 18:15:22 +0200 Subject: Nginx multiple php sites In-Reply-To: References: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org> Message-ID:

I also think CodeIgniter needs the PATH_INFO environment variable set.

Moreover, there is a potential security breach in your current configuration. The recommended way (when using the same machine for front-end and back-end services) is to use try_files to check for the existence of the invoked PHP script. However, since using try_files with fastcgi_split_path_info wreaks havoc (that's a *won't fix*, not a bug... oO), I would recommend patching your PHP location like the following:

    location ~ \.php$ {
        # root /var/www/example/;  # You should follow Anoop's advice
        index index.php;
        fastcgi_pass 127.0.0.1:9000;

        fastcgi_split_path_info ^(.+\.php)(/.*?)$;
        # $fastcgi_script_name is a system path to a file,
        # while $uri is... an URI which might not always correspond to a file
        try_files $fastcgi_script_name =404;  # checking PHP script existence

        # a trick to get past the bug/'feature' of using try_files with fastcgi_split_path_info
        set $path_info $fastcgi_path_info;
        fastcgi_param PATH_INFO $path_info;

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

Hope I helped,
---
B. R.

From nginx-forum at nginx.us Tue Jul 15 17:37:58 2014 From: nginx-forum at nginx.us (ensing) Date: Tue, 15 Jul 2014 13:37:58 -0400 Subject: CORS headers not being set for a 401 response from upstream. In-Reply-To: <20140610113343.GS1849@mdounin.ru> References: <20140610113343.GS1849@mdounin.ru> Message-ID: <613a44505495421df0ac7dc7a023e059.NginxMailingListEnglish@forum.nginx.org>

Thanks, Maxim, for the documentation link.

I am running into a similar issue, but for a 504 gateway timeout. I'm doing a long poll, but it is a cross-domain long-poll GET request, and the client implementation is trying to use CORS. It all works fine when the GET request returns something. But when the server times out (HTTP 504), there is no CORS header information on the reply, and the client code treats it as: 'No 'Access-Control-Allow-Origin' header is present on the requested resource'.

Just extending keepalive_timeout indefinitely is also not a good idea.

So what is the recommended way to handle a 504? It seems I don't get it in the client-side XMLHttpRequest; the exception occurs before that.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250740,251830#msg-251830
oO), I would recommend patching you PHP location like the following: > > location ~^ \.php$ { > # root /var/www/example/; # You should follow Anoop piece of advice > index index.php; > fastcgi_pass 127.0.0.1:9000; > > fastcgi_split_path_info ^(.+\.php)(/.*?)$; > # $fastcgi_script_name is a system path to a file, > # while $uri is... an URI which might not always correspond to a file > try_files $fastcgi_script_name =404; # Checking PHP script existence > > # A trick to go beyond the bug/'feature' using try_files with fastcgi_split_pathinfo > set $path_info $fastcgi_path_info > fastcgi_param PATH_INFO $path_info; > > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > include fastcgi_params; > } > > > > Hope I helped, > > > --- > B. R. > > > On Tue, Jul 15, 2014 at 4:31 PM, Anoop Alias wrote: > Perhaps this didnt work for you because you have? root ?/var/www/example/; in the php location . Move up the root in location / to the server {} ?level and delete the root from php location . This is also the best approach as per nginX docs -- Anoop P Alias? GNUSYS _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > From piotr at cloudflare.com Tue Jul 15 18:06:59 2014 From: piotr at cloudflare.com (Piotr Sikora) Date: Tue, 15 Jul 2014 11:06:59 -0700 Subject: Nginx + LibreSSL - a first test In-Reply-To: <0d380e3a164a9d66f16401d2865de09a.NginxMailingListEnglish@forum.nginx.org> References: <0d380e3a164a9d66f16401d2865de09a.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hey, > just Checked opensslv.h, and to the different in 2.0.1 from 2.0.0: > > # Define LIBRESSL_VERSION_NUMBER 0x20000000L > # Define OPENSSL_VERSION_NUMBER 0x20000000L They've added this as a way to differentiate between OpenSSL and LibreSSL. > i dont know whens Changing What Happens > # Define OPENSSL_VERSION_NUMBER 0x10002002L It should be the version they forked from (i.e. 0x1000107fL). Multiple people have complained about the OPENSSL_VERSION_NUMBER change, so hopefully they'll change it back... If not, then we can always apply the patch I provided. Best regards, Piotr Sikora From reallfqq-nginx at yahoo.fr Tue Jul 15 19:41:44 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 15 Jul 2014 21:41:44 +0200 Subject: Nginx multiple php sites In-Reply-To: <1405446725.770754595.ueb4zv35@frv34.fwdcdn.com> References: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org> <1405446725.770754595.ueb4zv35@frv34.fwdcdn.com> Message-ID: If you use try_files with fastcgi_split_path_info, do not try to set PATH_INFO with $fastcgi_path_info directly as it will be empty. If PATH_INFO is not set (as it might not be required like you point it out) and stick with the $fastcgi_script_name of the split directive, then the PATH_INFO 2-lines trick is indeed useless but the rest is correct and recommended. --- *B. R.* On Tue, Jul 15, 2014 at 7:58 PM, wishmaster wrote: > > > > --- Original message --- > From: "B.R." > Date: 15 July 2014, 19:16:19 > > > > > > > > > I also think CodeIgniter needs the PATH_INFO environment variable set. > > This is not true, as you can choose of using PATH_INFO, QUERY_STRING, > REQUEST_URI and so on, so this tricks with path_info is excessive. 
> Just use try_files and fastcgi_split_path_info > > > > > > > Moreover, there is a potential security breach in your current > configuration. > > > > The recommended way (when using the same machine for front-end and > back-end services) it to use try_files to check for the existence of the > invoked PHP script. However, since using try_files with > fastcgi_split_pathinfo wreaks havoc (that *won't fix*, not a bug... oO), I > would recommend patching you PHP location like the following: > > > > location ~^ \.php$ { > > # root /var/www/example/; # You should follow Anoop piece of advice > > index index.php; > > fastcgi_pass 127.0.0.1:9000; > > > > fastcgi_split_path_info ^(.+\.php)(/.*?)$; > > # $fastcgi_script_name is a system path to a file, > > # while $uri is... an URI which might not always correspond to a file > > try_files $fastcgi_script_name =404; # Checking PHP script existence > > > > # A trick to go beyond the bug/'feature' using try_files with > fastcgi_split_pathinfo > > set $path_info $fastcgi_path_info > > fastcgi_param PATH_INFO $path_info; > > > > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > > include fastcgi_params; > > } > > > > > > > > Hope I helped, > > > > > > --- > > B. R. > > > > > > On Tue, Jul 15, 2014 at 4:31 PM, Anoop Alias wrote: > > > Perhaps this didnt work for you because you have > > > root /var/www/example/; > > > > in the php location . Move up the root in location / to the server {} > level and delete the root from php location . This is also the best > approach as per nginX docs > > > > > > > -- > Anoop P Alias > GNUSYS > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Jul 15 22:21:13 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 16 Jul 2014 02:21:13 +0400 Subject: CORS headers not being set for a 401 response from upstream. In-Reply-To: <613a44505495421df0ac7dc7a023e059.NginxMailingListEnglish@forum.nginx.org> References: <20140610113343.GS1849@mdounin.ru> <613a44505495421df0ac7dc7a023e059.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140715222113.GF1849@mdounin.ru> Hello! On Tue, Jul 15, 2014 at 01:37:58PM -0400, ensing wrote: > Thanks Maxim for the documentation link. > > I am running into a similar issue. But it is for a 504 server time out. > > I'm doing a long-poll, but it is a cross domain long poll GET request. The > client implementation is trying to use CORS. > It all works fine with when the GET requests returns something. > But when the server times out (HTTP 504) there is no CORS header information > on the reply and the client code treats it as: > 'No 'Access-Control-Allow-Origin' header is present on the requested > resource' > > Just extending the keepalive_timeout indefinitely is also not a good idea. > > So what is the recommended way to handle a 504. It seems I don't get this in > the client side XmlHttpRequest. The exceptions occurs before. The message in question isn't an exception. Rather, it's an error message in your javascript console. 
All other code works as intended - with the exception that it can't access the response returned. From the client code's point of view, this is mostly identical to a network connectivity problem. And you have to handle such problems anyway (and likely in the same way).

In javascript, the code should test the "status" property of the XMLHttpRequest object to find out if the request was successful or not, see here:
http://www.w3.org/TR/XMLHttpRequest/#the-status-attribute

--
Maxim Dounin
http://nginx.org/

From pkorathota at atlassian.com Wed Jul 16 02:01:07 2014
From: pkorathota at atlassian.com (Pramod Korathota)
Date: Wed, 16 Jul 2014 12:01:07 +1000
Subject: proxied requests hang when DNS response has wrong ident
In-Reply-To: <20140715114145.GA12772@lo0.su>
References: <20140715114145.GA12772@lo0.su>
Message-ID: 

On 15 July 2014 21:41, Ruslan Ermilov wrote:

> diff --git a/src/core/ngx_resolver.c b/src/core/ngx_resolver.c
> --- a/src/core/ngx_resolver.c
> +++ b/src/core/ngx_resolver.c

Thanks for the quick response and patch, Ruslan. I have tested a build incorporating this patch, and it behaves as expected, the resolver retrying rather than blocking behind the first request. I will get this build out to our production environment this week. Will report back if there are any issues.

Thanks again!

Pramod

From nginx-forum at nginx.us Wed Jul 16 06:14:54 2014
From: nginx-forum at nginx.us (quoter)
Date: Wed, 16 Jul 2014 02:14:54 -0400
Subject: if-none-match with proxy_cache : properly set headers
In-Reply-To: <20130530112144.GR72282@mdounin.ru>
References: <20130530112144.GR72282@mdounin.ru>
Message-ID: <886dd7718946e491e2038470e78389be.NginxMailingListEnglish@forum.nginx.org>

Maxim Dounin Wrote:
-------------------------------------------------------
> Normally you shouldn't cache 304 responses from a backend, but rather cache 200 responses from a backend and let nginx return 304 on its own. This is how it works by default.
>
> Do you have problems with the default approach?
>
> --
> Maxim Dounin
> http://nginx.org/en/donation.html

Yes, I do have some problems with this approach. In our nginx config we also use the option "fastcgi_cache_min_uses 5", so the response body appears in the cache only after 5 user requests. But for our production purposes it is extremely important to return 304 if the client request has a valid If-None-Match.

---
Dmitry Sukhov
quoter at yandex-team.ru

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,239689,251853#msg-251853

From mdounin at mdounin.ru Wed Jul 16 13:39:28 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 16 Jul 2014 17:39:28 +0400
Subject: if-none-match with proxy_cache : properly set headers
In-Reply-To: <886dd7718946e491e2038470e78389be.NginxMailingListEnglish@forum.nginx.org>
References: <20130530112144.GR72282@mdounin.ru> <886dd7718946e491e2038470e78389be.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140716133928.GM1849@mdounin.ru>

Hello!

On Wed, Jul 16, 2014 at 02:14:54AM -0400, quoter wrote:

> Maxim Dounin Wrote:
> -------------------------------------------------------
> > Normally you shouldn't cache 304 responses from a backend, but rather cache 200 responses from a backend and let nginx return 304 on its own. This is how it works by default.
> >
> > Do you have problems with the default approach?
> Yes, I do have some problems with this approach. In our nginx config we also use the option "fastcgi_cache_min_uses 5", so the response body appears in the cache only after 5 user requests. But for our production purposes it is extremely important to return 304 if the client request has a valid If-None-Match.

Yes, ..._cache_min_uses can lead to suboptimal behaviour in such a case. It is planned to improve things to just pass If-* headers if caching was disabled due to ..._cache_min_uses. No ETA though.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Thu Jul 17 09:34:19 2014
From: nginx-forum at nginx.us (vocativus)
Date: Thu, 17 Jul 2014 05:34:19 -0400
Subject: Limit connections to endpoint
Message-ID: 

Hello,

I have a situation: opening an endpoint under the location /api/info uses a lot of resources. If ~20 people open it, the service goes down. For several months it has been impossible to improve 'info' so that it does not kill the service, so I have to "repair" it in another way.

It would be perfect if nginx could limit connections to this specific endpoint to ~10 requests/second, from all IPs combined. Is it possible?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251870,251870#msg-251870

From vbart at nginx.com Thu Jul 17 09:44:47 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 17 Jul 2014 13:44:47 +0400
Subject: Limit connections to endpoint
In-Reply-To: 
References: 
Message-ID: <4100896.xfUUiHVcim@vbart-workstation>

On Thursday 17 July 2014 05:34:19 vocativus wrote:
> Hello,
>
> I have a situation: opening an endpoint under the location /api/info uses a lot of resources. If ~20 people open it, the service goes down. For several months it has been impossible to improve 'info' so that it does not kill the service, so I have to "repair" it in another way.
>
> It would be perfect if nginx could limit connections to this specific endpoint to ~10 requests/second, from all IPs combined. Is it possible?

http://nginx.org/en/docs/http/ngx_http_limit_req_module.html

wbr, Valentin V. Bartenev

From nginx-forum at nginx.us Thu Jul 17 10:57:51 2014
From: nginx-forum at nginx.us (vocativus)
Date: Thu, 17 Jul 2014 06:57:51 -0400
Subject: Limit connections to endpoint
In-Reply-To: <4100896.xfUUiHVcim@vbart-workstation>
References: <4100896.xfUUiHVcim@vbart-workstation>
Message-ID: <4a68a5b69fe7303223135003f268b705.NginxMailingListEnglish@forum.nginx.org>

OK, I found that yesterday, and as the variable in limit_req_zone I should use some constant, e.g. set $con 10;

and it looks like:

set $con 10;
limit_req_zone $con zone=one:15m rate=10r/s;

and it should work as I want? Asking because I'm testing it now, and it doesn't work properly.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251870,251875#msg-251875

From vbart at nginx.com Thu Jul 17 11:06:50 2014
From: vbart at nginx.com (Valentin V. Bartenev)
Date: Thu, 17 Jul 2014 15:06:50 +0400
Subject: Limit connections to endpoint
In-Reply-To: <4a68a5b69fe7303223135003f268b705.NginxMailingListEnglish@forum.nginx.org>
References: <4100896.xfUUiHVcim@vbart-workstation> <4a68a5b69fe7303223135003f268b705.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <1784455.LfORLG3fCJ@vbart-workstation>

On Thursday 17 July 2014 06:57:51 vocativus wrote:
> OK, I found that yesterday, and as the variable in limit_req_zone I should use some constant, e.g. set $con 10;
>
> and it looks like:
>
> set $con 10;
> limit_req_zone $con zone=one:15m rate=10r/s;

You don't need "set" to create a constant. There are already a few constants always available, like $nginx_version.
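For example, a rough, untested sketch using such an existing constant variable as the shared key (the zone name, size and burst value here are arbitrary):

limit_req_zone $nginx_version zone=apiinfo:1m rate=10r/s;

server {
    location /api/info {
        limit_req zone=apiinfo burst=5;
    }
}

Since the key is the same for every client, the 10r/s limit applies to all IPs combined.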
http://nginx.org/en/docs/http/ngx_http_core_module.html#variables

> and it should work as I want? Asking because I'm testing it now, and it doesn't work properly.

Could you elaborate on what the problem is, and provide your configuration?

wbr, Valentin V. Bartenev

From nginx-forum at nginx.us Thu Jul 17 12:15:11 2014
From: nginx-forum at nginx.us (email_ardi)
Date: Thu, 17 Jul 2014 08:15:11 -0400
Subject: Nginx can not upload image to cloudinary (hosted in AWS)
Message-ID: 

Hi,

I'm new to Nginx. After reading a few articles, I think Nginx is faster than apache (if I'm not mistaken). Currently we're developing a project on apache, but when we set up a "user acceptance test", our clients complained that our site was running slow. So we migrated to nginx, and everything seems to be OK, but when I want to upload a file (using cloudinary), it never succeeds. Previously, when we were on apache, this feature was OK. We host our site in AWS.

Here is the log file; any thoughts would be very helpful:

2014/07/17 17:00:39 [error] 25448#0: *290 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught exception 'RequestCore_Exception' with message 'The stream size for the streaming upload cannot be determined.' in /home/ubuntu/www/mysite/core/model/aws/lib/requestcore/requestcore.class.php:704
Stack trace:
#0 /home/ubuntu/www/mysite/core/model/aws/lib/requestcore/requestcore.class.php(819): RequestCore->prep_request()
#1 /home/ubuntu/www/mysite/core/model/aws/services/s3.class.php(723): RequestCore->send_request()
#2 /home/ubuntu/www/mysite/core/model/aws/services/s3.class.php(1230): AmazonS3->authenticate('yap', Array)
#3 /home/ubuntu/www/mysite/core/model/aws/modaws.class.php(115): AmazonS3->create_object('yap', 'reviews/ee5309f...', Array)
#4 /home/ubuntu/www/mysite/web_assets/includes/fineuploader/upload.php(85): modAws->uploadSingle('/home/ubuntu/ww...', 'reviews/')
#5 {main}
thrown in /home/ubuntu/www/mysite/core/model/aws/lib/requestcore/requestcore.class.php on line 704" while reading response header from upstream, client: 118.137.4.63, server: mysite.com, request: "POST /web_assets/includes/fineuploader/upload.php?qquuid=ccb8962f-347b-4d33-8c37-865f29983981&qqtotalfilesize=6562&qqfile=not+jack+sparrow.jpg HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "www.mysite.com", referrer: "

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251881,251881#msg-251881

From nginx-forum at nginx.us Fri Jul 18 13:40:43 2014
From: nginx-forum at nginx.us (martyparish)
Date: Fri, 18 Jul 2014 09:40:43 -0400
Subject: Multiple sites under same domain with one app codebase
Message-ID: <319a18ac449dd61142d00532ff464491.NginxMailingListEnglish@forum.nginx.org>

Hello nginx gurus,

First post, but I have been reading this board for over a year!

My scenario: I have many separate "sites" to run and they all use the same back-end application (opencart). They will also be under the same domain, in folders. I don't want to duplicate the application code in each subdirectory. I've got it working with a few sites and it works well. However, I will have ~200 sites soon.

Example sites:
domain.com/site1/
domain.com/site2/
etc...

My main code base is located at the root of domain.com. I am currently using location rewrite blocks for each site. That will get ugly when I have 200 sites, and a pain when I want to add new sites. Can anyone help me with a "dynamic" configuration so I don't have to edit conf files each time?
===========================
server {
    listen 80;
    server_name domain.com;
    root /etc/nginx/html/development;
    error_log /var/log/nginx/dev.error.log;
    index index.php;
    rewrite ^([^.]*[^/])$ $1/ permanent; # trailing slash

    # HERE IS THE CODE I WANT TO FIX
    location ^~ /site1/ {
        rewrite ^/site1/(.*) /$1;
    }
    location ^~ /site2/ {
        rewrite ^/site2/(.*) /$1;
    }
    location ^~ /site3/ {
        rewrite ^/site3/(.*) /$1;
    }
}
=============================

Could I use the map module, or possibly a single location regex? Thanks!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251908#msg-251908

From nginx-forum at nginx.us Fri Jul 18 13:53:34 2014
From: nginx-forum at nginx.us (martyparish)
Date: Fri, 18 Jul 2014 09:53:34 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <319a18ac449dd61142d00532ff464491.NginxMailingListEnglish@forum.nginx.org>
References: <319a18ac449dd61142d00532ff464491.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

I have been experimenting with this:

location ^~ /[a-zA-Z]+/ {
    rewrite ^/(.*)/(.*) /$2;
}

Of course I am getting 404 errors...

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251909#msg-251909

From francis at daoine.org Fri Jul 18 14:34:53 2014
From: francis at daoine.org (Francis Daly)
Date: Fri, 18 Jul 2014 15:34:53 +0100
Subject: change stub status layout
In-Reply-To: <470d2acf95829113dd71949be1f6b2a3.NginxMailingListEnglish@forum.nginx.org>
References: <470d2acf95829113dd71949be1f6b2a3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140718143453.GI16942@daoine.org>

On Tue, Jul 15, 2014 at 07:46:49AM -0400, cobain86 wrote:

Hi there,

> hi, is there any way to change the stub status website layout?
>
> i need the values in [value] form (so with the [] on the website.)

I don't believe there is a configuration way to change the output of that module.

You could make a private patch to the module to make the output be exactly what you want; or you could make a public patch to the module to allow the output to be configurable (to include at least "what it does now" and "what you want"), and see if that is interesting to the nginx developers; or you could try filtering the module output to make it match what you want. I guess that one of the embedded languages might allow you to do that.

Good luck with it,

f
--
Francis Daly
francis at daoine.org

From francis at daoine.org Fri Jul 18 14:39:00 2014
From: francis at daoine.org (Francis Daly)
Date: Fri, 18 Jul 2014 15:39:00 +0100
Subject: Nginx multiple php sites
In-Reply-To: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org>
References: <9fcf25eeec03b936450145911c91daa8.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140718143900.GJ16942@daoine.org>

On Tue, Jul 15, 2014 at 09:15:51AM -0400, devsathish wrote:

Hi there,

> example.com => /var/www/example/index.php
> example.com/blog => /var/www/example/blog/index.php
>
> I made the first one work using the following code. Can someone help me modify this config to fit the second requirement?!

What request do you make, that does not return the response that you expect?

> location / {
>     root /var/www/example/;
>     try_files $uri $uri/ /index.php?$args;
>     index index.php index.html index.htm;
> }

I suspect that you will want some requests to "fall back" to /blog/index.php instead of /index.php. Possibly using a "location /blog/ {}" would help with that.

If that doesn't point you in the right direction, can you be very specific in one example of what you do / what you get / what you want to get?
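For what it's worth, an untested sketch of the "location /blog/ {}" idea might look like this (the exact fallback is an assumption on my part):

location /blog/ {
    root /var/www/example/;
    index index.php;
    try_files $uri $uri/ /blog/index.php?$args;
}

so that requests under /blog/ fall back to /blog/index.php rather than /index.php.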
f
--
Francis Daly
francis at daoine.org

From francis at daoine.org Fri Jul 18 14:44:16 2014
From: francis at daoine.org (Francis Daly)
Date: Fri, 18 Jul 2014 15:44:16 +0100
Subject: Nginx can not upload image to cloudinary (hosted in AWS)
In-Reply-To: 
References: 
Message-ID: <20140718144416.GK16942@daoine.org>

On Thu, Jul 17, 2014 at 08:15:11AM -0400, email_ardi wrote:

Hi there,

There's not much to do with nginx in this question.

You probably want to compare the environment that your php script is executed in, in the apache system ($_REQUEST and $_SERVER are usually interesting), and in the system that involves nginx (a php fastcgi server of some kind?).

> 2014/07/17 17:00:39 [error] 25448#0: *290 FastCGI sent in stderr: "PHP message: PHP Fatal error: Uncaught exception 'RequestCore_Exception' with message 'The stream size for the streaming upload cannot be determined.' in /home/ubuntu/www/mysite/core/model/aws/lib/requestcore/requestcore.class.php:704

How does that php file try to determine the stream size? How does that compare to things available in your apache system? How does that compare to things missing from your non-apache system?

After you determine that, then it might be possible to configure nginx to include the relevant missing thing.

Good luck with it,

f
--
Francis Daly
francis at daoine.org

From francis at daoine.org Fri Jul 18 14:51:40 2014
From: francis at daoine.org (Francis Daly)
Date: Fri, 18 Jul 2014 15:51:40 +0100
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <319a18ac449dd61142d00532ff464491.NginxMailingListEnglish@forum.nginx.org>
References: <319a18ac449dd61142d00532ff464491.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140718145140.GL16942@daoine.org>

On Fri, Jul 18, 2014 at 09:40:43AM -0400, martyparish wrote:

Hi there,

> I have many separate "sites" to run and they all use the same back-end application (opencart). They will also be under the same domain, in folders. I don't want to duplicate the application code in each subdirectory.

It is not clear to me exactly what you have, and what you have working, and what you have not working. Could you give some specific examples?

So: If I request /site1/img/file.png, what file on the filesystem should nginx send me; or what file on the filesystem should nginx invite a fastcgi server to process? /usr/local/nginx/html/site1/img/file.png; or /usr/local/nginx/html/img/file.png, or something else?

If I request /site2/dir/file.php -- same question?

And in each case: if that file does not exist, what should happen? 404 or other failure, or fall back to a specific other file?

f
--
Francis Daly
francis at daoine.org

From nginx-forum at nginx.us Fri Jul 18 15:17:22 2014
From: nginx-forum at nginx.us (martyparish)
Date: Fri, 18 Jul 2014 11:17:22 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <20140718145140.GL16942@daoine.org>
References: <20140718145140.GL16942@daoine.org>
Message-ID: <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org>

Thanks Francis! Sorry for not being clear enough.

My code base is at: /etc/nginx/html/production/

The "site1, site2, etc..." folders do not exist at all. They are only seen and used in the public URLs.

The original config I posted in the first post works, I just don't like it because there will be hundreds of "sitesX". It is using a location block with a rewrite for each site.
So, if I request domain.com/site1/index.php, the file that is served is /etc/nginx/html/production/index.php (not /etc/nginx/html/production/site1/index.php). Same for ALL sites. PHP processes the /site1/ path and loads the configuration for that particular site.

So, really nothing is "broken", it is just going to be extremely ugly and possibly slow to process 200+ location blocks. I'd like to use a location regex that could handle ALL sites in one location block if possible. Or, I was thinking "map" may work, but I'm not sure how I could configure it.

Thanks again for the help and let me know if this makes it more clear.

Marty

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251916#msg-251916

From nginx-forum at nginx.us Fri Jul 18 15:50:23 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 18 Jul 2014 11:50:23 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org>
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Maybe not exactly what you're looking for, but it should give you enough to rewrite this for what you want with map.

https://gist.github.com/anonymous/68caceb6c935e7120a60

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251917#msg-251917

From nginx-forum at nginx.us Fri Jul 18 16:08:42 2014
From: nginx-forum at nginx.us (martyparish)
Date: Fri, 18 Jul 2014 12:08:42 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: 
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org>

Thanks itpp2012, I was actually just experimenting with that approach! I was beginning to have a bit of success too! Not quite there yet though.

map $uri $site_folder {
    #~^/(?P<folder>[a-zA-Z]+)/.* $folder;
    ~^/(?P<folder>[a-z]+)/.* $folder;
}

I saw an example very similar (or the same) to what you gave me on a WordPress multi-site page. I think my regex is a bit messed up, but I will keep on going down this route. Looks very promising!!!

** I think my pcre is old, as I have to use the ?P<name> syntax. At one point, I was able to capture the folder name into the variable but ONLY if there were no characters after the slash (domain.com/site1/)

Thanks again! (to everyone)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251919#msg-251919

From nginx-forum at nginx.us Fri Jul 18 16:23:21 2014
From: nginx-forum at nginx.us (martyparish)
Date: Fri, 18 Jul 2014 12:23:21 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org>
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org>

Okay, I finally got the mapped variable figured out!

map $uri $site_folder {
    ~^/(?P<folder>[a-zA-Z]+)/.* $folder;
}

Now, I just need to figure out how to use it in the location block??? This is what I am trying:

location ^~ /$site_folder/ {
    #rewrite ^/$site_folder/(.*) /$1;
}

Is it okay to use the named variable in the location like that? It's not picking it up...
If I use this:

location ^~ / {
    return 301 $scheme://www.google.com?q=$site_folder;
}

It DOES redirect to google with the proper variable in there.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251921#msg-251921

From nginx-forum at nginx.us Fri Jul 18 17:12:44 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 18 Jul 2014 13:12:44 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org>
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org> <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org>

Enable debugging and check the logs, or add Lua and dump variables to see what value is doing what (this is how I debug a flow).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251923#msg-251923

From nginx-forum at nginx.us Fri Jul 18 17:21:47 2014
From: nginx-forum at nginx.us (martyparish)
Date: Fri, 18 Jul 2014 13:21:47 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org>
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org> <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org> <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5dfc7d974a1fcda259b1030f71e929b9.NginxMailingListEnglish@forum.nginx.org>

I will do that. I guess my main question is: can I use a variable in the location URI?

location ^~ /$site_folder/ {

It appears that the answer is no. I have confirmed the variable is set from the map block. However, when I use it in a location like the above, it gets skipped over.

I appreciate the time you have spent with me!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251925#msg-251925

From nginx-forum at nginx.us Fri Jul 18 18:21:12 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Fri, 18 Jul 2014 14:21:12 -0400
Subject: Multiple sites under same domain with one app codebase
In-Reply-To: <5dfc7d974a1fcda259b1030f71e929b9.NginxMailingListEnglish@forum.nginx.org>
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org> <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org> <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org> <5dfc7d974a1fcda259b1030f71e929b9.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <8c67d326a83d0eebdedcab18c952ffbc.NginxMailingListEnglish@forum.nginx.org>

Maybe you should keep the location fixed and generic, and then use root to change wherever it needs to point; don't forget to tell php where stuff is if you change root. For example, something like the sketch below.
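A rough, untested sketch of that idea (the per-site tree is hypothetical here, and $site_folder is the mapped variable from earlier in the thread):

location / {
    # the location stays fixed and generic; only root varies
    root /etc/nginx/html/sites/$site_folder;
    try_files $uri $uri/ /index.php?$args;
}

location ~ \.php$ {
    root /etc/nginx/html/sites/$site_folder;
    fastcgi_pass 127.0.0.1:9000;
    include fastcgi_params;
    # tell php where stuff is, following the changed root
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}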
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251928#msg-251928

From chigga101 at gmail.com Fri Jul 18 22:32:38 2014
From: chigga101 at gmail.com (Matthew Ngaha)
Date: Fri, 18 Jul 2014 23:32:38 +0100
Subject: nginx reload, stop error
Message-ID: 

Hey, when I run './nginx -s reload' or './nginx -s stop' I get this:

nginx: [error] open() "/usr/local/nginx-1.4.3/logs/nginx.pid" failed (2: No such file or directory)

Any ideas why it's trying to open this file that doesn't exist?

From rpaprocki at fearnothingproductions.net Fri Jul 18 22:34:40 2014
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Fri, 18 Jul 2014 15:34:40 -0700
Subject: nginx reload, stop error
In-Reply-To: 
References: 
Message-ID: <53C9A100.2030408@fearnothingproductions.net>

Where have you configured your pid file? Are you using a custom build, or a distributed package?

On 07/18/2014 03:32 PM, Matthew Ngaha wrote:
> Hey, when I run './nginx -s reload' or './nginx -s stop' I get this:
>
> nginx: [error] open() "/usr/local/nginx-1.4.3/logs/nginx.pid" failed (2: No such file or directory)
>
> Any ideas why it's trying to open this file that doesn't exist?

From shahzaib.cb at gmail.com Sat Jul 19 09:23:49 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Sat, 19 Jul 2014 14:23:49 +0500
Subject: Proxy_pass Directive !!
Message-ID: 

I am confused about the proxy_pass directive. Suppose I need to serve an mp4 file from the Origin server, using the proxy_pass directive on the Edge server; whose resources (I/O, bandwidth, RAM) will be used? Edge or Origin?

Following is the topology to serve the mp4 file:

client (requests mp4 file) --> edge (ip is 1.2.3.4, so don't serve it locally and forward it to origin) --> origin (serves the requested mp4 file).

Now, when origin serves that mp4 file, will the mp4 file first go to the edge server, and then the client is served via the edge proxy? Also, mp4 is a big file, and I am curious to know whose HDD I/O will be used to serve this file, Edge/Origin?

Thanks
Shahzaib

From francis at daoine.org Sat Jul 19 09:50:36 2014
From: francis at daoine.org (Francis Daly)
Date: Sat, 19 Jul 2014 10:50:36 +0100
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: 
Message-ID: <20140719095036.GM16942@daoine.org>

On Sat, Jul 19, 2014 at 02:23:49PM +0500, shahzaib shahzaib wrote:

Hi there,

> I am confused about the proxy_pass directive. Suppose I need to serve an mp4 file from the Origin server, using the proxy_pass directive on the Edge server; whose resources (I/O, bandwidth, RAM) will be used? Edge or Origin?

Everyone's.

> Following is the topology to serve the mp4 file:
>
> client (requests mp4 file) --> edge (ip is 1.2.3.4, so don't serve it locally and forward it to origin) --> origin (serves the requested mp4 file).

You have "client" (= web browser), which talks to "nginx" (= web server), which is configured to proxy_pass some requests to "upstream" (= other web server).

client asks nginx for content, nginx sends content to client. So the full file goes from nginx to client.

If nginx needs to proxy_pass, then before nginx sends content to client, nginx asks upstream for content, and upstream sends content to nginx. So the full file also goes from upstream to nginx.

upstream never talks to client directly, in this topology.
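In config terms, a minimal sketch of that topology on the edge might be (hostnames invented for illustration):

server {
    listen 80;
    server_name edge.example.com;

    location / {
        # the full response flows origin -> edge -> client
        proxy_pass http://origin.example.com;
    }
}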
f
--
Francis Daly
francis at daoine.org

From shahzaib.cb at gmail.com Sat Jul 19 09:56:05 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Sat, 19 Jul 2014 14:56:05 +0500
Subject: Proxy_pass Directive !!
In-Reply-To: <20140719095036.GM16942@daoine.org>
References: <20140719095036.GM16942@daoine.org>
Message-ID: 

>> If nginx needs to proxy_pass, then before nginx sends content to client, nginx asks upstream for content, and upstream sends content to nginx. So the full file also goes from upstream to nginx.

Meaning, both servers' I/O will be used if the file requested from upstream is 720p.mp4?

From jiakai1000 at gmail.com Sat Jul 19 09:56:43 2014
From: jiakai1000 at gmail.com (贾凯)
Date: Sat, 19 Jul 2014 17:56:43 +0800
Subject: Why ngx_trylock do extra judgement?
Message-ID: <53CA40DB.1080204@gmail.com>

Hi there,

function ngx_shmtx_lock:

if (*mtx->lock == 0 && ngx_atomic_cmp_set(mtx->lock, 0, ngx_pid))

and ngx_trylock (ngx_atomic.h):

(*(lock) == 0 && ngx_atomic_cmp_set(lock, 0, 1))

I think ngx_atomic_cmp_set is enough; why does Nginx do the extra check ahead of it?

From nginx-forum at nginx.us Sat Jul 19 14:27:32 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sat, 19 Jul 2014 10:27:32 -0400
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: 
Message-ID: <4017e48ea87e491f0bfe35705869a443.NginxMailingListEnglish@forum.nginx.org>

shahzaib1232 Wrote:
-------------------------------------------------------
> Meaning, both servers' I/O will be used if the file requested from upstream is 720p.mp4?

Yes. What you're looking for is a way to client-rewrite the address the source is coming from.

1.2.3.4 -> request xx.mp4 -> edge 5.6.7.8 (I don't have that file) -> send client address of origin and tell client to re-initiate file request with origin address.

How? don't ask me :)

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251933,251937#msg-251937

From nginx-forum at nginx.us Sat Jul 19 14:32:46 2014
From: nginx-forum at nginx.us (martyparish)
Date: Sat, 19 Jul 2014 10:32:46 -0400
Subject: [SOLVED] Re: Multiple sites under same domain with one app codebase
In-Reply-To: <8c67d326a83d0eebdedcab18c952ffbc.NginxMailingListEnglish@forum.nginx.org>
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org> <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org> <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org> <5dfc7d974a1fcda259b1030f71e929b9.NginxMailingListEnglish@forum.nginx.org> <8c67d326a83d0eebdedcab18c952ffbc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

For anyone who comes across this scenario, here is how I got it working:

# get the first folder name into a variable ($site_folder)
map $uri $site_folder {
    ~^/(?P<folder>[a-zA-Z]+)/.* $folder;
}
...
server {
    ...
    if (!-f $site_folder) {
        rewrite ^/[^/]+/(.*) /$1;
    }
}

I am aware of the "if is evil" thing, but it states that it is safe outside of a "location" block. Supposedly safe in "server" context. Performance seems good so far.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251938#msg-251938

From nginx-forum at nginx.us Sat Jul 19 14:34:35 2014
From: nginx-forum at nginx.us (martyparish)
Date: Sat, 19 Jul 2014 10:34:35 -0400
Subject: [SOLVED] Re: Multiple sites under same domain with one app codebase
In-Reply-To: 
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org> <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org> <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org> <5dfc7d974a1fcda259b1030f71e929b9.NginxMailingListEnglish@forum.nginx.org> <8c67d326a83d0eebdedcab18c952ffbc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Actually, this part was wrong:

if (!-f $site_folder) {
    rewrite ^/[^/]+/(.*) /$1;
}

It needs to be:

if (!-d /etc/nginx/html/production/$site_folder) {
    rewrite ^/[^/]+/(.*) /$1;
}

* changed -f to -d
** had to add the root path before $site_folder

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251939#msg-251939

From me at myconan.net Sat Jul 19 14:40:36 2014
From: me at myconan.net (Edho Arief)
Date: Sat, 19 Jul 2014 23:40:36 +0900
Subject: [SOLVED] Re: Multiple sites under same domain with one app codebase
In-Reply-To: 
References: <20140718145140.GL16942@daoine.org> <1642b58d4cbc5f427d9afa2ebc3682be.NginxMailingListEnglish@forum.nginx.org> <7dc83de94ee31cd71d22e1dd92884756.NginxMailingListEnglish@forum.nginx.org> <9b931774a031c8ba654ae2d855276c2d.NginxMailingListEnglish@forum.nginx.org> <42d5f96c3f77cfab1d92b0c4b7fed372.NginxMailingListEnglish@forum.nginx.org> <5dfc7d974a1fcda259b1030f71e929b9.NginxMailingListEnglish@forum.nginx.org> <8c67d326a83d0eebdedcab18c952ffbc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Sat, Jul 19, 2014 at 11:34 PM, martyparish wrote:
>
> Actually, this part was wrong:
>
> if (!-f $site_folder) {
>     rewrite ^/[^/]+/(.*) /$1;
> }
>
> It needs to be:
>
> if (!-d /etc/nginx/html/production/$site_folder) {
>     rewrite ^/[^/]+/(.*) /$1;
> }
>
> * changed -f to -d
> ** had to add the root path before $site_folder

I wonder if this works:

server {
    ...
    location ~ ^/.+(/.+) {
        try_files $uri $1;
    }
}

From nginx-forum at nginx.us Sat Jul 19 14:51:18 2014
From: nginx-forum at nginx.us (martyparish)
Date: Sat, 19 Jul 2014 10:51:18 -0400
Subject: [SOLVED] Re: Multiple sites under same domain with one app codebase
In-Reply-To: 
References: 
Message-ID: <766b29b773aad0eee55c3d9ab645dd57.NginxMailingListEnglish@forum.nginx.org>

Unfortunately it did not. I was really hoping to do this with try_files instead of "if" and rewrite!

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251941#msg-251941

From me at myconan.net Sat Jul 19 14:55:25 2014
From: me at myconan.net (Edho Arief)
Date: Sat, 19 Jul 2014 23:55:25 +0900
Subject: [SOLVED] Re: Multiple sites under same domain with one app codebase
In-Reply-To: <766b29b773aad0eee55c3d9ab645dd57.NginxMailingListEnglish@forum.nginx.org>
References: <766b29b773aad0eee55c3d9ab645dd57.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Sat, Jul 19, 2014 at 11:51 PM, martyparish wrote:
> Unfortunately it did not. I was really hoping to do this with try_files instead of "if" and rewrite!

What error did you get?

From reallfqq-nginx at yahoo.fr Sat Jul 19 15:22:41 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 19 Jul 2014 17:22:41 +0200
Subject: Proxy_pass Directive !!
In-Reply-To: <4017e48ea87e491f0bfe35705869a443.NginxMailingListEnglish@forum.nginx.org>
References: <4017e48ea87e491f0bfe35705869a443.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Sat, Jul 19, 2014 at 4:27 PM, itpp2012 wrote:

> shahzaib1232 Wrote:
> -------------------------------------------------------
> > Meaning, both servers' I/O will be used if the file requested from upstream is 720p.mp4?
>
> Yes. What you're looking for is a way to client-rewrite the address the source is coming from.
>
> 1.2.3.4 -> request xx.mp4 -> edge 5.6.7.8 (I don't have that file) -> send client address of origin and tell client to re-initiate file request with origin address.
>
> How? don't ask me :)

Is there a legal problem doing that? Why the smiley?

An idea just popping out of my mind:
- To redirect the client to the origin server, a 302 answer will do, won't it?
- If you want to restrict access to the content server (and avoid clients directly requesting content there by storing its address without having been redirected there by edge, for example when using load-balancing), *before* redirecting the client, on the origin server, you might record requests incoming from edge servers (trusted/whitelisted sources), pass the client IP address with an X-Forwarded-For (name might not be exact) header field and allow a time-limited session for that client.

All that looks like a CDN :o)

---
*B. R.*

From nginx-forum at nginx.us Sat Jul 19 15:57:38 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sat, 19 Jul 2014 11:57:38 -0400
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: 
Message-ID: <3c718ac752a026c928b7defe82d431aa.NginxMailingListEnglish@forum.nginx.org>

B.R. Wrote:
-------------------------------------------------------
> > How? don't ask me :)
>
> Is there a legal problem doing that? Why the smiley?

No :) I'd do it simply by using HTTP-EQUIV="REFRESH" as a response with an origin address.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251933,251944#msg-251944

From reallfqq-nginx at yahoo.fr Sat Jul 19 17:26:09 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 19 Jul 2014 19:26:09 +0200
Subject: Proxy_pass Directive !!
In-Reply-To: <3c718ac752a026c928b7defe82d431aa.NginxMailingListEnglish@forum.nginx.org>
References: <3c718ac752a026c928b7defe82d431aa.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

On Sat, Jul 19, 2014 at 5:57 PM, itpp2012 wrote:

> No :) I'd do it simply by using HTTP-EQUIV="REFRESH" as a response with an origin address.

Redirecting that way *might* work, although it looks a bit ugly to my eyes. It also seems to be non-compliant with WCAG if you care about accessibility. Well, plain old HTML 4.01... Is it cross-compatible? I read stuff to the contrary regarding Firefox.

HTTP provides loads of ways to handle redirections in a standard fashion: 301, 302 or even 303, 307 if you know what you are doing.

Anyway, the person asking the question will pick his/her choice.

---
*B. R.*

From anoopalias01 at gmail.com Sat Jul 19 17:47:39 2014
From: anoopalias01 at gmail.com (Anoop Alias)
Date: Sat, 19 Jul 2014 23:17:39 +0530
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: <20140719095036.GM16942@daoine.org>
Message-ID: 

The proxying server does not download the entire file, save it to disk, and then serve from that.
The proxy simply buffers the content (which is configuration-manageable) and serves the end user (browser). So the proxy will not face a high disk I/O load like the origin.

--
*Anoop P Alias*
GNUSYS

From pchychi at gmail.com Sat Jul 19 19:15:49 2014
From: pchychi at gmail.com (Payam Chychi)
Date: Sat, 19 Jul 2014 12:15:49 -0700
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: <20140719095036.GM16942@daoine.org>
Message-ID: 

Use a redirect; keep it clean, simple, and compliant.

Why waste resources when you don't have to?

--
Payam Chychi
Network Engineer / Security Specialist

On Saturday, July 19, 2014 at 10:47 AM, Anoop Alias wrote:

> The proxying server does not download the entire file, save it to disk, and then serve from that.
>
> The proxy simply buffers the content (which is configuration-manageable) and serves the end user (browser). So the proxy will not face a high disk I/O load like the origin.
>
> --
> Anoop P Alias
> GNUSYS (http://gnusys.net)

From nginx-forum at nginx.us Sun Jul 20 09:16:34 2014
From: nginx-forum at nginx.us (taragano)
Date: Sun, 20 Jul 2014 05:16:34 -0400
Subject: Nginx can not upload image to cloudinary (hosted in AWS)
In-Reply-To: 
References: 
Message-ID: <63d80538f3094495532dd058857e5b9b.NginxMailingListEnglish@forum.nginx.org>

This error comes from PHP, not Nginx. Also, it seems that the referred code is communicating with S3 and not with Cloudinary.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251881,251949#msg-251949

From shahzaib.cb at gmail.com Sun Jul 20 10:59:00 2014
From: shahzaib.cb at gmail.com (shahzaib shahzaib)
Date: Sun, 20 Jul 2014 15:59:00 +0500
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: <20140719095036.GM16942@daoine.org>
Message-ID: 

>> 1.2.3.4 -> request xx.mp4 -> edge 5.6.7.8 (I don't have that file) -> send client address of origin and tell client to re-initiate file request with origin address.

@itpp, you're always the light of hope in darkness :-). That's the exact solution I need. A rewrite is not recommended in our solution, because we're using the view directive of BIND, where the same domain test.com will resolve to the edge as well as the origin server on the basis of the client's IP. So I cannot rewrite test.com back to test.com. You mentioned the solution HTTP-EQUIV="REFRESH". Is it fine to use this method; also, could you tell me how to use it with the origin IP in nginx, so the client will resend the request to origin instead of edge?

On Sun, Jul 20, 2014 at 12:15 AM, Payam Chychi wrote:

> Use a redirect; keep it clean, simple, and compliant.
>
> Why waste resources when you don't have to?
>
> --
> Payam Chychi
> Network Engineer / Security Specialist
>
> On Saturday, July 19, 2014 at 10:47 AM, Anoop Alias wrote:
>
> > The proxying server does not download the entire file, save it to disk, and then serve from that.
> >
> > The proxy simply buffers the content (which is configuration-manageable) and serves the end user (browser). So the proxy will not face a high disk I/O load like the origin.
> > --
> > *Anoop P Alias*
> > GNUSYS

From nginx-forum at nginx.us Sun Jul 20 18:51:47 2014
From: nginx-forum at nginx.us (martyparish)
Date: Sun, 20 Jul 2014 14:51:47 -0400
Subject: [SOLVED] Re: Multiple sites under same domain with one app codebase
In-Reply-To: 
References: 
Message-ID: <5d71f3623cf65ce14dc2ab63a3fc4c80.NginxMailingListEnglish@forum.nginx.org>

Sorry for the delay. It wasn't an error, but the web pages were out of whack. Some of the files were not found.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251908,251952#msg-251952

From nginx-forum at nginx.us Sun Jul 20 18:54:07 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sun, 20 Jul 2014 14:54:07 -0400
Subject: Proxy_pass Directive !!
In-Reply-To: 
References: 
Message-ID: <783d14ef75de71a5d8da78e45cadffeb.NginxMailingListEnglish@forum.nginx.org>

shahzaib1232 Wrote:
-------------------------------------------------------
> rewrite test.com back to test.com. You mentioned the solution HTTP-EQUIV="REFRESH". Is it fine to use this method; also, could you tell me how to use it with the origin IP in nginx, so the client will resend the request to origin instead of edge?

http://en.wikipedia.org/wiki/Meta_refresh

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251933,251953#msg-251953

From sandro.dentella at gmail.com Mon Jul 21 07:36:02 2014
From: sandro.dentella at gmail.com (Sandro Dentella)
Date: Mon, 21 Jul 2014 09:36:02 +0200
Subject: redirect /it to /
Message-ID: 

Hi,

I need to hide the /it query portion for SEO requests in a django-cms setup. I tried with:

location /it {
    if ($request_method = 'GET') {
        rewrite ^/it(/?.*) https://my.domain.it$1;
    }
}

location / {
    uwsgi_pass preprod;
    include /etc/nginx/proxy.conf;
    include /etc/nginx/uwsgi_params;
}

Everything works correctly BUT authentication (that is, a POST to /it/). My impression is that 'location /it' doesn't handle a POST and it doesn't fall back to the more general 'location /'. Is that correct? What is the standard way to achieve a redirect just in the case of a GET request?

Thanks in advance

sandro

From al-nginx at none.at Mon Jul 21 09:20:39 2014
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 21 Jul 2014 11:20:39 +0200
Subject: HEAD with mp4 runs in timeout
Message-ID: <64e88e30089defde269a12b9010b6743@none.at>

Hallo.

When I make a GET request I get the file as expected, but the HEAD request runs into a timeout.

curl -v 'http://vhost/cams/44/zoom.mp4?_=1405930346082' -X HEAD

I use the following version.

/home/nginx/server/sbin/nginx -V
nginx version: nginx/1.7.3
built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
TLS SNI support enabled
configure arguments: --prefix=/home/nginx/server --with-debug --without-http_uwsgi_module --without-http_scgi_module --without-http_empty_gif_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_ssl_module --user=nginx --group=www-data --with-file-aio --without-http_ssi_module --with-http_secure_link_module --with-http_sub_module --with-http_spdy_module --with-http_mp4_module

on

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 10.04.4 LTS
Release: 10.04
Codename: lucid

Error log:
https://gist.github.com/anonymous/5c855e52877afacc515d

The config part:

location ~ \.mp4$ {
    mp4;
}

Could it be that the file is not 'HEAD' conformant?

Thanks for the help.

BR
Aleks

From arut at nginx.com Mon Jul 21 09:53:06 2014
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 21 Jul 2014 13:53:06 +0400
Subject: HEAD with mp4 runs in timeout
In-Reply-To: <64e88e30089defde269a12b9010b6743@none.at>
References: <64e88e30089defde269a12b9010b6743@none.at>
Message-ID: 

On 21 Jul 2014, at 13:20, Aleksandar Lazic wrote:

> Hallo.
>
> When I make a GET request I get the file as expected, but the HEAD request runs into a timeout.
>
> curl -v 'http://vhost/cams/44/zoom.mp4?_=1405930346082' -X HEAD
>
> I use the following version.
>
> /home/nginx/server/sbin/nginx -V
> nginx version: nginx/1.7.3
> built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5.1)
> TLS SNI support enabled
> configure arguments: --prefix=/home/nginx/server --with-debug --without-http_uwsgi_module --without-http_scgi_module --without-http_empty_gif_module --with-http_stub_status_module --with-http_gzip_static_module --with-http_ssl_module --user=nginx --group=www-data --with-file-aio --without-http_ssi_module --with-http_secure_link_module --with-http_sub_module --with-http_spdy_module --with-http_mp4_module
>
> on
>
> lsb_release -a
> No LSB modules are available.
> Distributor ID: Ubuntu
> Description: Ubuntu 10.04.4 LTS
> Release: 10.04
> Codename: lucid
>
> Error log:
> https://gist.github.com/anonymous/5c855e52877afacc515d
>
> The config part:
>
> location ~ \.mp4$ {
>     mp4;
> }
>
> Could it be that the file is not 'HEAD' conformant?
>
> Thanks for the help.

man curl:

-X
..
This option only changes the actual word used in the HTTP request, it does not alter the way curl behaves. So for example if you want to make a proper HEAD request, using -X HEAD will not suffice. You need to use the -I, --head option.

From mdounin at mdounin.ru Mon Jul 21 11:58:25 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Jul 2014 15:58:25 +0400
Subject: nginx reload, stop error
In-Reply-To: 
References: 
Message-ID: <20140721115825.GO1849@mdounin.ru>

Hello!

On Fri, Jul 18, 2014 at 11:32:38PM +0100, Matthew Ngaha wrote:

> Hey, when I run './nginx -s reload' or './nginx -s stop' I get this:
>
> nginx: [error] open() "/usr/local/nginx-1.4.3/logs/nginx.pid" failed (2: No such file or directory)
>
> Any ideas why it's trying to open this file that doesn't exist?

There are multiple possible reasons - e.g., this can happen because you've changed the configuration file, or if you are trying to stop the wrong nginx.

In general, it's a good idea to use "kill" to control nginx on unix-like systems (not "nginx -s", which is mostly for Windows and requires a configuration file to work). Use of "kill" allows you to explicitly specify the PID of the master process (and/or to get one from a pid file of your choice).

See here for details on how to control nginx using signals:
http://nginx.org/en/docs/control.html

--
Maxim Dounin
http://nginx.org/

From al-nginx at none.at Mon Jul 21 12:01:43 2014
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 21 Jul 2014 14:01:43 +0200
Subject: HEAD with mp4 runs in timeout
In-Reply-To: 
References: <64e88e30089defde269a12b9010b6743@none.at>
Message-ID: <8df27444a5ce9b2a9a3550e1e13405a3@none.at>

Hi.

On 21-07-2014 11:53, Roman Arutyunyan wrote:

> On 21 Jul 2014, at 13:20, Aleksandar Lazic wrote:

[snipp]

> man curl:
>
> -X
> ..
> This option only changes the actual word used in the HTTP request, it does not alter the way curl behaves. So for example if you want to make a proper HEAD request, using -X HEAD will not suffice. You need to use the -I, --head option.

OK, thanks. What a shame; sorry for the rush.

cheers
aleks

From nginx-forum at nginx.us Mon Jul 21 15:15:00 2014
From: nginx-forum at nginx.us (gthb)
Date: Mon, 21 Jul 2014 11:15:00 -0400
Subject: Memory use flares up sharply, how to troubleshoot?
Message-ID: <9587532b760294f8e5c4f2be13ffebf8.NginxMailingListEnglish@forum.nginx.org>

Hi,

Several times recently, we have seen our production nginx memory usage flare up a hundred-fold, from its normal ~42 MB to 3-4 GB, for 20 minutes to an hour or so, and then recover. There is not a spike in the number of connections, just memory use, so whatever causes this, it does not seem to be an increase in concurrency.

The obvious thing to suspect for this is our app's newest change, which involves streaming responses proxied from an upstream (via uwsgi_pass); these responses can get very large and run for many minutes, pumping hundreds of megabytes each. But I expect nginx memory use for these requests to be bounded by uwsgi_buffers (shouldn't it be?) -- and indeed I cannot reproduce the problem by making such requests, even copying the exact ones that are being made when the memory spike occurs. In my tests, the responses get buffered as they should be, and delivered normally, without memory growth.

So, what is a good way to investigate what causes all this memory to be suddenly allocated? Is there a way of introspecting/logging nginx memory allocation edge cases like this? (Is there documentation on this which I didn't find?)

During the memory spike (and before and after), connections are around 270, of which around 20-25 are writing and the rest are waiting.

Our nginx config has:

worker_processes 2;
events {
    worker_connections 1024;
}
http {
    uwsgi_buffers 64 8k;
    proxy_buffers 64 8k;
}

All other *_buffers settings are at their defaults, on a 64-bit machine.

Accounting for the above buffers, other buffer defaults, and some uwsgi_cache and proxy_cache zones, and estimating on the high side (e.g. all buffer budgets fully used for each active connection) for 270 connections, here is my rough sketch of the memory uses I am aware of, probably with some misunderstandings along the way:

22 * keys_zone=10m for uwsgi_cache_path + proxy_cache_path = 220 MB
uwsgi_buffers: 64 * 8kB * 270 = 135 MB
proxy_buffers: 64 * 8kB * 270 = 135 MB
gzip_buffers: 16 * 8kB * 270 = 34 MB
output_buffers: 1 * 32kB * 270 = 8.6 MB
large_client_header_buffers: 4 * 8kB * 270 = 8.6 MB
ssl_buffer_size: 1 * 16k * 270 = 4.3 MB
client_body_buffer_size: 1 * 16kB * 270 = 4.3 MB
client_header_buffer_size: 1 * 1kB * 270 = 0.3 MB
access_log buffers: 24 * 8kB + 1 * 32k = 0.2 MB

This comes out to 546 MB tops ... what big memory use cases are missing in this picture? (And are these buffer size upper-bound estimates significantly incorrect?)

This is in Nginx 1.6.0 on Linux 3.2.57 64-bit ... specifically:
specifically: $ nginx -V nginx version: nginx/1.6.0 built by gcc 4.7.2 (Debian 4.7.2-5) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-http_spdy_module --with-cc-opt='-g -O2 -fstack-protector --param=ssp-buffer-size=4 -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2' --with-ld-opt='-Wl,-z,relro -Wl,--as-needed' --with-ipv6 $ uname -a Linux ren2 3.2.0-4-amd64 #1 SMP Debian 3.2.57-3+deb7u2 x86_64 GNU/Linux Thanks, best regards, Gulli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251964,251964#msg-251964 From mdounin at mdounin.ru Mon Jul 21 16:38:25 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 21 Jul 2014 20:38:25 +0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: <9587532b760294f8e5c4f2be13ffebf8.NginxMailingListEnglish@forum.nginx.org> References: <9587532b760294f8e5c4f2be13ffebf8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140721163825.GR1849@mdounin.ru> Hello! On Mon, Jul 21, 2014 at 11:15:00AM -0400, gthb wrote: > Several times recently, we have seen our production nginx memory usage flare > up a hundred-fold, from its normal ~42 MB to 3-4 GB, for 20 minutes to an > hour or so, and then recover. There is not a spike in number of connections, > just memory use, so whatever causes this, it does not seem to be an increase > in concurrency. > > The obvious thing to suspect for this is our app's newest change, which > involves streaming responses proxied from an upstream (via uwsgi_pass); > these responses can get very large and run for many minutes, pumping > hundreds of megabytes each. But I expect nginx memory use for these requests > to be bounded by uwsgi_buffers (shouldn't it be?) -- and indeed I cannot > reproduce the problem by making such requests, even copying the exact ones > that are being made when the memory spike occurs. In my tests, the responses > get buffered as they should be, and delivered normally, without memory > growth. How do you track "nginx memory"? >From what you describe I suspect that disk buffering occurs (see http://nginx.org/r/uwsgi_max_temp_file_size), and the number you are looking at includes the size of files on disk. > So, what is a good way to investigate what causes all this memory to be > suddenly allocated? Is there a way of introspecting/logging nginx memory > allocation edge cases like this? (Is there documentation on this which I > didn't find?) The debuging log includes full information about all memory allocations, see http://nginx.org/en/docs/debugging_log.html. 
--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us Mon Jul 21 18:18:08 2014
From: nginx-forum at nginx.us (gthb)
Date: Mon, 21 Jul 2014 14:18:08 -0400
Subject: Memory use flares up sharply, how to troubleshoot?
In-Reply-To: <20140721163825.GR1849@mdounin.ru>
References: <20140721163825.GR1849@mdounin.ru>
Message-ID: 

> How do you track "nginx memory"?

What I was tracking was memory use per process name as reported by New Relic nrsysmond, which I'm pretty sure is RSS from ps output, summed over all nginx processes.

> From what you describe I suspect that disk buffering occurs (see http://nginx.org/r/uwsgi_max_temp_file_size), and the number you are looking at includes the size of files on disk.

I wish : ) because that's what I want to happen for these large responses. But that's definitely not it, because we see a spike of swap when this occurs, with most other processes on the machine being paged out ... and in the worst spikes swap has filled up and an OOM kill has occurred, which conveniently records in syslog the RSS for an nginx process being killed:

Jul 21 03:54:16 ren2 kernel: [3929562.712779] uwsgi invoked oom-killer: gfp_mask=0x201da, order=0, oom_adj=0, oom_score_adj=0
...
Jul 21 03:54:16 ren2 kernel: [3929562.737340] Out of memory: Kill process 5248 (nginx) score 328 or sacrifice child
Jul 21 03:54:16 ren2 kernel: [3929562.737352] Killed process 5248 (nginx) total-vm:3662860kB, anon-rss:3383776kB, file-rss:16kB

So that catches Nginx holding a 3.2GB resident set, matching what New Relic says about the same time.

> The debugging log includes full information about all memory allocations, see http://nginx.org/en/docs/debugging_log.html.

Thank you. I haven't been able to reproduce this outside of production (or even in production) so I might have to leave debug logging enabled in production and hope to catch this next time it happens. Am I right to assume that enabling debug is going to weigh quite heavily on production usage, and eat up disk very fast? (Traffic peaks at around 100 req/sec and 2 MB/sec.)

Cheers,

Gulli

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251964,251967#msg-251967

From mdounin at mdounin.ru Mon Jul 21 19:32:28 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Mon, 21 Jul 2014 23:32:28 +0400
Subject: Memory use flares up sharply, how to troubleshoot?
In-Reply-To: 
References: <20140721163825.GR1849@mdounin.ru>
Message-ID: <20140721193228.GS1849@mdounin.ru>

Hello!

On Mon, Jul 21, 2014 at 02:18:08PM -0400, gthb wrote:

[...]

> > The debugging log includes full information about all memory allocations, see http://nginx.org/en/docs/debugging_log.html.
>
> Thank you. I haven't been able to reproduce this outside of production (or even in production) so I might have to leave debug logging enabled in production and hope to catch this next time it happens. Am I right to assume that enabling debug is going to weigh quite heavily on production usage, and eat up disk very fast? (Traffic peaks at around 100 req/sec and 2 MB/sec.)

While debug logging is costly enough to notice, 100 r/s is quite low and it should be fine to just enable the logging. If in doubt, you may start with enabling debug only for a fraction of users using the debug_connection directive, see http://nginx.org/r/debug_connection.
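A minimal sketch, with a made-up client network (the directive goes in the events block, and also requires a --with-debug build):

events {
    worker_connections 1024;
    debug_connection 192.0.2.0/24;   # debug-level logging only for these clients
}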
-- Maxim Dounin http://nginx.org/ From GregWojtak at quickenloans.com Mon Jul 21 21:10:08 2014 From: GregWojtak at quickenloans.com (Wojtak, Greg) Date: Mon, 21 Jul 2014 21:10:08 +0000 Subject: 403 Forbidden + Permission Denied - Temporarily? Message-ID: I'm having a strange issue running nginx 1.4.5 and I'm hoping someone might have some ideas. We have a Symfony based app that we use Capistrano to deploy. The deploy keeps a few versions of the app around and rotates them off, so we'll have two or three sitting out there. The active version is made active by using a symlink for the document root, so we have something like: /var/www/ourapp/20140721 /var/www/ourapp/20140720 /var/www/ourapp/20140717 /var/www/ourapp/current -> /var/www/ourapp/20140721 With "current" being the document root specified in the server block. During the push, that symlink is removed and set to the new version. Also as part of that push, a php file is generated to clear the APC cache for php-fpm, and then called via curl from our build/deploy server. For a period of about 2 - 5 minutes, any calls to this APC cache clear script or to another db health check script we have fail with a 403 and the php logs have Permission denied errors. I've checked the permissions on the file and on all of the directories leading up to it and they are fine. After the 2 - 5 minutes, it begins working consistently. We found an article on stack exchange that sounded very similar to our issue that involved turning off sendfile, however this did not appear to affect the behavior at all. We do not have any caching turned on. Does anyone have any idea what could be going on here? Thanks! Greg From nginx-forum at nginx.us Mon Jul 21 21:44:45 2014 From: nginx-forum at nginx.us (gthb) Date: Mon, 21 Jul 2014 17:44:45 -0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: <20140721193228.GS1849@mdounin.ru> References: <20140721193228.GS1849@mdounin.ru> Message-ID: <1a83ad31bb771dd05bafb8e227386328.NginxMailingListEnglish@forum.nginx.org> Hi, I finally reproduced this, with debug logging enabled --- I found the problematic request in the error log preceding the kill signal, saying it was being buffered to a temporary file: 2014/07/21 11:39:39 [warn] 21182#0: *32332838 an upstream response is buffered to a temporary file /var/cache/nginx/uwsgi_temp/9/90/0000186909 while reading upstream, client: x.x.x.x, server: foo.com, request: "GET /api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1 HTTP/1.1", upstream: "uwsgi://123.45.67.89:3003", host: "foo.com" 2014/07/21 11:41:18 [alert] 16885#0: worker process 21182 exited on signal 9 and retrying that request reproduces the problem, nginx growing in size without bound. (The request never made it to the access log because of the OOM kill, which is why my previous testing didn't reproduce it) So here are debug log segments for single epoll events, during a healthy streaming request (one that doesn't trigger this problem), and during the problematic request. These requests *ought* to behave just the same (I'm not aware of any behavior difference in our upstream app, invoked via uwsgi_pass, except that the CSV lines are a little longer in the problematic response).
Healthy request: epoll: fd:43 ev:0005 d:00000000023FF960 *1 http upstream request: "/api/well-behaved/request.csv?el=fnord!a=1.2:b=3.4.5:c&dates_as_dates=1" *1 http upstream process upstream *1 pipe read upstream: 1 *1 readv: 3:4096 *1 pipe recv chain: 44 *1 readv: 3:4096 *1 readv() not ready (11: Resource temporarily unavailable) *1 pipe recv chain: -2 *1 pipe buf free s:0 t:1 f:0 00000000020EED30, pos 00000000020EED30, size: 2267 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 000000000220B1C0, pos 000000000220B1C0, size: 0 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 0000000001FE5900, pos 0000000001FE5900, size: 0 file: 0, size: 0 *1 pipe length: -1 *1 pipe write downstream: 1 *1 pipe write busy: 0 *1 pipe write: out:0000000000000000, f:0 *1 pipe read upstream: 0 *1 pipe buf free s:0 t:1 f:0 00000000020EED30, pos 00000000020EED30, size: 2267 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 000000000220B1C0, pos 000000000220B1C0, size: 0 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 0000000001FE5900, pos 0000000001FE5900, size: 0 file: 0, size: 0 *1 pipe length: -1 *1 event timer: 43, old: 1405973335524, new: 1405973335560 *1 http upstream request: "/api/well-behaved/request.csv?el=fnord!a=1.2:b=3.4.5:c&dates_as_dates=1" *1 http upstream dummy handler timer delta: 0 posted events 0000000000000000 worker cycle accept mutex lock failed: 0 epoll timer: 500 For the problematic request, the epoll events *often* look identical (all the same kinds of lines, plus sometimes an extra pipe recv chain and readv line pair, presumably just because of data being streamed in slightly bigger chunks) ... but sometimes they have some extra lines, which I've highlighted with an XXX prefix here: epoll: fd:42 ev:0005 d:000000000150DE70 *1 http upstream request: "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" *1 http upstream process upstream *1 pipe read upstream: 1 *1 readv: 3:4096 *1 pipe recv chain: 519 XXX *1 input buf #135 *1 readv: 2:4096 *1 readv() not ready (11: Resource temporarily unavailable) *1 pipe recv chain: -2 XXX *1 pipe buf in s:1 t:1 f:0 0000000001417550, pos 0000000001417550, size: 8192 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 0000000001447610, pos 0000000001447610, size: 326 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 00000000014305C0, pos 00000000014305C0, size: 0 file: 0, size: 0 *1 pipe length: -1 *1 pipe write downstream: 1 *1 pipe write busy: 0 XXX *1 pipe write buf ls:1 0000000001417550 8192 XXX *1 pipe write: out:0000000001431878, f:0 XXX *1 http output filter "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" XXX *1 http copy filter: "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" XXX *1 http postpone filter "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" 0000000001431878 XXX *1 http chunk: 8192 XXX *1 write new buf t:1 f:0 0000000001431888, pos 0000000001431888, size: 6 file: 0, size: 0 XXX *1 write new buf t:1 f:0 0000000001417550, pos 0000000001417550, size: 8192 file: 0, size: 0 XXX *1 write new buf t:0 f:0 00000000014317E0, pos 00000000004ADFFD, size: 2 file: 0, size: 0 XXX *1 http write filter: l:0 f:1 s:8200 XXX *1 http write filter limit 0 XXX *1 writev: 8200 XXX *1 http write filter 0000000000000000 XXX *1 http copy filter: 0 "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" XXX *1 pipe write busy: 0 *1 pipe write: out:0000000000000000, f:0 *1 pipe read upstream: 0 *1 pipe buf free s:0 t:1 f:0 0000000001447610, pos 0000000001447610, size: 326 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 0000000001417550, pos 
0000000001417550, size: 0 file: 0, size: 0 *1 pipe buf free s:0 t:1 f:0 00000000014305C0, pos 00000000014305C0, size: 0 file: 0, size: 0 *1 pipe length: -1 *1 event timer: 42, old: 1405971008168, new: 1405971008452 *1 http upstream request: "/api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1" *1 http upstream dummy handler timer delta: 1 posted events 0000000000000000 worker cycle accept mutex lock failed: 0 epoll timer: 500 These extra lines *never* appear in the healthy requests, so I imagine they point to the problem (but I am not at all familiar with Nginx debug output); in particular those "write new buf" lines look relevant; they are output right after ngx_alloc_chain_link is called. All the possibly relevant Nginx config: http { include /etc/nginx/mime.types; default_type application/octet-stream; log_format main '...'; access_log /var/log/nginx/access.log main buffer=32k; sendfile on; keepalive_timeout 65; ssl_session_cache shared:SSL:10m; ssl_session_timeout 10m; gzip on; gzip_min_length 1280; gzip_types text/css application/json application/javascript text/csv text/xml text/turtle; log_format combined_upstream '...'; log_format internal_proxy '...'; uwsgi_hide_header X-...; uwsgi_buffers 64 8k; proxy_buffers 64 8k; include /etc/nginx/sites-enabled/*; } upstream fooapp.foo.com { server 123.45.67.89:3003 down; server 123.45.67.88:3003 down; server 123.45.67.87:3003; server 123.45.67.86:3003; least_conn; } server { listen 80; server_name foo.com; listen 443 ssl; ssl_certificate wildcard.foo.com.crt; ssl_certificate_key wildcard.foo.com.crt.key; ssl_protocols SSLv3 TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; ssl_prefer_server_ciphers on; set_real_ip_from 123.45.67.89/27; access_log /var/log/nginx/foo.com.access.log combined_upstream buffer=8k flush=2s; client_max_body_size 32m; ... location /api/ { include uwsgi_params; uwsgi_pass fooapp.foo.com; uwsgi_read_timeout 180; } } There are no other occurrences of _buffer anywhere in the config: $ sudo grep -r _buffer /etc/nginx/ /etc/nginx/nginx.conf: uwsgi_buffers 64 8k; /etc/nginx/nginx.conf: proxy_buffers 64 8k; Beginning of output of well-behaved request: HTTP/1.1 200 OK Server: nginx/1.6.0 Date: Mon, 21 Jul 2014 20:48:50 GMT Content-Type: text/csv; charset=iso-8859-1 Transfer-Encoding: chunked Connection: keep-alive Content-Disposition: attachment; filename=Foo-export.csv Content-Language: en Sex,Age group,ZIP code,Year-begin-date,Value Female,Under 5 years,00601,2012-01-01,572 Female,Under 5 years,00602,2012-01-01,1132 Female,Under 5 years,00603,2012-01-01,1589 Female,Under 5 years,00606,2012-01-01,189 Female,Under 5 years,00610,2012-01-01,784 ... Beginning of output of problematic request: HTTP/1.1 200 OK Server: nginx/1.6.0 Date: Mon, 21 Jul 2014 20:49:07 GMT Content-Type: text/csv; charset=iso-8859-1 Transfer-Encoding: chunked Connection: keep-alive Content-Disposition: attachment; filename=Foo-export.csv Content-Language: en Location,Measurement,Month-begin-date,Value SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-01-01,30.6 SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-02-01,35.5 SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-03-01,41.4 SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-04-01,41.6 SHARJAH INTER. AIRP (AE000041196),Temperature (max),2010-05-01,44.9 ... Does this narrow down the problem? Can I provide anything further?
Cheers, Gulli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251964,251971#msg-251971 From miaohonghit at gmail.com Tue Jul 22 03:12:28 2014 From: miaohonghit at gmail.com (Harold.Miao) Date: Tue, 22 Jul 2014 11:12:28 +0800 Subject: nginx -s reload problem In-Reply-To: References: Message-ID: Do you have any good suggestions? Thanks in advance! Roman Arutyunyan wrote on Mon, 21 Jul 2014: > Reload will not work properly with rtmp since rtmp connections are long > unlike http. > > > On Mon, Jun 16, 2014 at 11:19 AM, Harold.Miao > wrote: > >> hi all >> >> I use an endless rtmp stream >> >> /usr/local/nginx/wsgi/ffmpeg -i haha.mp4 -c:v libx264 -b:v 500k -c:a >> copy -f flv rtmp://172.16.205.50:1936/publish/you >> >> as you know, if I use nginx -s reload, then I get a lot of "nginx: >> worker process is shutting down" >> >> pplive 15355 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >> pplive 15356 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >> pplive 15357 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >> pplive 15358 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >> pplive 15359 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >> pplive 15360 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >> >> ... >> >> because the connections will not quit, the "nginx: worker process is >> shutting down" processes pile up more and more >> >> so how can I avoid this state when using "nginx -s reload"? >> >> >> -- >> >> Best Regards, >> Harold Miao >> >> -- >> You received this message because you are subscribed to the Google Groups >> "nginx-rtmp" group. >> To unsubscribe from this group and stop receiving emails from it, send an >> email to nginx-rtmp+unsubscribe at googlegroups.com >> >> . >> Visit this group at http://groups.google.com/group/nginx-rtmp. >> For more options, visit https://groups.google.com/d/optout. >> > > > > -- > -- > Roman Arutyunyan > -- Best Regards, Harold Miao -------------- next part -------------- An HTML attachment was scrubbed... URL: From miaohonghit at gmail.com Tue Jul 22 03:24:48 2014 From: miaohonghit at gmail.com (Harold.Miao) Date: Tue, 22 Jul 2014 11:24:48 +0800 Subject: Nginx + boringSSL In-Reply-To: References: Message-ID: Looks interesting :) Alex Hunsaker wrote on Mon, 14 Jul 2014: > I've started playing around with boringssl with nginx. > > Mostly everything works except OCSP. Seems like either openssl 1.0.2 > which boringssl was forked from does not have it, or the boringssl > folk ripped it out. I have not investigated. > > Anyway, I'm pleased to report everything seems to work! > > -- > # first boringssl > git clone https://boringssl.googlesource.com/boringssl > cd boringssl > # for when building on openbsd, also enables -O2, boringssl is a debug > build by default > cat boringssl_openbsd.patch | patch -p1 -N -s > mkdir build && cd build && cmake ../ && cd .. > # setup stuff for nginx > mkdir -p .openssl/lib > ln -s include .openssl/ > cp build/crypto/libcrypto.a build/ssl/libssl.a .openssl/lib > > # now for nginx > tar xvzf nginx-1.6.0.tar.gz > cd nginx-1.6.0 > cat ../boringssl_nginx.patch | patch -p1 -N -s > ./configure --with-openssl=../boringssl ... > # update timestamp so nginx won't try to build openssl > touch ../boringssl/.openssl/include/ssl.h > make -- Best Regards, Harold Miao -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at nginx.us Tue Jul 22 08:12:06 2014 From: nginx-forum at nginx.us (sardes) Date: Tue, 22 Jul 2014 04:12:06 -0400 Subject: soran Message-ID: <5bd6fa5c5661295492cd5c702d5bcf91.NginxMailingListEnglish@forum.nginx.org> thank you http://www.soran.edu.iq Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251974,251974#msg-251974 From mdounin at mdounin.ru Tue Jul 22 11:53:11 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Jul 2014 15:53:11 +0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: <1a83ad31bb771dd05bafb8e227386328.NginxMailingListEnglish@forum.nginx.org> References: <20140721193228.GS1849@mdounin.ru> <1a83ad31bb771dd05bafb8e227386328.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140722115311.GT1849@mdounin.ru> Hello! On Mon, Jul 21, 2014 at 05:44:45PM -0400, gthb wrote: > Hi, > > I finally reproduced this, with debug logging enabled --- I found the > problematic request in the error log preceding the kill signal, saying it > was being buffered to a temporary file: > > 2014/07/21 11:39:39 [warn] 21182#0: *32332838 an upstream response is > buffered to a temporary file /var/cache/nginx/uwsgi_temp/9/90/0000186909 > while reading upstream, client: x.x.x.x, server: foo.com, request: "GET > /api/nasty/troublemaker.csv?el=xyzzy!a:b&dates_as_dates=1 HTTP/1.1", > upstream: "uwsgi://123.45.67.89:3003", host: "foo.com" > 2014/07/21 11:41:18 [alert] 16885#0: worker process 21182 exited on > signal 9 > > and retrying that request reproduces the problem, nginx growing in size > without bound. (The request never made it to the access log because of the > OOM kill, which is why my previous testing didn't reproduce it) [...] > These extra lines *never* appear in the healthy requests, so I imagine they > point to the problem (but I am not at all familiar with Nginx debug output); > in particular those "write new buf" lines look relevant; they are output > right after ngx_alloc_chain_link is called. The lines in question are just sending the response to the client via the response body filter chain. > All the possibly relevant Nginx config: I don't see anything obviously wrong in the config, but again - I strongly recommend that you post the _full_ configuration. Or, better yet, if you are able to reproduce the problem with a reduced config, post it instead (and corresponding debug log). [...] > Does this narrow down the problem? Can I provide anything further? No. Please show the debug log. -- Maxim Dounin http://nginx.org/ From reallfqq-nginx at yahoo.fr Tue Jul 22 14:11:21 2014 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Tue, 22 Jul 2014 16:11:21 +0200 Subject: nginx -s reload problem In-Reply-To: References: Message-ID: That worker state indicates that the old workers are still busy. Signaling nginx for a reload (through the 'reload' service command or by sending HUP to the master; the former simply does the latter) *gracefully* shuts old workers down, which means letting them finish what they are doing without accepting new tasks. You have resources to learn how to control nginx. So: your workers are busy. If you look at your network table, you will most probably find out that those processes are attached to existing connections. You may want to dig into why those workers are busy and the reason for the connections being held open (file transfer? connection? timeout not expired? request being processed?) --- *B. R.* On Tue, Jul 22, 2014 at 5:12 AM, Harold.Miao wrote: > Do you have any good suggestions? > > Thanks in advance!
> > Roman Arutyunyan wrote on Mon, 21 Jul 2014: > >> Reload will not work properly with rtmp since rtmp connections are long >> unlike http. >> >> >> On Mon, Jun 16, 2014 at 11:19 AM, Harold.Miao >> wrote: >> >>> hi all >>> >>> I use an endless rtmp stream >>> >>> /usr/local/nginx/wsgi/ffmpeg -i haha.mp4 -c:v libx264 -b:v 500k -c:a >>> copy -f flv rtmp://172.16.205.50:1936/publish/you >>> >>> as you know, if I use nginx -s reload, then I get a lot of "nginx: >>> worker process is shutting down" >>> >>> pplive 15355 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >>> pplive 15356 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >>> pplive 15357 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >>> pplive 15358 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >>> pplive 15359 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >>> pplive 15360 13642 0 14:56 ? 00:00:00 nginx: worker process is shutting down >>> >>> ... >>> >>> because the connections will not quit, the "nginx: worker process is >>> shutting down" processes pile up more and more >>> >>> so how can I avoid this state when using "nginx -s reload"? >>> >>> >>> -- >>> >>> Best Regards, >>> Harold Miao >>> >>> -- >>> You received this message because you are subscribed to the Google >>> Groups "nginx-rtmp" group. >>> To unsubscribe from this group and stop receiving emails from it, send >>> an email to nginx-rtmp+unsubscribe at googlegroups.com. >>> Visit this group at http://groups.google.com/group/nginx-rtmp. >>> For more options, visit https://groups.google.com/d/optout. >>> >> >> >> >> -- >> -- >> Roman Arutyunyan >> > > > -- > > Best Regards, > Harold Miao > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at nginx.us Tue Jul 22 14:51:43 2014 From: nginx-forum at nginx.us (gthb) Date: Tue, 22 Jul 2014 10:51:43 -0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: <9587532b760294f8e5c4f2be13ffebf8.NginxMailingListEnglish@forum.nginx.org> References: <9587532b760294f8e5c4f2be13ffebf8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5b2c139771a28e89f163a264ad47b98d.NginxMailingListEnglish@forum.nginx.org> Hi, here's a minimal configuration where I can reproduce this: error_log debug.log debug; events { worker_connections 1024; } http { uwsgi_buffers 64 8k; upstream nginx-test.uwsgi { server 10.0.0.7:13003; least_conn; } server { listen 8080; server_name nginx-test.com; location /api/ { include uwsgi_params; uwsgi_pass nginx-test.uwsgi; } } } Here's a debug log covering server start, a single request that exhibits the problem, and server shutdown: http://filebin.ca/1UClE4zzhfZe/debug.log.gz Everything goes OK for a while, just a few stray mallocs, and then maybe half a minute into the request (the time varies), after maybe 20-25MB have been transferred, the flood of mallocs starts: $ grep malloc: debug.log | cut -d ' ' -f 2 | uniq -c 4 14:34:51 1 14:34:52 3 14:34:56 1 14:34:59 1 14:35:03 2 14:35:12 1216 14:35:27 1135 14:35:28 2144 14:35:29 1996 14:35:30 520 14:35:31 (That last second of mallocs is only smaller because I stopped the client, so the request was aborted) I hope that debug log turns up something informative --- and thank you again for your time on this.
Cheers, Gulli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251964,251988#msg-251988 From mdounin at mdounin.ru Tue Jul 22 16:02:56 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Jul 2014 20:02:56 +0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: <5b2c139771a28e89f163a264ad47b98d.NginxMailingListEnglish@forum.nginx.org> References: <9587532b760294f8e5c4f2be13ffebf8.NginxMailingListEnglish@forum.nginx.org> <5b2c139771a28e89f163a264ad47b98d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140722160255.GY1849@mdounin.ru> Hello! On Tue, Jul 22, 2014 at 10:51:43AM -0400, gthb wrote: > Hi, > > here's a minimal configuration where I can reproduce this: > > error_log debug.log debug; > > events { > worker_connections 1024; > } > > http { > uwsgi_buffers 64 8k; > > upstream nginx-test.uwsgi { > server 10.0.0.7:13003; > least_conn; > } > > server { > listen 8080; > server_name nginx-test.com; > > location /api/ { > include uwsgi_params; > uwsgi_pass nginx-test.uwsgi; > } > } > } > > Here's a debug log covering server start, a single request that exhibits the > problem, and server shutdown: > > http://filebin.ca/1UClE4zzhfZe/debug.log.gz > > Everything goes OK for a while, just a few stray mallocs, and then maybe > half a minute into the request (the time varies), after maybe 20-25MB have > been transferred, the flood of mallocs starts: Ok, I see what goes on here. It is a combination of multiple factors: - there are more than 16 buffers, hence the stack-based buffer for iovs isn't big enough and nginx has to allocate memory in ngx_readv_chain(); - your backend app returns data in very small chunks, thus there are many ngx_readv_chain() calls; - the response is big, and hence the effect of the above is noticeable. Trivial workaround is to use "uwsgi_buffers 8 64k" instead. Or you may try the following patch: # HG changeset patch # User Maxim Dounin # Date 1406044801 -14400 # Tue Jul 22 20:00:01 2014 +0400 # Node ID 129a91bfb0565ab21a0f399688be148fe5e76a1e # Parent 0896d5cb6b3d9ba7d229863ac65cd1559b2c439a Avoid memory allocations in ngx_readv_chain(). diff --git a/src/os/unix/ngx_readv_chain.c b/src/os/unix/ngx_readv_chain.c --- a/src/os/unix/ngx_readv_chain.c +++ b/src/os/unix/ngx_readv_chain.c @@ -10,7 +10,11 @@ #include -#define NGX_IOVS 16 +#if (IOV_MAX > 64) +#define NGX_IOVS 64 +#else +#define NGX_IOVS IOV_MAX +#endif #if (NGX_HAVE_KQUEUE) @@ -71,7 +75,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx iov->iov_len += chain->buf->end - chain->buf->last; } else { - if (vec.nelts >= IOV_MAX) { + if (vec.nelts >= NGX_IOVS) { break; } @@ -200,7 +204,7 @@ ngx_readv_chain(ngx_connection_t *c, ngx iov->iov_len += chain->buf->end - chain->buf->last; } else { - if (vec.nelts >= IOV_MAX) { + if (vec.nelts >= NGX_IOVS) { break; } -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Tue Jul 22 17:07:58 2014 From: nginx-forum at nginx.us (gthb) Date: Tue, 22 Jul 2014 13:07:58 -0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: <20140722160255.GY1849@mdounin.ru> References: <20140722160255.GY1849@mdounin.ru> Message-ID: Hi, > Trivial workaround is to use "uwsgi_buffers 8 64k" instead. > Or you may try the following patch: Thank you! I tried the uwsgi_buffers workaround in production, and the patch in my reproduction setup, and indeed both seem to fix this problem; the request runs to completion with no memory growth.
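In configuration terms the workaround amounts to this one change -- the same 512k of buffer space in total, but only 8 buffers, so the iovec count stays within the stack-allocated array:

    uwsgi_buffers 8 64k;    # instead of "uwsgi_buffers 64 8k"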
> - your backend app returns data in very small chunks, thus there > are many ngx_readv_chain() calls; That's a likely cause of high CPU usage in Nginx, right? It goes to 20% for this one request (without debug), the Python app taking the rest. My intuition was that joining chunks on the Python side would be much more expensive ... but those thousands of ngx_readv_chain() calls per second are quite costly too, I take it? Cheers, Gulli Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251964,251993#msg-251993 From mdounin at mdounin.ru Tue Jul 22 19:36:24 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Jul 2014 23:36:24 +0400 Subject: Memory use flares up sharply, how to troubleshoot? In-Reply-To: References: <20140722160255.GY1849@mdounin.ru> Message-ID: <20140722193624.GA1849@mdounin.ru> Hello! On Tue, Jul 22, 2014 at 01:07:58PM -0400, gthb wrote: > > - your backend app returns data in very small chunks, thus there > > are many ngx_readv_chain() calls; > > That's a likely cause of high CPU usage in Nginx, right? It goes to 20% for > this one request (without debug), the Python app taking the rest. My > intuition was that joining chunks on the Python side would be much more > expensive ... but those thousands of ngx_readv_chain() calls per second are > quite costly too, I take it? Syscalls on the Python side, small packets over the network (even a local one), and syscalls on the nginx side are all costly compared to using a reasonably sized buffer on the Python side. -- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Jul 23 17:19:47 2014 From: nginx-forum at nginx.us (newnovice) Date: Wed, 23 Jul 2014 13:19:47 -0400 Subject: reverse ssl proxy - speed & jitter Message-ID: I am setting up an nginx reverse SSL proxy - I have a machine I can use with 2 E5-2650 CPUs and lots of RAM. I have nginx-1.6.0 + openssl-1.0.1h installed. I have taken into consideration most optimization suggestions out there and incorporated them. I will attach a copy of my config file. (Optimizing the first-connection experience is good.) In my testing, just the handshake + connection setup with a 2K cert takes 3.5 ms on average. I see spikes in this time every 40 or so handshakes. I would like the 90+ percentile of the handshakes to not have any jitter/variance. testing method: time for i in {1..1000}; do httperf --hog --server localhost --port 443 --ssl --uri /nginx_ping --ssl-no-reuse --num-calls 1 --num-conns 1 --rate 1 | egrep "Connection time \[ms\]\: |Reply time \[ms\]\: " | awk {'print $5'} | xargs | tr -s " " ", " >> test.log; done; If you think this methodology is not right, do let me know. I have looked at the tcpdumps and made sure a full handshake is happening and then a GET request is issued. This gives me: request_time, connect_time, response_time, where request_time = connect_time (ssl handshake + connection setup) + response_time. 1. I want to debug why there is jitter in the handshake time - I want the 90th, 95th, 99th, 99.9th percentiles to also be around 3.5 ms. 2. I want to see if I can make nginx any faster at the handshake; what is the fastest you think this can get? 3. How can I profile nginx and proceed to make this faster? All comments are welcome! Thanks!
Not sure how to attach the config. Config details: 5 workers, worker_priority -10, timer_resolution 200ms, worker_cpu_affinity to separate cores on cpu2, error_log to /dev/null, use epoll, worker_conns 2000, multi_accept on, accept_mutex off, sendfile on, tcp_nopush on, tcp_nodelay on, file caches, keepalive_timeout 5000, keepalive_requests 100000, reset_timedout_connection on, client_body_timeout 10, send_timeout 2, gzip, server_tokens off, postpone_output 0. upstream: keepalive 180, proxy_buffering off, client_body_buffer_size 512K, large_client_header_buffers 4 64k, client_max_body_size 0. server: listen 443 ssl, access_log off, ssl_buffer_size 8k, ssl_session_timeout 10m, ssl_protocols SSLv3 TLSv1, ssl_ciphers RC4-MD5, ssl_prefer_server_ciphers on, ssl_session_cache shared:SSL:10m. location /nginx_ping - return 200. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252002,252002#msg-252002 From nginx-forum at nginx.us Wed Jul 23 17:35:31 2014 From: nginx-forum at nginx.us (ericreiss) Date: Wed, 23 Jul 2014 13:35:31 -0400 Subject: Documentation / Beginner's Guide error Message-ID: <52ceee79774c8aa197a9f42082bb037d.NginxMailingListEnglish@forum.nginx.org> Just getting started with NGINX and reading the Beginner's Guide. The guide says to create two directories: /data/www /data/images Then goes on to tell you to set up two locations: location / { root /data/www; } and location /images/ { root /data; } Then says the results should look like this: server { location / { root /data/www; } location /images/ { root /data; } } Isn't the second location root path incorrect? It is missing the /images subfolder. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252003,252003#msg-252003 From nginx-forum at nginx.us Wed Jul 23 17:50:02 2014 From: nginx-forum at nginx.us (ericreiss) Date: Wed, 23 Jul 2014 13:50:02 -0400 Subject: Documentation / Beginner's Guide error In-Reply-To: <52ceee79774c8aa197a9f42082bb037d.NginxMailingListEnglish@forum.nginx.org> References: <52ceee79774c8aa197a9f42082bb037d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <38b327caa7f65dacdcf0b592ff87add6.NginxMailingListEnglish@forum.nginx.org> Never mind, I see what it is doing now. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252003,252004#msg-252004 From mdounin at mdounin.ru Wed Jul 23 17:52:27 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Jul 2014 21:52:27 +0400 Subject: Documentation / Beginner's Guide error In-Reply-To: <52ceee79774c8aa197a9f42082bb037d.NginxMailingListEnglish@forum.nginx.org> References: <52ceee79774c8aa197a9f42082bb037d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20140723175227.GK1849@mdounin.ru> Hello! On Wed, Jul 23, 2014 at 01:35:31PM -0400, ericreiss wrote: > Just getting started with NGINX and reading the Beginner's Guide. > > The guide says to create two directories: > /data/www > /data/images > > Then goes on to tell you to set up two locations: > location / { > root /data/www; > } > > and > > location /images/ { > root /data; > } > > Then says the results should look like this: > > server { > location / { > root /data/www; > } > > location /images/ { > root /data; > } > } > > Isn't the second location root path incorrect? It is missing the /images > subfolder. It is correct. The "root" directive specifies a path to be added to a request URI, so a request to "/images/foo" will map to the file "/data/images/foo". Hence "/data" is the correct value for the root if you want to return "/data/images/foo" in a response. See http://nginx.org/r/root for details.
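A short sketch of the mapping, using the guide's paths:

    location /images/ {
        root /data;
        # request "/images/top.gif" -> file "/data/images/top.gif"
        # (the full request URI, "/images/" included, is appended to root)
    }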
-- Maxim Dounin http://nginx.org/ From nginx-forum at nginx.us Wed Jul 23 18:00:36 2014 From: nginx-forum at nginx.us (newnovice) Date: Wed, 23 Jul 2014 14:00:36 -0400 Subject: reverse ssl proxy - speed & jitter In-Reply-To: References: Message-ID: Full Config: #user nobody; # This number should be, at maximum, the number of CPU cores on your system. # (since nginx doesn't benefit from more than one worker per CPU.) worker_processes 5; #give worker processes the priority (nice) you need/wish, it calls setpriority(). worker_priority -10; #decrease number gettimeofday() syscalls. By default gettimeofday() is called after each return from # kevent(), epoll, /dev/poll, select(), poll(). timer_resolution 200ms; #trying to set CPU affinity worker_cpu_affinity 10001 10010 10011 10100 10101; #error_log LOGFILE [debug_core | debug_alloc | debug_mutex | debug_event | debug_http | debug_imap], debug, crit, emerg; error_log /dev/null emerg; pid var/state/nginx.pid; # Number of file descriptors used for Nginx. This is set in the OS with 'ulimit -n 200000' or using /etc/security/limits.conf #worker_rlimit_nofile 60000; # workers_conns * 2 events { use epoll; worker_connections 20000; # Accept as many connections as possible, after nginx gets notification about a new connection. # May flood worker_connections, if that option is set too low. multi_accept on; accept_mutex off; } http { default_type application/octet-stream; log_format main '[$time_local] - [$time_iso8601] - [$request_time] - [$upstream_response_time] - $remote_addr ' #$proxy_add_x_forwarded_for_cip ' ' "$http_x_forwarded_for" - $remote_user - "$request" [$request_time] ' '$status - $request_length - $body_bytes_sent - "$http_referer" - ' '"$http_user_agent" - $uri - $request_method - ' '$ssl_protocol - $ssl_cipher'; sendfile on; # Tcp_nopush causes nginx to attempt to send its HTTP response head in one packet, # instead of using partial frames. This is useful for prepending headers before calling sendfile, # or for throughput optimization. tcp_nopush on; #the TCP_NODELAY option. The option is enabled only when a connection is transitioned into the keep-alive state. tcp_nodelay on; # Caches information about open FDs, freqently accessed files. # Changing this setting, in my environment, brought performance up from 560k req/sec, to 904k req/sec. # I recommend using some varient of these options, though not the specific values listed below. open_file_cache max=200000 inactive=20s; open_file_cache_valid 30s; open_file_cache_min_uses 2; open_file_cache_errors on; open_log_file_cache max=100000 inactive=2m valid=10m min_uses=2; keepalive_timeout 5000; # Number of requests which can be made over a keep-alive connection. # Review and change it to a more suitable value if required. keepalive_requests 100000; # allow the server to close the connection after a client stops responding. Frees up socket-associated memory. reset_timedout_connection on; # send the client a "request timed out" if the body is not loaded by this time. Default 60. client_body_timeout 10; # If the client stops reading data, free up the stale client connection after this much time. Default 60. send_timeout 2; # Compression. 
Reduces the amount of data that needs to be transferred over the network gzip on; gzip_min_length 10240; gzip_proxied expired no-cache no-store private auth; gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml; gzip_disable "MSIE [1-6]\."; client_body_temp_path var/state/nginx/client_body_temp 1 2; proxy_temp_path var/state/nginx/proxy_temp 1 2; fastcgi_temp_path var/state/nginx/fastcgi_temp 1 2; uwsgi_temp_path var/state/nginx/uwsgi_temp 1 2; scgi_temp_path var/state/nginx/scgi_temp_path 1 2; server_tokens off; postpone_output 0; upstream downstream_service { server 127.0.0.1:9999; keepalive 180; } # Turn off proxy buffering proxy_buffering off; proxy_buffer_size 128K; proxy_busy_buffers_size 128K; proxy_buffers 64 4K; client_body_buffer_size 512K; large_client_header_buffers 4 64k; limit_conn_zone $server_name zone=perserver1:32k; # Allow arbitrary size client posts client_max_body_size 0; # HTTPS Server config server { listen 443 ssl; # sndbuf=128k; server_name test-domain.com; # Buffer log writes to speed up IO, or disable them altogether access_log off; # turn off for better performance ssl_certificate /dev/shm/test-domain.com/cert; ssl_certificate_key /dev/shm/test-domain.com/key; # Do not overflow the SSL send buffer (causes extra round trips) ssl_buffer_size 8k; ssl_session_timeout 10m; ssl_protocols SSLv3 TLSv1; ssl_ciphers RC4-MD5; ssl_prefer_server_ciphers on; ssl_session_cache shared:SSL:10m; set $host_header $host; if ($http_host != "") { set $host_header $http_host; } location / { proxy_pass http://downstream_service/; proxy_http_version 1.1; proxy_set_header Connection ""; proxy_set_header Host $host_header; proxy_set_header X-Real-IP $remote_addr; # proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for_cip; limit_conn perserver1 180; } # Nginx health check only to verify server is up and running location /nginx_ping { return 200; } } } Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252002,252006#msg-252006 From vbart at nginx.com Thu Jul 24 09:21:10 2014 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 24 Jul 2014 13:21:10 +0400 Subject: reverse ssl proxy - speed & jitter In-Reply-To: References: Message-ID: <1466195.z08isJYjSb@vbart-workstation> On Wednesday 23 July 2014 14:00:36 newnovice wrote: > Full Config: > [..] You may get better results if you remove most of the "optimizations" (as you have called them). Please don't collect bad advice. You shouldn't use any directive without fully understanding what it's for, according to the official documentation (http://nginx.org/en/docs/), and what real problem you're trying to solve. Those spikes can be a result of "multi_accept on". > error_log /dev/null emerg; Everyone who added this line to config should be fired. > send_timeout 2; Mobile clients will blame you for this setting. wbr, Valentin V. Bartenev From nginx-forum at nginx.us Thu Jul 24 09:29:46 2014 From: nginx-forum at nginx.us (LucianaD) Date: Thu, 24 Jul 2014 05:29:46 -0400 Subject: nginx subdomains ajp module Message-ID: Hello, I tried to use nginx as a reverse proxy for Tomcat and JBoss applications (which live on other servers), using subdomains (e.g. app1.domain.com, app2.domain.com, etc.), but unfortunately the Tomcat and JBoss applications have some problems with it (using subdomains there's no way to make the JSPs available), so I decided to try the ajp_module. Now I've installed it and it seems to work, but I'm not able to configure subdomains for the apps. Any suggestion?
Thanks Luciana Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252011,252011#msg-252011 From nginx-forum at nginx.us Thu Jul 24 09:34:45 2014 From: nginx-forum at nginx.us (mex) Date: Thu, 24 Jul 2014 05:34:45 -0400 Subject: nginx subdomains ajp module In-Reply-To: References: Message-ID: <34272d27e1453ecb4d108845e10d975b.NginxMailingListEnglish@forum.nginx.org> Can you post your config please? Besides this, is there a reason you stick to the AJP connector? IIRC this is not a default module for nginx, and in my testing I found the HTTP connector as fast as AJP, but working somewhat more smoothly for Tomcat app servers. Regards, mex Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252011,252013#msg-252013 From nginx-forum at nginx.us Thu Jul 24 10:09:14 2014 From: nginx-forum at nginx.us (LucianaD) Date: Thu, 24 Jul 2014 06:09:14 -0400 Subject: nginx subdomains ajp module In-Reply-To: <34272d27e1453ecb4d108845e10d975b.NginxMailingListEnglish@forum.nginx.org> References: <34272d27e1453ecb4d108845e10d975b.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi, of course! I've used the standard package (the one I install with apt-get on ubuntu server) and I made this configuration: upstream tomcat_server { server tomcat.domain.com:8080; } server{ listen 80; server_name app1.domain.com; location / { proxy_pass http://tomcat_server/app1/; sub_filter /app1/ /; } } to use subdomains, but the application didn't work properly: once I logged in, the user session wasn't loaded, so I couldn't use it (after login, I always saw the login page with some parts of other pages), so someone advised me to use the ajp connector. I've installed nginx again, without using apt-get, and now my configuration is: worker_processes 1; #error_log logs/error.log; #error_log logs/error.log notice; #error_log logs/error.log info; #pid logs/nginx.pid; events { worker_connections 1024; } http { upstream tomcat { server tomcat.domain.com:8009; keepalive 20; } server { listen 80; server_name app1.domain.com; location / { ajp_keep_conn on; ajp_pass tomcat/app1/; } } } but I don't know how to configure it to have something like app1.domain.com and check if it works. With this config, when I go on app1.domain.com, I see the main page of the Tomcat server Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252011,252016#msg-252016 From nginx-forum at nginx.us Thu Jul 24 10:20:28 2014 From: nginx-forum at nginx.us (LucianaD) Date: Thu, 24 Jul 2014 06:20:28 -0400 Subject: nginx subdomains ajp module In-Reply-To: References: <34272d27e1453ecb4d108845e10d975b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <302596a86a116a5d9773325ce019fb23.NginxMailingListEnglish@forum.nginx.org> LucianaD Wrote: ------------------------------------------------------- > Hi, > of course! > I've used the standard package (the one I install with apt-get on > ubuntu server) and I made this configuration: > > upstream tomcat_server { > > server tomcat.domain.com:8080; > } > > server{ > listen 80; > server_name app1.domain.com; > location / { > > proxy_pass http://tomcat_server/app1/; > sub_filter /app1/ /; > } > } > > to use subdomains, but the application didn't work properly: once I > logged in, the user session wasn't loaded, so I couldn't use it > (after login, I always saw the login page with some parts of other > pages), so someone advised me to use the ajp connector.
> I've installed nginx again, without using apt-get, and now my > configuration is: > > worker_processes 1; > > #error_log logs/error.log; > #error_log logs/error.log notice; > #error_log logs/error.log info; > > #pid logs/nginx.pid; > > > events { > worker_connections 1024; > } > > > http { > upstream tomcat { > server tomcat.domain.com:8009; > keepalive 20; > > } > > server { > listen 80; > server_name app1.domain.com; > > location / { > ajp_keep_conn on; > ajp_pass tomcat/app1/; > } > } > } > > > but I don't know how to configure it to have something like > app1.domain.com and check if it works. > With this config, when I go on app1.domain.com, I see the main page of > the Tomcat server I should add (I can't find the Edit button :) ) that I have to use subdomains for the JBoss applications too, and I have a problem with those apps: when using subdomains, I can never see the login pages. I apologize for my English; I hope what I wrote is understandable Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252011,252017#msg-252017 From nginx-forum at nginx.us Thu Jul 24 10:31:52 2014 From: nginx-forum at nginx.us (gwilym) Date: Thu, 24 Jul 2014 06:31:52 -0400 Subject: FastCGI caching and X-Accel-Redirect responses Message-ID: <8ed4c810a1c50e619ab4776e7b71c321.NginxMailingListEnglish@forum.nginx.org> I have Nginx sitting in front of a php5-fpm pool via fastcgi. The application will serve static assets by means of responding with an X-Accel-Redirect to an internal location. My intention is to avoid repeatedly hitting the app for a particular URL for short periods of time to cover bursts of traffic. My (incomplete) cache settings are like so, as an example of things I have tried: fastcgi_cache FASTCGI_ASSETS; fastcgi_cache_key $request_method$scheme$host$request_uri; fastcgi_cache_use_stale off; fastcgi_cache_valid any 10s; fastcgi_cache_methods GET HEAD; fastcgi_ignore_headers Expires Cache-Control Set-Cookie; I have a feeling that there is some special case when X-Accel-Redirect is present in the response which causes it to not be cached. I cannot ignore the field because I need to use it to service the redirect within the app server, but if I hit "/foo" resulting in the app responding with an accel-redirect, I would like "/foo" to not hit the app again for a certain period. As it stands no combination of fcgi cache settings seems to work for me. I can get non-accel-redirect responses cached OK otherwise. Any request with X-Accel-Redirect in the header appears as http cacheable 0 in the debug log. I know I can achieve this by putting a second caching server in front of the app server but it would save me a lot of hassle if the above were possible. Some of the assets themselves could also be large so attempting to cache the content could be too much, so caching the redirect (freeing up the time cost of the app producing it) would be a win for me. This is Nginx 1.6.0 on debian squeeze (dotdeb build). Thank you. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252018,252018#msg-252018 From nginx-forum at nginx.us Thu Jul 24 12:51:01 2014 From: nginx-forum at nginx.us (espierre) Date: Thu, 24 Jul 2014 08:51:01 -0400 Subject: SYN_ACK issue as r-proxy with SSL and non SSL vhosts Message-ID: <2e0cd3636960a76982927587d3806a0b.NginxMailingListEnglish@forum.nginx.org> Hello, I have a reverse proxy serving both SSL and non SSL. When I serve an HTTPS host, SSL termination and forwarding work well. When I serve an HTTP host, forwarding is lost with a missing SYN_ACK.
I've added default backlog=1024 on every listen, with no success. Any hint or idea? O.S.: raspberry with debian, cubieboard2 with cubian nginx: raspi: 1.2.1 / cubie: 1.6.0 http hosts behind: perl dancer, domoticz... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252020,252020#msg-252020 From arut at nginx.com Thu Jul 24 13:29:11 2014 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 24 Jul 2014 17:29:11 +0400 Subject: FastCGI caching and X-Accel-Redirect responses In-Reply-To: <8ed4c810a1c50e619ab4776e7b71c321.NginxMailingListEnglish@forum.nginx.org> References: <8ed4c810a1c50e619ab4776e7b71c321.NginxMailingListEnglish@forum.nginx.org> Message-ID: <451A9880-A656-4BD3-9E8A-097928D227BD@nginx.com> On 24 Jul 2014, at 14:31, gwilym wrote: > I have Nginx sitting in front of a php5-fpm pool via fastcgi. The > application will serve static assets by means of responding with an > X-Accel-Redirect to an internal location. > > My intention is to avoid repeatedly hitting the app for a particular URL for > short periods of time to cover bursts of traffic. > > My (incomplete) cache settings are like so, as an example of things I have > tried: > > fastcgi_cache FASTCGI_ASSETS; > fastcgi_cache_key $request_method$scheme$host$request_uri; > fastcgi_cache_use_stale off; > fastcgi_cache_valid any 10s; > fastcgi_cache_methods GET HEAD; > fastcgi_ignore_headers Expires Cache-Control Set-Cookie; > > I have a feeling that there is some special case when X-Accel-Redirect is > present in the response which causes it to not be cached. I cannot ignore > the field because I need to use it to service the redirect within the app > server, but if I hit "/foo" resulting in the app responding with an > accel-redirect, I would like "/foo" to not hit the app again for a certain > period. > > As it stands no combination of fcgi cache settings seems to work for me. I > can get non-accel-redirect responses cached OK otherwise. Any request with > X-Accel-Redirect in the header appears as http cacheable 0 in the debug > log. > > I know I can achieve this by putting a second caching server in front of the > app server but it would save me a lot of hassle if the above were possible. > Some of the assets themselves could also be large so attempting to cache the > content could be too much, so caching the redirect (freeing up the time cost > of the app producing it) would be a win for me. Backend responses with X-Accel-Redirect are not cached unless this header is ignored. From nginx-forum at nginx.us Thu Jul 24 14:09:12 2014 From: nginx-forum at nginx.us (gzchenym) Date: Thu, 24 Jul 2014 10:09:12 -0400 Subject: Is calling ngx_http_discard_request_body() still necessary in modules only handle HTTP GETs ? Message-ID: Hi all: In nginx's native memcached module, I found that ngx_http_discard_request_body() was called right after a statement that only allows GET/HEAD requests to pass through. For ref: src/http/modules/ngx_http_memcached_module.c if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { return NGX_HTTP_NOT_ALLOWED; } rc = ngx_http_discard_request_body(r); However, in other places, such as ngx_http_empty_gif_module, that ngx_http_discard_request_body() call is simply missing. Usually an HTTP GET request should not have a request body. So, what's the point of having ngx_http_discard_request_body() in the memcached module? And which pattern should I use?
Regards, YM Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252022,252022#msg-252022 From nginx-forum at nginx.us Thu Jul 24 16:36:51 2014 From: nginx-forum at nginx.us (newnovice) Date: Thu, 24 Jul 2014 12:36:51 -0400 Subject: reverse ssl proxy - speed & jitter In-Reply-To: <1466195.z08isJYjSb@vbart-workstation> References: <1466195.z08isJYjSb@vbart-workstation> Message-ID: <496fda83c19c7136d2322441437ead63.NginxMailingListEnglish@forum.nginx.org> What is the fastest SSL connection setup time that anyone can achieve? And how do I reduce the jitter/variance? So what settings should I take out and what should I have - do you have an example of an optimal config? The logs are turned off to see first if this is even a viable option; I can turn up debug and check stuff if needed. Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252002,252023#msg-252023 From nginx-forum at nginx.us Thu Jul 24 16:46:35 2014 From: nginx-forum at nginx.us (newnovice) Date: Thu, 24 Jul 2014 12:46:35 -0400 Subject: reverse ssl proxy - speed & jitter In-Reply-To: <496fda83c19c7136d2322441437ead63.NginxMailingListEnglish@forum.nginx.org> References: <1466195.z08isJYjSb@vbart-workstation> <496fda83c19c7136d2322441437ead63.NginxMailingListEnglish@forum.nginx.org> Message-ID: This is for a very fast, internal-only, API-driven service, not serving webpages/static files/multimedia... Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252002,252024#msg-252024 From mdounin at mdounin.ru Thu Jul 24 18:02:48 2014 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 24 Jul 2014 22:02:48 +0400 Subject: Is calling ngx_http_discard_request_body() still necessary in modules only handle HTTP GETs ? In-Reply-To: References: Message-ID: <20140724180248.GP1849@mdounin.ru> Hello! On Thu, Jul 24, 2014 at 10:09:12AM -0400, gzchenym wrote: > Hi all: > > In nginx's native memcached module, I found that > ngx_http_discard_request_body() was called right after a statement that only > allows GET/HEAD requests to pass through. > > For ref: src/http/modules/ngx_http_memcached_module.c > > > if (!(r->method & (NGX_HTTP_GET|NGX_HTTP_HEAD))) { > return NGX_HTTP_NOT_ALLOWED; > } > > rc = ngx_http_discard_request_body(r); > > > However, in other places, such as ngx_http_empty_gif_module, that > ngx_http_discard_request_body() call is simply missing. Usually an HTTP > GET request should not have a request body. So, what's the point of having > ngx_http_discard_request_body() in the memcached module? And which pattern > should I use? The empty gif module doesn't call ngx_http_discard_request_body() directly because it uses ngx_http_send_response(), which will call it. (Previously, there was an explicit call - it was removed by the http://hg.nginx.org/nginx/rev/18f1cb12c6d7 changeset.) All content handlers should either read or discard the request body, either directly or indirectly. If that's not done, it will result in problems whenever a request arrives with a body. -- Maxim Dounin http://nginx.org/ From lists at ruby-forum.com Fri Jul 25 07:38:18 2014 From: lists at ruby-forum.com (Sophie Moussion) Date: Fri, 25 Jul 2014 09:38:18 +0200 Subject: difficulty adding headers In-Reply-To: References: Message-ID: <8f7aae775fe9eba12758721b39b71234@ruby-forum.com> Ah, your problem seems to be a bit complicated; I won't be able to help you, sorry. ---------------------------------------------------------------- -- Posted via http://www.ruby-forum.com/.
From nginx-forum at nginx.us Fri Jul 25 07:51:36 2014 From: nginx-forum at nginx.us (sleepingstu) Date: Fri, 25 Jul 2014 03:51:36 -0400 Subject: Proxy buffering In-Reply-To: <20140509040751.GQ1849@mdounin.ru> References: <20140509040751.GQ1849@mdounin.ru> Message-ID: <944354074cf6a287ff325d9e66819b3c.NginxMailingListEnglish@forum.nginx.org> Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Thu, May 08, 2014 at 04:45:18AM -0400, JSurf wrote: > > > > I'll plan to work on this and related problems at the start of > > > next year. > > > > > > > Hi, is this still somewhere on the priority list ? > > Yes, it's still in the list. > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Hi Maxim, I'm sure you are fed up with people asking, but I don't suppose you'd happen to have a rough ETA of when you might expect this feature to be introduced? It sounds like this feature would help a lot of people, me included. Stu Posted at Nginx Forum: http://forum.nginx.org/read.php?2,244680,252035#msg-252035 From shahzaib.cb at gmail.com Fri Jul 25 10:23:44 2014 From: shahzaib.cb at gmail.com (shahzaib shahzaib) Date: Fri, 25 Jul 2014 15:23:44 +0500 Subject: Download full mp4 file with proxy_cache or proxy_store !! In-Reply-To: References: <257A5DF7-C3BA-423A-AF2D-B988C1F1EC7C@nginx.com> <205b533d40ac3218387894a3dce207ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: mp4 seeking is filling up the disk rapidly on the edge server using proxy_cache, and also incoming bandwidth is always higher than outgoing bandwidth (nload). Maybe people are seeking into mp4 files and full videos are getting downloaded again and again. How can I manage the mp4 seeking on the edge server? Will proxy_store resolve the issue? I really need to find the solution. Btw, the nginx version is 1.6. Regards. On Mon, Jun 23, 2014 at 11:06 PM, shahzaib shahzaib wrote: > >> You can use proxy_store with the mp4 module. > So, proxy_store is able to download the whole mp4 file once and then serve > that file locally without fetching it each time from the origin if users seek > through the video ? > > > On Mon, Jun 23, 2014 at 7:43 PM, Roman Arutyunyan wrote: >> >> On 23 Jun 2014, at 17:15, itpp2012 wrote: >> >> > Roman Arutyunyan Wrote: >> > ------------------------------------------------------- >> >> Moreover the mp4 module does not work over proxy cache. That means >> >> even if you fix the cache key issue >> >> mp4 seeking will not work. You need to have a local mp4 file to be >> >> able to seek mp4 like that. >> > >> > Hmm, what about a hack, if the file is cached keep a link to the cached file >> > and its original name, if the next request matches a cached file and its >> > original name and a seek is requested then pass the cache via its original >> > name to allow seeking on the local (but cached) file. >> >> You can use proxy_store with the mp4 module. >> >> Having a link to an nginx cache file is wrong since the cache file has an internal header and >> HTTP headers. A cached mp4 entry is not a valid mp4, meaning you can't play it directly >> without stripping headers. >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From rpaprocki at fearnothingproductions.net Fri Jul 25 16:30:04 2014 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 25 Jul 2014 09:30:04 -0700 Subject: [nginx] Is proxy_cache_valid required? Message-ID: <53D2860C.6090502@fearnothingproductions.net> Hello! I had trouble this morning setting up a basic cache with a proxy. Based on the proxy documentation and http://nginx.com/resources/admin-guide/caching/, I did not expect to have to set proxy_cache_valid; however, when this directive was not set anywhere, I saw no cache files written. My config file is as below: worker_processes 1; user freewaf freewaf; error_log logs/error.log debug; worker_rlimit_core 500M; working_directory /tmp; events { worker_connections 1024; } http { lua_package_path '/usr/local/openresty/lualib/fw/?.lua;;'; lua_shared_dict fw_shm 50m; lua_regex_match_limit 100000000; client_body_buffer_size 512k; client_max_body_size 2m; proxy_http_version 1.1; proxy_cache_path /fw/shm/cache levels=1:2 keys_zone=fw:32m; include conf.d/*.conf; } upstream upstream_2 { server 23.226.226.175 ; } server { server_name cryptobells.com www.cryptobells.com; access_log logs/cryptobells.com.access.log; error_log logs/cryptobells.com.error.log; client_max_body_size 2m; listen 80; proxy_cache fw; proxy_cache_valid 200 302 60m; proxy_cache_valid 404 1m; location / { default_type text/html; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://upstream_2; } location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ { expires 1d; proxy_set_header Host $host; proxy_set_header X-Forwarded-For $remote_addr; proxy_pass http://upstream_2; } } However, with the following commented out: proxy_cache_valid 200 302 60m; proxy_cache_valid 404 1m; No cached content was written. Debug logs don't show anything out of the ordinary (though I will post if you like); any thoughts on this? From braulio at eita.org.br Fri Jul 25 16:35:44 2014 From: braulio at eita.org.br (Bráulio Bhavamitra) Date: Fri, 25 Jul 2014 13:35:44 -0300 Subject: [nginx] Is proxy_cache_valid required? In-Reply-To: <53D2860C.6090502@fearnothingproductions.net> References: <53D2860C.6090502@fearnothingproductions.net> Message-ID: Interesting question... I also don't see the need of that directive. On Fri, Jul 25, 2014 at 1:30 PM, Robert Paprocki < rpaprocki at fearnothingproductions.net> wrote: > Hello! > > I had trouble this morning setting up a basic cache with a proxy. Based > on the proxy documentation and > http://nginx.com/resources/admin-guide/caching/, I did not expect to > have to set proxy_cache_valid; however, when this directive was not set > anywhere, I saw no cache files written.
> My config file is as below:
>
> worker_processes 1;
> user freewaf freewaf;
> error_log logs/error.log debug;
> worker_rlimit_core 500M;
> working_directory /tmp;
>
> events {
>     worker_connections 1024;
> }
>
> http {
>     lua_package_path '/usr/local/openresty/lualib/fw/?.lua;;';
>     lua_shared_dict fw_shm 50m;
>     lua_regex_match_limit 100000000;
>
>     client_body_buffer_size 512k;
>     client_max_body_size 2m;
>     proxy_http_version 1.1;
>
>     proxy_cache_path /fw/shm/cache levels=1:2 keys_zone=fw:32m;
>
>     include conf.d/*.conf;
> }
>
> upstream upstream_2 {
>     server 23.226.226.175;
> }
>
> server {
>     server_name cryptobells.com www.cryptobells.com;
>     access_log logs/cryptobells.com.access.log;
>     error_log logs/cryptobells.com.error.log;
>     client_max_body_size 2m;
>     listen 80;
>     proxy_cache fw;
>     proxy_cache_valid 200 302 60m;
>     proxy_cache_valid 404 1m;
>
>     location / {
>         default_type text/html;
>         proxy_set_header Host $host;
>         proxy_set_header X-Forwarded-For $remote_addr;
>         proxy_pass http://upstream_2;
>     }
>
>     location ~* \.(jpg|jpeg|gif|png|css|js|ico|xml)$ {
>         expires 1d;
>         proxy_set_header Host $host;
>         proxy_set_header X-Forwarded-For $remote_addr;
>         proxy_pass http://upstream_2;
>     }
> }
>
> However, with the following commented out:
>
> proxy_cache_valid 200 302 60m;
> proxy_cache_valid 404 1m;
>
> No cached content was written. Debug logs don't show anything out of
> the ordinary (though I will post them if you like); any thoughts on this?
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação
da Mente Macrocósmica, e todas as entidades estão sendo criadas,
preservadas e destruídas nas fases de extroversão e introversão do fluxo
imaginativo cósmico. No âmbito pessoal, quando uma pessoa imagina algo em
sua mente, naquele momento, essa pessoa é a única proprietária daquilo que
ela imagina, e ninguém mais. Quando um ser humano criado mentalmente
caminha por um milharal também imaginado, a pessoa imaginada não é a
propriedade desse milharal, pois ele pertence ao indivíduo que o está
imaginando. Este universo foi criado na imaginação de Brahma, a Entidade
Suprema, por isso a propriedade deste universo é de Brahma, e não dos
microcosmos que também foram criados pela imaginação de Brahma. Nenhuma
propriedade deste mundo, mutável ou imutável, pertence a um indivíduo em
particular; tudo é o patrimônio comum de todos."

Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Fri Jul 25 16:49:26 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Fri, 25 Jul 2014 20:49:26 +0400
Subject: [nginx] Is proxy_cache_valid required?
In-Reply-To: <53D2860C.6090502@fearnothingproductions.net>
References: <53D2860C.6090502@fearnothingproductions.net>
Message-ID: <20140725164926.GY1849@mdounin.ru>

Hello!

On Fri, Jul 25, 2014 at 09:30:04AM -0700, Robert Paprocki wrote:

> Hello!
>
> I had trouble this morning setting up a basic cache with a proxy.
> Based on the proxy documentation and
> http://nginx.com/resources/admin-guide/caching/, I did not expect to
> have to set proxy_cache_valid; however, when this directive was not set
> anywhere, I saw no cache files written.

The proxy_cache_valid directives are needed if the backend response
doesn't indicate cacheability of the response with "Cache-Control:
max-age=...", "Expires", or "X-Accel-Expires" (or if these headers
are ignored using the "proxy_ignore_headers" directive).

--
Maxim Dounin
http://nginx.org/
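[Illustration - the two alternatives Maxim describes, as a minimal sketch; the zone name and lifetimes are examples only:]

    # Option 1: the proxy decides how long responses stay valid
    proxy_cache       fw;
    proxy_cache_valid 200 302 60m;
    proxy_cache_valid 404     1m;

    # Option 2: no proxy_cache_valid at all; the backend marks responses
    # cacheable itself, e.g. with a "Cache-Control: max-age=3600" or
    # "Expires" header. If the backend sends neither and nothing is
    # configured, nothing gets cached - the behavior observed above.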
From rpaprocki at fearnothingproductions.net Fri Jul 25 16:59:56 2014
From: rpaprocki at fearnothingproductions.net (Robert Paprocki)
Date: Fri, 25 Jul 2014 09:59:56 -0700
Subject: [nginx] Is proxy_cache_valid required?
In-Reply-To: <20140725164926.GY1849@mdounin.ru>
References: <53D2860C.6090502@fearnothingproductions.net> <20140725164926.GY1849@mdounin.ru>
Message-ID: <53D28D0C.1030902@fearnothingproductions.net>

Thanks, this was indeed the problem - I should have checked that first. Thank you as always Maxim! :D

On 07/25/2014 09:49 AM, Maxim Dounin wrote:
> Hello!
>
> On Fri, Jul 25, 2014 at 09:30:04AM -0700, Robert Paprocki wrote:
>
>> Hello!
>>
>> I had trouble this morning setting up a basic cache with a proxy. Based
>> on the proxy documentation and
>> http://nginx.com/resources/admin-guide/caching/, I did not expect to
>> have to set proxy_cache_valid; however, when this directive was not set
>> anywhere, I saw no cache files written.
>
> The proxy_cache_valid directives are needed if the backend response
> doesn't indicate cacheability of the response with "Cache-Control:
> max-age=...", "Expires", or "X-Accel-Expires" (or if these headers
> are ignored using the "proxy_ignore_headers" directive).

From nginx-forum at nginx.us Sat Jul 26 02:11:54 2014
From: nginx-forum at nginx.us (ishiber)
Date: Fri, 25 Jul 2014 22:11:54 -0400
Subject: Nginx support for HLS
Message-ID:

Hi,

I am going to deploy a streaming server and stream video using HLS and want to use nginx. I have a few special requirements that I wanted to understand whether nginx supports. It would be great if anyone knows whether nginx could be configured to do the following over HLS:

- Support multiple audio tracks for the same video, so that on one hand the viewer, prior to playing the video, could choose a soundtrack language (i.e. English, French etc.), and on the other hand I won't need to duplicate the video for each language. Also, if this is possible - do I need to embed all audio tracks into the same video file in advance, or are they going to be separate files, combined on the fly?

- Support DRM, so the video can be encrypted on disk and "certificates" be generated on the fly for the viewers after they are identified?

Thank you in advance,
Ilan

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252054,252054#msg-252054

From reallfqq-nginx at yahoo.fr Sat Jul 26 13:29:30 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sat, 26 Jul 2014 15:29:30 +0200
Subject: Nginx support for HLS
In-Reply-To:
References:
Message-ID:

On Sat, Jul 26, 2014 at 4:11 AM, ishiber wrote:

> - Support multiple audio tracks for the same video so on one hand the
> viewer, prior to playing the video, could choose a soundtrack language (i.e.
> English, French etc.) and on the other hand I won't need to duplicate the
> video for each language. Also, if this is possible - do I need to embed all
> audio tracks into the same video file in advance or are they going to be
> separate files, combined on the fly?
>
> - Support DRM, so the video can be encrypted on the disk and "certificates"
> be generated on the fly to the viewers after they got identified?

Those are backend behavior. Why are you asking about all that on the nginx (i.e. webserver) ML?

Keep in mind: reading 'DRM' could turn people away from helping you. Closed mind concepts = closed mind help = how much are you willing to pay for it? :oP

It is strange to see people *removing the ability to share* videos using technology provided to them for *free* by the FOSS way of thinking... Btw I am sure that you would be considering Nginx Plus. :oD
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Sat Jul 26 15:22:29 2014
From: nginx-forum at nginx.us (ishiber)
Date: Sat, 26 Jul 2014 11:22:29 -0400
Subject: Nginx support for HLS
In-Reply-To:
References:
Message-ID: <123b6801113ee5ecc35b9634aab46c6e.NginxMailingListEnglish@forum.nginx.org>

Thanks *B. R.* for the info and clarification. Just to make it clear - I am not planning to protect any content; however, some of the files that I'm going to stream might come from origins that require DRM, so my options are either to choose a system that allows it or pass. Also, nginx has modules that do HLS processing on the fly (e.g. generating playlists), so if the above is also available, it would be great.

What would be the right ML for this discussion?

Cheers,
Ilan

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252054,252059#msg-252059

From nginx-forum at nginx.us Sun Jul 27 10:51:34 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Sun, 27 Jul 2014 06:51:34 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
Message-ID:

22:40 26-7-2014 nginx 1.7.4.2 WhiteRabbit

"I'm late! I'm late! For a very important date! No time to say hello, goodbye! I'm late! I'm late! I'm late!"

The nginx WhiteRabbit release is here! Based on nginx 1.7.4 (25-7-2014, last changeset 5771:c3b08217f2a2) with;
+ See Install_nginx_php_services.zip on site !
+ set-misc-nginx-module v0.24 (upgraded 26-7-2014)
+ echo-nginx-module v0.54 (upgraded 19-7-2014)
+ lua-nginx-module v0.9.11 (upgraded 25-7-2014)
+ form-input-nginx-module v0.09 (upgraded 23-7-2014)
+ Source changes back ported
+ Source changes add-on's back ported
+ Changes for nginx_basic: Source changes back ported
* Scheduled release: yes
* Additional specifications: see 'Feature list'
* This release is dedicated to our beloved Yorkshire terrier Peewee who, aged 11.5 years, passed away on Sunday July 20 at 15.15; we shall miss him dearly.

Builds can be found here: http://nginx-win.ecsds.eu/
Follow releases https://twitter.com/nginx4Windows

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252064#msg-252064

From nginx-forum at nginx.us Sun Jul 27 11:37:01 2014
From: nginx-forum at nginx.us (husseingalal)
Date: Sun, 27 Jul 2014 07:37:01 -0400
Subject: break flag in rewrite directive
Message-ID: <0c47c65272893daacadbd7f5d49ba497.NginxMailingListEnglish@forum.nginx.org>

Hi,
I wanted to ask about the break flag in the rewrite directive. As I understand it, nginx does not initiate a new request for the modified URI, but I can't understand how this would be useful?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252065,252065#msg-252065

From mdounin at mdounin.ru Sun Jul 27 16:54:01 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Sun, 27 Jul 2014 20:54:01 +0400
Subject: break flag in rewrite directive
In-Reply-To: <0c47c65272893daacadbd7f5d49ba497.NginxMailingListEnglish@forum.nginx.org>
References: <0c47c65272893daacadbd7f5d49ba497.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20140727165400.GA1849@mdounin.ru>

Hello!

On Sun, Jul 27, 2014 at 07:37:01AM -0400, husseingalal wrote:

> Hi,
> I wanted to ask about the break flag in the rewrite directive. As I
> understand it, nginx does not initiate a new request for the modified URI,
> but I can't understand how this would be useful?

There is an example in the docs, http://nginx.org/r/rewrite:

    location /download/ {
        rewrite ^(/download/.*)/media/(.*)\..*$ $1/mp3/$2.mp3 break;
        rewrite ^(/download/.*)/audio/(.*)\..*$ $1/mp3/$2.ra  break;
        return  403;
    }

The "break" flag is used here to change a URI, but keep processing in the already matched location. For example, a request to "/download/foo/media/bar.mp3" will be mapped to the "/root/download/foo/mp3/bar.mp3" file, where "/root" is the document root in the "/download/" location.

--
Maxim Dounin
http://nginx.org/
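[Illustration - the same mechanism in a minimal sketch with hypothetical paths; "break" changes only the file being looked up, while "last" would restart location matching with the new URI:]

    location /files/ {
        root /data;
        # a request to /files/foo.txt is served from /data/archive/foo.txt,
        # and /archive/foo.txt is never matched against other locations
        rewrite ^/files/(.*)$ /archive/$1 break;
    }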
From arut at nginx.com Mon Jul 28 11:37:50 2014
From: arut at nginx.com (Roman Arutyunyan)
Date: Mon, 28 Jul 2014 15:37:50 +0400
Subject: Nginx support for HLS
In-Reply-To:
References:
Message-ID:

Hi,

On 26 Jul 2014, at 06:11, ishiber wrote:

> I am going to deploy a streaming server and stream video using HLS and want
> to use Nginx. I have a few special requirements that I wanted to understand
> whether Nginx supports.
>
> It would be great if anyone knows whether Nginx could be configured to do
> the following over HLS:
>
> - Support multiple audio tracks for the same video so on one hand the
> viewer, prior to playing the video, could choose a soundtrack language (i.e.
> English, French etc.) and on the other hand I won't need to duplicate the
> video for each language. Also, if this is possible - do I need to embed all
> audio tracks into the same video file in advance or are they going to be
> separate files, combined on the fly?

All audio/video tracks are copied from the mp4 to the HLS. So the viewer can choose any tracks at the client side (if the client software can do that). If you want nginx to choose the tracks on certain criteria while generating HLS, that's not supported now, but this feature is in the roadmap.

> - Support DRM, so the video can be encrypted on the disk and "certificates"
> be generated on the fly to the viewers after they got identified?

DRM is not supported in the HLS module, but it's in the roadmap as well.

You can write to nginx-inquiries at nginx.com for more details and feature requests.

From nginx-forum at nginx.us Mon Jul 28 13:26:40 2014
From: nginx-forum at nginx.us (jbochi)
Date: Mon, 28 Jul 2014 09:26:40 -0400
Subject: Module Advice - Cassandra / Thrift
In-Reply-To: <304B7C10FDD644DB85A9079CCD8EBE8F@Desktop>
References: <304B7C10FDD644DB85A9079CCD8EBE8F@Desktop>
Message-ID: <91d18e97fae65bd255fac3c8f33d5ad5.NginxMailingListEnglish@forum.nginx.org>

Hi,

I know this thread is already two and a half years old now, but if someone ends up here googling for a Cassandra module, I have implemented one in Lua using the CQL binary protocol: https://github.com/jbochi/lua-resty-cassandra

Cheers,
Juarez

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,221807,252079#msg-252079

From nginx-forum at nginx.us Mon Jul 28 15:35:51 2014
From: nginx-forum at nginx.us (lorenanicole)
Date: Mon, 28 Jul 2014 11:35:51 -0400
Subject: [nginx] Using User-Agent & IP address to rate limit
Message-ID:

Nginx novice here. After spending some time both here and in other community forums, plus some trial and error, I'm looking for confirmation of my current nginx config and/or suggestions for a better one. The end goal is to use both the IP address and User-Agent to rate limit requests being proxied to an external API.

Currently the config sets zones with their respective rate limits and bursts, using the IP address as the key. Inside the main location directive the User-Agent is read, and based on the User-Agent the URI is rewritten to the location with the appropriate zone.

http {
    include mime.types;
    default_type application/octet-stream;

    limit_req_zone $binary_remote_addr zone=one:10m rate=136r/s;
    limit_req_zone $binary_remote_addr zone=two:10m rate=150r/s;
    limit_req_zone $binary_remote_addr zone=three:10m rate=160r/s;
    limit_req_zone $binary_remote_addr zone=four:10m rate=30r/m;

    sendfile on;
    keepalive_timeout 65;

    server {
        listen 443;
        server_name localhost;

        ssl on;
        ssl_certificate /etc/nginx/ssl/nginx.crt;
        ssl_certificate_key /etc/nginx/ssl/nginx.key;
        ssl_session_timeout 5m;
        ssl_protocols SSLv2 SSLv3 TLSv1;
        ssl_ciphers HIGH:!aNULL:!MD5;
        ssl_prefer_server_ciphers on;
        proxy_ssl_session_reuse off;
        large_client_header_buffers 4 32K;

        location /java {
            limit_req zone=one burst=140;
            log_format java '$remote_addr - $remote_user [$time_local]'
                '"$request" | STATUS: $status | BODY BYTES: $body_bytes_sent |'
                '"$http_referer" "$http_user_agent"| GET PARAMS: $args | REQ BODY: $request_body';
            access_log /var/log/nginx-access.log java;
            proxy_pass https://example.com/;
        }

        location /python {
            limit_req zone=two burst=140;
            #echo "You made it here with: " $request_body "and this: " $args "and this: " $uri "and this: " $1;
            log_format python '$remote_addr - $remote_user [$time_local]'
                '"$request" | STATUS: $status | BODY BYTES: $body_bytes_sent |'
                '"$http_referer" "$http_user_agent"| GET PARAMS: $args | REQ BODY: $request_body';
            access_log /var/log/nginx-access.log python;
            proxy_pass https://example.com/;
        }

        location /etc {
            limit_req zone=four burst=1;
            log_format etc '$remote_addr - $remote_user [$time_local]'
                '"$request" | STATUS: $status | BODY BYTES: $body_bytes_sent |'
                '"$http_referer" "$http_user_agent"| GET PARAMS: $args | REQ BODY: $request_body';
            access_log /var/log/nginx-access.log etc;
            proxy_pass https://example.com/;
        }

        location / {
            root html;
            index index.html index.htm;
            if ($http_user_agent = Java/1.6.0_65) {
                rewrite ^(.*)$ /java$uri last;
            }
            if ($http_user_agent = python) {
                rewrite ^(.*)$ /python$uri last;
            }
            if ($http_user_agent = "") {
                rewrite ^(.*)$ /etc$uri last;
            }
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
            root html;
        }
    }
}

The concern here is whether there is a way to redirect the rewritten URI without having to break out and start processing the request again (the "last" argument)? Additionally, is setting the zones using the IP address as the key the proper way to control these different rate limiting and burst thresholds?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252085,252085#msg-252085

From reallfqq-nginx at yahoo.fr Mon Jul 28 16:16:27 2014
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Mon, 28 Jul 2014 18:16:27 +0200
Subject: [nginx] Using User-Agent & IP address to rate limit
In-Reply-To:
References:
Message-ID:

On Mon, Jul 28, 2014 at 5:35 PM, lorenanicole wrote:

> The concern here is whether there is a way to redirect the rewritten URI
> without having to break out and start processing the request again (the
> "last" argument)? Additionally, is setting the zones using the IP address
> as the key the proper way to control these different rate limiting and
> burst thresholds?

I am no expert, but to my own eyes, I would have avoided using:
1) A series of 'if', whose behavior might be clumsy
2) Redirections, which kind of 'cut the flow'

What I would have done:
1) Define all the log_format stuff at the http level: the sooner, the better
2) Replace the if series with a map (also in http) like the following:

map $http_user_agent $language {
    Java/*  java
    ...
    ""      etc
    default none
}

3) Use the output of the first map as the input of two others to define zone and burst amount, like:

map $language $zone {
    java one
}

map $language $burst {
    java 140
}

4) Process all the requests in the default location:

location / {
    root html;
    index index.html;

    if ($language != none) {
        limit_req zone=$zone burst=$burst;
        access_log /var/log/nginx/access.log $language;
        proxy_pass https://example.com;
    }
    ...
}

All that is not error-proof, coming straight out of my mind with no testing... but you get the general idea.
---
*B. R.*
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at nginx.us Mon Jul 28 16:43:14 2014
From: nginx-forum at nginx.us (lorenanicole)
Date: Mon, 28 Jul 2014 12:43:14 -0400
Subject: [nginx] Using User-Agent & IP address to rate limit
In-Reply-To:
References:
Message-ID:

Thanks for the prompt feedback! Yes, the continuous if directives put my teeth on edge as well.

Using a map block to introduce variables for $zone and $burst respectively - I tried this already and had continuous errors. Attempting this again (per your suggestions), I get this error:

nginx: [emerg] invalid burst rate "burst=$burst" in /usr/local/nginx/conf/nginx.conf:69

with line 69 as follows:

limit_req zone=$zone burst=$burst;

Reading the limit_req documentation (http://nginx.org/en/docs/http/ngx_http_limit_req_module.html), there is no mention that you can use a variable to set these values. Likewise, I do not believe you can use limit_req inside the context of an if directive (as you can with proxy_pass, for example - http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass). My attempt to read off the value of $http_user_agent and set the limit_req inside an if block resulted in many "nginx: [emerg]" errors.

My second thought now is to use nested locations after doing a rewrite on the $uri, and inside these nested locations set the limit_req, log the info, and proxy_pass along. This way the request doesn't have to start over.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252085,252087#msg-252087
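[Illustration - a variant of B.R.'s idea that stays within nginx's actual constraints: the zone= and burst= parameters of limit_req cannot be variables, but the key of a limit_req_zone can be. A sketch with made-up zone names and rates; the "empty key is not accounted" behavior should be verified against the nginx version in use:]

    # The key is empty unless the User-Agent matches, so each request is
    # only counted in the zone for "its" client class.
    map $http_user_agent $java_key   { default ""; "~^Java"   $binary_remote_addr; }
    map $http_user_agent $python_key { default ""; "~^python" $binary_remote_addr; }

    limit_req_zone $java_key   zone=java:10m   rate=136r/s;
    limit_req_zone $python_key zone=python:10m rate=150r/s;

    server {
        location / {
            # several limit_req directives may apply to the same location
            limit_req zone=java   burst=140;
            limit_req zone=python burst=140;
            proxy_pass https://example.com;
        }
    }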
From nginx-forum at nginx.us Mon Jul 28 18:59:27 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Mon, 28 Jul 2014 14:59:27 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To:
References:
Message-ID: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org>

Thanks itpp2012, love it <3 Any plans for perl in your nginx builds?
http://wiki.nginx.org/Modules
http://nginx.org/en/docs/http/ngx_http_perl_module.html

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252088#msg-252088

From nginx-forum at nginx.us Mon Jul 28 19:58:40 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Mon, 28 Jul 2014 15:58:40 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org>
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org>

It was on the todo list, but it's not that simple, for example:
http://stackoverflow.com/questions/20376990/perl-cgi-vs-fastcgi
http://forums.iis.net/t/1107796.aspx?FastCGI+Perl
Basically you can take any fcgi wrapper and adjust some minor stuff for Windows, but you'd have to rewrite the socket part into a tcp port, just like php works. Or use a simple apache/modperl as (loadbalanced) backend(s).

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252089#msg-252089

From badalex at gmail.com Mon Jul 28 22:37:10 2014
From: badalex at gmail.com (Alex Hunsaker)
Date: Mon, 28 Jul 2014 16:37:10 -0600
Subject: Nginx + boringSSL
In-Reply-To:
References:
Message-ID:

On Sun, Jul 13, 2014 at 7:58 PM, Alex Hunsaker wrote:
> I've started playing around with boringssl with nginx.
...
> Anyway, I'm pleased to report everything seems to work!

Please find attached v2. Changes:
- use for feature detection; it's designed to more or less be compatible with libressl, so I suspect this patch might work with libressl as well
- fix deprecated use of RSA_generate_key(); the old patch just ripped out calling this function
- report an error if you try to set ssl_engine if OPENSSL_NO_ENGINE or OPENSSL_NO_DYNAMIC_ENGINE, instead of just silently ignoring the directive
- include if OPENSSL_VERSION >= 1.0.2
-------------- next part --------------
A non-text attachment was scrubbed...
Name: boringssl_nginx_v2.patch
Type: text/x-patch
Size: 2958 bytes
Desc: not available
URL:

From nginx-forum at nginx.us Tue Jul 29 09:52:08 2014
From: nginx-forum at nginx.us (sopato)
Date: Tue, 29 Jul 2014 05:52:08 -0400
Subject: Nginx + boringSSL
In-Reply-To:
References:
Message-ID:

Everything is OK, but when I add the ssl module, such as:

./configure --with-openssl=../boringssl --prefix=/srv1/nginx --with-http_ssl_module

the make process errors out. What can I do next? Thanks.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,251740,252100#msg-252100
From braulio at eita.org.br Tue Jul 29 19:49:23 2014
From: braulio at eita.org.br (Bráulio Bhavamitra)
Date: Tue, 29 Jul 2014 16:49:23 -0300
Subject: Bad default for proxy_cache_key
In-Reply-To: <20140630234836.GN1849@mdounin.ru>
References: <20140630234836.GN1849@mdounin.ru>
Message-ID:

Hmm, documentation of this use case is recommended. Personally, for me it is completely unknown and uncommon.

Also, isn't caching entirely related to the URL the user used, and nothing to do with the backend host?

On Mon, Jun 30, 2014 at 8:48 PM, Maxim Dounin wrote:

> Hello!
>
> On Sun, Jun 29, 2014 at 06:15:56PM -0300, Bráulio Bhavamitra wrote:
>
> > Hello all,
> >
> > I was stuck a while with a config problem where the proxy_cache_key
> > default value was "$scheme$proxy_host$uri$is_args$args".
> >
> > Then I realized it really didn't make sense. A better value,
> > "$scheme$host$uri$is_args$args", is much more reasonable, as the reverse
> > proxy requests come from many server {} blocks with multiple server
> > names and aliases.
> >
> > Shouldn't the default be changed?
>
> The default key is to identify resources nginx requests from
> upstream servers. That is, these are the same:
>
> server {
>     server_name bar;
>
>     location / {
>         proxy_pass http://foo.example.com;
>     }
> }
>
> server {
>     server_name bazz;
>
>     location / {
>         proxy_pass http://foo.example.com;
>     }
> }
>
> While these are different:
>
> server {
>     server_name foo;
>
>     location / {
>         set $backend "foo.example.com";
>
>         if ($user_is_admin) {
>             set $backend "admin.example.com";
>         }
>
>         proxy_pass http://$backend;
>     }
> }
>
> If in your case multiple such resources are equal or different
> based on other factors (likely, due to "proxy_set_header Host ..."
> in your configuration), you are free to change proxy_cache_key
> accordingly.
>
> --
> Maxim Dounin
> http://nginx.org/
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

--
"Lute pela sua ideologia. Seja um com sua ideologia. Viva pela sua
ideologia. Morra por sua ideologia" P.R. Sarkar

EITA - Educação, Informação e Tecnologias para Autogestão
http://cirandas.net/brauliobo
http://eita.org.br

"Paramapurusha é meu pai e Parama Prakriti é minha mãe. O universo é meu
lar e todos nós somos cidadãos deste cosmo. Este universo é a imaginação
da Mente Macrocósmica, e todas as entidades estão sendo criadas,
preservadas e destruídas nas fases de extroversão e introversão do fluxo
imaginativo cósmico. No âmbito pessoal, quando uma pessoa imagina algo em
sua mente, naquele momento, essa pessoa é a única proprietária daquilo que
ela imagina, e ninguém mais. Quando um ser humano criado mentalmente
caminha por um milharal também imaginado, a pessoa imaginada não é a
propriedade desse milharal, pois ele pertence ao indivíduo que o está
imaginando. Este universo foi criado na imaginação de Brahma, a Entidade
Suprema, por isso a propriedade deste universo é de Brahma, e não dos
microcosmos que também foram criados pela imaginação de Brahma. Nenhuma
propriedade deste mundo, mutável ou imutável, pertence a um indivíduo em
particular; tudo é o patrimônio comum de todos."

Restante do texto em
http://cirandas.net/brauliobo/blog/a-problematica-de-hoje-em-dia
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From r1ch+nginx at teamliquid.net Tue Jul 29 20:46:03 2014
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Tue, 29 Jul 2014 22:46:03 +0200
Subject: Support for 3rd party zlib libraries
Message-ID:

Hello,

I recently came across a modified version of zlib with code contributed by Intel [1] that makes use of modern CPU instructions to increase performance. In testing, the performance gains seemed substantial; however, when I tried to use this version with nginx, the following alert types appeared in the error_log on gzip requests:

2014/07/30 05:20:29 [alert] 22285#0: *739837 gzip filter failed to use preallocated memory: 65536 of 65520 while sending to client
2014/07/30 05:40:42 [alert] 29462#0: *230460 gzip filter failed to use preallocated memory: 1040 of 4112

It appears nginx is pre-allocating a buffer based on the original zlib memory usage patterns; however, the Intel-optimized version has slightly higher memory requirements due to padding for SSE functions etc.

Is there a chance this version could be supported by nginx, or a configuration option made available to control the allocation size?

Thanks.

[1] https://github.com/jtkukunas/zlib

From agentzh at gmail.com Tue Jul 29 21:23:01 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 29 Jul 2014 14:23:01 -0700
Subject: Support for 3rd party zlib libraries
In-Reply-To:
References:
Message-ID:

Hello!

On Tue, Jul 29, 2014 at 1:46 PM, Richard Stanway wrote:
> I recently came across a modified version of zlib with code contributed by
> Intel [1] that makes use of modern CPU instructions to increase performance.
> In testing, the performance gains seemed substantial, however when I tried
> to use this version with nginx, the following alert types appeared in the
> error_log on gzip requests:
> [...]
>
> Is there a chance this version could be supported by nginx, or a
> configuration option made available to control the allocation size?

Well, I used to write a patch to enable IPP zlib (8.0) support in NGINX (enabled by ./configure --with-zlib-ipp), just for your reference:

# HG changeset patch
# User Yichun Zhang
# Date 1406668777 25200
#      Tue Jul 29 14:19:37 2014 -0700
# Node ID 2a54efe7a747af2f70cb8af0cff62910d6b84a7f
# Parent c038cc33739bbfab2ed50819191298471f22d233
Gzip: added support for IPP zlib 8.0.

This feature can now be enabled by ./configure --with-zlib-ipp.

diff -r c038cc33739b -r 2a54efe7a747 auto/lib/zlib/conf
--- a/auto/lib/zlib/conf	Fri Jul 25 14:43:29 2014 -0700
+++ b/auto/lib/zlib/conf	Tue Jul 29 14:19:37 2014 -0700
@@ -6,6 +6,15 @@
 if [ $ZLIB != NONE ]; then
     CORE_INCS="$CORE_INCS $ZLIB"

+    if [ "$ZLIB_IPP" = YES ]; then
+cat << END
+
+$0: error: option --with-zlib-ipp conflicts with --with-zlib=.
+
+END
+        exit 1
+    fi
+
     case "$NGX_CC_NAME" in

         msvc* | owc* | bcc)
@@ -53,18 +62,26 @@
     else
         ngx_feature_incs="#include "
         ngx_feature_path=
         ngx_feature_libs="-lz"
+        ngx_feature_test="z_stream z; deflate(&z, Z_NO_FLUSH)"
         . auto/feature

         if [ $ngx_found = yes ]; then
             CORE_LIBS="$CORE_LIBS $ngx_feature_libs"
-            ZLIB=YES
+
+            if [ "$ZLIB_IPP" = YES ]; then
+                have=NGX_HAVE_ZLIB_IPP . auto/have
+                ZLIB=IPP
+            else
+                ZLIB=YES
+            fi
+
             ngx_found=no
         fi
     fi

-    if [ $ZLIB != YES ]; then
+    if [ $ZLIB != YES -a $ZLIB != IPP ]; then

cat << END

$0: error: the HTTP gzip module requires the zlib library.
diff -r c038cc33739b -r 2a54efe7a747 auto/options
--- a/auto/options	Fri Jul 25 14:43:29 2014 -0700
+++ b/auto/options	Tue Jul 29 14:19:37 2014 -0700
@@ -133,6 +133,7 @@
 SHA1_OPT=
 SHA1_ASM=NO

 USE_ZLIB=NO
+ZLIB_IPP=NO
 ZLIB=NONE
 ZLIB_OPT=
 ZLIB_ASM=NO
@@ -299,6 +300,7 @@ use the \"--without-http_limit_conn_modu
             --with-sha1-opt=*)  SHA1_OPT="$value" ;;
             --with-sha1-asm)    SHA1_ASM=YES      ;;

+            --with-zlib-ipp)    ZLIB_IPP=YES      ;;
             --with-zlib=*)      ZLIB="$value"     ;;
             --with-zlib-opt=*)  ZLIB_OPT="$value" ;;
             --with-zlib-asm=*)  ZLIB_ASM="$value" ;;

diff -r c038cc33739b -r 2a54efe7a747 auto/summary
--- a/auto/summary	Fri Jul 25 14:43:29 2014 -0700
+++ b/auto/summary	Tue Jul 29 14:19:37 2014 -0700
@@ -65,6 +65,7 @@ esac

 case $ZLIB in
     YES)  echo " + using system zlib library" ;;
+    IPP)  echo " + using IPP zlib library" ;;
     NONE) echo " + zlib library is not used" ;;
     *)    echo " + using zlib library: $ZLIB" ;;
 esac

diff -r c038cc33739b -r 2a54efe7a747 src/http/modules/ngx_http_gzip_filter_module.c
--- a/src/http/modules/ngx_http_gzip_filter_module.c	Fri Jul 25 14:43:29 2014 -0700
+++ b/src/http/modules/ngx_http_gzip_filter_module.c	Tue Jul 29 14:19:37 2014 -0700
@@ -521,7 +521,18 @@ ngx_http_gzip_filter_memory(ngx_http_req
      * *) 5920 bytes on amd64 and sparc64
      */

+#if NGX_HAVE_ZLIB_IPP
+    /* Below is from deflate.c in ipp-samples.8.0.0.005 */
+
+    if (wbits == 8) {
+        wbits = 9;
+    }
+
+    ctx->allocated = 8192 + 5 * (1 << (memlevel + 6)) + (1 << (wbits + 1))
+                     + (1 << (wbits + 2)) + (1 << (memlevel + 9));
+#else
     ctx->allocated = 8192 + (1 << (wbits + 2)) + (1 << (memlevel + 9));
+#endif
 }

-------------- next part --------------
A non-text attachment was scrubbed...
Name: ipp-zlib.patch
Type: text/x-patch
Size: 3411 bytes
Desc: not available
URL:

From r1ch+nginx at teamliquid.net Tue Jul 29 22:47:25 2014
From: r1ch+nginx at teamliquid.net (Richard Stanway)
Date: Wed, 30 Jul 2014 00:47:25 +0200
Subject: Support for 3rd party zlib libraries
In-Reply-To:
References:
Message-ID:

> Well, I used to write a patch to enable IPP zlib (8.0) support in
> NGINX (enabled by ./configure --with-zlib-ipp), just for your
> reference:

Thank you for the patch. This solves the issue with streamed responses; however, when the "if (r->headers_out.content_length_n > 0)" branch is taken, e.g. with static content, I still receive the 2nd alert type below. The loop which reduces wbits and memlevel seems like it may need adjusting; I will try to figure out the exact values, but for now I commented it out to avoid the alerts, at the expense of some wasted memory.

2014/07/29 17:40:00 [alert] 22854#0: *15 gzip filter failed to use preallocated memory: 65536 of 34800

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From mdounin at mdounin.ru Tue Jul 29 23:01:38 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 30 Jul 2014 03:01:38 +0400
Subject: Bad default for proxy_cache_key
In-Reply-To:
References: <20140630234836.GN1849@mdounin.ru>
Message-ID: <20140729230137.GT1849@mdounin.ru>

Hello!

On Tue, Jul 29, 2014 at 04:49:23PM -0300, Bráulio Bhavamitra wrote:

> Hmm, documentation of this use case is recommended. Personally, for me it
> is completely unknown and uncommon.
>
> Also, isn't caching entirely related to the URL the user used, and
> nothing to do with the backend host?

The caching is related to the URL of the resource, and that's what you write in "proxy_pass". The original URL of a resource requested by the client in many cases has nothing to do with the URL of the resource nginx requests with proxy_pass. They match only in very simple configurations, when you just proxy everything without any modifications in nginx.

--
Maxim Dounin
http://nginx.org/
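[Illustration - the "other factors" case in a minimal sketch with hypothetical hostnames: several virtual hosts proxy to one backend but send different Host headers, so the default key (which uses $proxy_host) would collide, and the key is changed to include the client-facing host:]

    server {
        server_name site-a.example;

        location / {
            proxy_set_header Host $host;                    # responses now differ per vhost
            proxy_cache      my_zone;                       # hypothetical cache zone
            proxy_cache_key  $scheme$host$uri$is_args$args;
            proxy_pass       http://backend.example.com;    # shared by many vhosts
        }
    }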
From agentzh at gmail.com Tue Jul 29 23:06:11 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 29 Jul 2014 16:06:11 -0700
Subject: Support for 3rd party zlib libraries
In-Reply-To:
References:
Message-ID:

Hello!

On Tue, Jul 29, 2014 at 3:47 PM, Richard Stanway wrote:
> Thank you for the patch. This solves the issue with streamed responses;
> however, when the "if (r->headers_out.content_length_n > 0)" branch is taken,
> e.g. with static content, I still receive the 2nd alert type below.

Oh, we should probably skip that condition altogether for IPP zlib. The formula is accurate and was copied directly from the IPP zlib source code. Try this additional patch:

diff -r 2a54efe7a747 src/http/modules/ngx_http_gzip_filter_module.c
--- a/src/http/modules/ngx_http_gzip_filter_module.c	Tue Jul 29 14:19:37 2014 -0700
+++ b/src/http/modules/ngx_http_gzip_filter_module.c	Tue Jul 29 16:06:03 2014 -0700
@@ -492,6 +492,7 @@ ngx_http_gzip_filter_memory(ngx_http_req
     wbits = conf->wbits;
     memlevel = conf->memlevel;

+#if !NGX_HAVE_ZLIB_IPP
     if (r->headers_out.content_length_n > 0) {

         /* the actual zlib window size is smaller by 262 bytes */
@@ -505,6 +506,7 @@ ngx_http_gzip_filter_memory(ngx_http_req
             memlevel = 1;
         }
     }
+#endif

     ctx->wbits = wbits;
     ctx->memlevel = memlevel;

From piotr at cloudflare.com Tue Jul 29 23:09:27 2014
From: piotr at cloudflare.com (Piotr Sikora)
Date: Tue, 29 Jul 2014 16:09:27 -0700
Subject: Support for 3rd party zlib libraries
In-Reply-To:
References:
Message-ID:

Hey Yichun,

> Oh, we should probably skip that condition altogether for IPP zlib.
> The formula is accurate and was copied directly from the IPP zlib
> source code.

Just to make this clear, the zlib library that Richard is referring to is a fork of standard zlib (like ours), not IPP zlib.

Best regards,
Piotr Sikora

From agentzh at gmail.com Tue Jul 29 23:14:09 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Tue, 29 Jul 2014 16:14:09 -0700
Subject: Support for 3rd party zlib libraries
In-Reply-To:
References:
Message-ID:

Hello!

On Tue, Jul 29, 2014 at 4:09 PM, Piotr Sikora wrote:
>
> Just to make this clear, the zlib library that Richard is referring to
> is a fork of standard zlib (like ours), not IPP zlib.

Okay, I see. Thank you for pointing that out :)

Regards,
-agentzh

From mdounin at mdounin.ru Wed Jul 30 00:05:50 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Wed, 30 Jul 2014 04:05:50 +0400
Subject: Support for 3rd party zlib libraries
In-Reply-To:
References:
Message-ID: <20140730000550.GU1849@mdounin.ru>

Hello!

On Tue, Jul 29, 2014 at 04:06:11PM -0700, Yichun Zhang (agentzh) wrote:

> Hello!
>
> On Tue, Jul 29, 2014 at 3:47 PM, Richard Stanway wrote:
> > Thank you for the patch. This solves the issue with streamed responses;
> > however, when the "if (r->headers_out.content_length_n > 0)" branch is taken,
> > e.g. with static content, I still receive the 2nd alert type below.
>
> Oh, we should probably skip that condition altogether for IPP zlib.
> The formula is accurate and was copied directly from the IPP zlib
> source code.
> Try this additional patch:
>
> diff -r 2a54efe7a747 src/http/modules/ngx_http_gzip_filter_module.c
> --- a/src/http/modules/ngx_http_gzip_filter_module.c	Tue Jul 29 14:19:37 2014 -0700
> +++ b/src/http/modules/ngx_http_gzip_filter_module.c	Tue Jul 29 16:06:03 2014 -0700
> @@ -492,6 +492,7 @@ ngx_http_gzip_filter_memory(ngx_http_req
>      wbits = conf->wbits;
>      memlevel = conf->memlevel;
>
> +#if !NGX_HAVE_ZLIB_IPP
>      if (r->headers_out.content_length_n > 0) {
>
>          /* the actual zlib window size is smaller by 262 bytes */

Skipping this block is a bad idea - it means that small responses will allocate a lot of unneeded memory.

(And, as already pointed out by Piotr, the original question isn't about IPP zlib.)

--
Maxim Dounin
http://nginx.org/

From matt at eatsleeprepeat.net Wed Jul 30 02:14:05 2014
From: matt at eatsleeprepeat.net (Matt Silverlock)
Date: Wed, 30 Jul 2014 10:14:05 +0800
Subject: Repeated include /etc/includes/ssl.conf Passes configtest, fails SSL Handshake
Message-ID:

Hi all,

Had a chat with a helpful person on IRC, but we are both stumped as to why my configuration passes a check (nginx -t) but fails to properly handle SSL.

- I've split a couple of repetitive blocks out into /etc/nginx/includes/ssl.conf (-rw-r--r-- root:root - same as nginx.conf - should not be a problem)
- Doing so results in SSL handshake issues (and the connection fails appropriately)
- My cert covers both the root domain and www
- An excerpt of my configuration is here: http://p.ngx.cc/8796278344c60dcb - but the relevant part is below:

# re-direct non-www https to https
server {
    listen 443 ssl;
    server_name example.com;
    include /etc/nginx/includes/ssl.conf;
    return 301 https://www.example.com$request_uri;
}

server {
    listen 443 ssl default_server;
    server_name www.example.com;
    include /etc/nginx/includes/ssl.conf;
    root /srv/www/www.example.com/public;
    error_page 502 503 504 /5xx.html;
    # rest of config (proxy pass to Go server)
    # STS header in location block, etc.
}

If I move the include directive (effectively removing the duplication) into the http block and put the ssl_certificate and ssl_certificate_key directives into each of the two (2) server blocks instead of includes/ssl.conf, all is well. But this conflicts with the documentation (as I interpret it) and still results in some duplicated configuration.

Ideally I want to drop the entire 'SSL config' for these two domains into an includes file that I can then just import into the server blocks. If that's not entirely possible, that's okay - but configs I've seen out in the wild (https://github.com/igrigorik/istlsfastyet.com/blob/master/nginx/includes/ssl.conf) seem to do what I'm trying to achieve :)

Cheers,
Matt
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
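[Illustration - the include pattern itself is valid; a hypothetical /etc/nginx/includes/ssl.conf along these lines parses in server context, since include simply splices the file's directives in place. Paths and settings below are placeholders, not a recommendation:]

    # /etc/nginx/includes/ssl.conf
    ssl_certificate           /etc/nginx/ssl/example.com.crt;
    ssl_certificate_key       /etc/nginx/ssl/example.com.key;
    ssl_session_timeout       5m;
    ssl_protocols             TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

[If a file like this passes "nginx -t" but handshakes still fail, the usual suspects are which server block actually receives the connection (default_server selection, SNI), rather than the include mechanism itself.]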
> In testing, the performance gains seemed substantial; however,
> when I tried to use this version with nginx, the following alert types
> appeared in the error_log on gzip requests:
>
> 2014/07/30 05:20:29 [alert] 22285#0: *739837 gzip filter failed to use
> preallocated memory: 65536 of 65520 while sending to client
> 2014/07/30 05:40:42 [alert] 29462#0: *230460 gzip filter failed to use
> preallocated memory: 1040 of 4112
>
> It appears nginx is pre-allocating a buffer based on the original zlib
> memory usage patterns; however, the Intel-optimized version has slightly
> higher memory requirements due to padding for SSE functions etc.
>
> Is there a chance this version could be supported by nginx, or a
> configuration option made available to control the allocation size?
>
> Thanks.
>
> [1] https://github.com/jtkukunas/zlib

This version indeed uses larger allocations (much larger in some cases),
and also uses different sizes, which confuses the nginx code when it
tries to allocate 8k for the deflate state. A quick hack to do correct
preallocations for this zlib version:

--- a/src/http/modules/ngx_http_gzip_filter_module.c  Wed Jul 09 12:27:15 2014 -0700
+++ b/src/http/modules/ngx_http_gzip_filter_module.c  Wed Jul 30 05:57:50 2014 +0400
@@ -521,7 +521,16 @@ ngx_http_gzip_filter_memory(ngx_http_req
      * *) 5920 bytes on amd64 and sparc64
      */
 
+#if 0
     ctx->allocated = 8192 + (1 << (wbits + 2)) + (1 << (memlevel + 9));
+#endif
+
+    if (conf->level == 1) {
+        wbits = 13;
+    }
+
+    ctx->allocated = 8192 + 16 + (1 << (wbits + 2))
+                     + (1 << (ngx_max(memlevel, 8) + 8)) + (1 << (memlevel + 8));
 }
 
 
@@ -987,7 +996,7 @@ ngx_http_gzip_filter_alloc(void *opaque,
 
     alloc = items * size;
 
-    if (alloc % 512 != 0 && alloc < 8192) {
+    if (alloc % 512 != 0 && alloc < 8192 && items == 1) {
 
         /*
          * The zlib deflate_state allocation, it takes about 6K,

This is expected to work with normal zlib as well, but will be suboptimal
from a memory usage point of view.

Note well that the alerts logged aren't really serious - nginx is able to
handle such cases properly, and will fall back to normal pool allocations
instead of using the preallocated block. It's just not something it
expects to ever happen, so it logs alerts to make sure it's noticed. With
different zlib libraries out there, we probably want to just silence the
alert, keeping the preallocation tuned for normal zlib.

--
Maxim Dounin
http://nginx.org/

From nginx-forum at nginx.us  Wed Jul 30 11:52:45 2014
From: nginx-forum at nginx.us (Gona)
Date: Wed, 30 Jul 2014 07:52:45 -0400
Subject: rewrite url in upstream block
In-Reply-To: <12bea14e5cb4f2e1bf76d14c4bbba2f1.NginxMailingListEnglish@forum.nginx.org>
References: <12bea14e5cb4f2e1bf76d14c4bbba2f1.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <4b585b2a86dc5406ac38bf7667052881.NginxMailingListEnglish@forum.nginx.org>

Any help on this is really appreciated.

The request handler is in Lua. It basically breaks a request into
subrequests, adds a query parameter to each subrequest, and directs them
through a consistent-hash upstream module written in C. The upstream
configuration reads the query parameter and sets it to the command
variable. I don't see a way to remove the query parameter after this step.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,250892,252151#msg-252151
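One way to sidestep this is to keep the routing key out of the URI
entirely, so there is nothing to strip afterwards. The sketch below is
illustrative only: it assumes the stock consistent-hash balancer
(ngx_http_upstream_hash, available since nginx 1.7.2) could stand in for
the custom C module, $shard_key is a hypothetical variable computed by the
Lua handler, and the addresses are placeholders.

    upstream hashed_backend {
        # hash on a request variable instead of a query parameter;
        # $shard_key is assumed to be set by the Lua code (hypothetical)
        hash $shard_key consistent;
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    location /backend/ {
        # the key never appears in the URI, so no query parameter
        # has to be removed before the request goes upstream
        proxy_pass http://hashed_backend;
    }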
From badalex at gmail.com  Wed Jul 30 20:05:19 2014
From: badalex at gmail.com (Alex Hunsaker)
Date: Wed, 30 Jul 2014 14:05:19 -0600
Subject: Nginx + boringSSL
In-Reply-To: 
References: 
Message-ID: 

On Tue, Jul 29, 2014 at 3:52 AM, sopato wrote:
> Everything is OK, but when I add the ssl module, such as:
>
> ./configure --with-openssl=../boringssl --prefix=/srv1/nginx
> --with-http_ssl_module
>
> the make process errors out. What can I do next?

Can you paste the error? Also note, I've only tried it on OpenBSD, but I
don't see anything that would break it on, say, Linux - assuming
boringssl compiled correctly.

From shmick at riseup.net  Thu Jul 31 03:52:07 2014
From: shmick at riseup.net (shmick at riseup.net)
Date: Thu, 31 Jul 2014 13:52:07 +1000
Subject: Nginx + boringSSL
In-Reply-To: 
References: 
Message-ID: <53D9BD67.3070708@riseup.net>

Alex Hunsaker wrote:
> On Tue, Jul 29, 2014 at 3:52 AM, sopato wrote:
>> Everything is OK, but when I add the ssl module, such as:
>>
>> ./configure --with-openssl=../boringssl --prefix=/srv1/nginx
>> --with-http_ssl_module
>>
>> the make process errors out. What can I do next?
>
> Can you paste the error? Also note, I've only tried it on OpenBSD, but I
> don't see anything that would break it on, say, Linux - assuming
> boringssl compiled correctly.

Go here and check the info for boringssl - it works; I've got chacha20
going:
https://calomel.org/nginx.html

From nginx-forum at nginx.us  Thu Jul 31 10:47:01 2014
From: nginx-forum at nginx.us (husseingalal)
Date: Thu, 31 Jul 2014 06:47:01 -0400
Subject: postpone_gzipping
Message-ID: 

Hi,
I encountered the directive postpone_gzipping, but I couldn't find an
explanation in the documentation, although I found the directive in the
source code of nginx. How is that directive different from
gzip_min_length?

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252171,252171#msg-252171

From mdounin at mdounin.ru  Thu Jul 31 11:44:41 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 31 Jul 2014 15:44:41 +0400
Subject: postpone_gzipping
In-Reply-To: 
References: 
Message-ID: <20140731114441.GH1849@mdounin.ru>

Hello!

On Thu, Jul 31, 2014 at 06:47:01AM -0400, husseingalal wrote:

> Hi,
> I encountered the directive postpone_gzipping, but I couldn't find an
> explanation in the documentation, although I found the directive in the
> source code of nginx. How is that directive different from
> gzip_min_length?

The original idea is to save CPU cycles by avoiding small deflate()
operations, buffering up to a specified amount of data before calling
deflate() instead. It's an old experiment and is believed to have bugs;
don't use it unless you are ready to dig into the code.

--
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Thu Jul 31 14:37:11 2014
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Thu, 31 Jul 2014 18:37:11 +0400
Subject: Repeated include /etc/includes/ssl.conf Passes configtest, fails SSL Handshake
In-Reply-To: 
References: 
Message-ID: <20140731143710.GK1849@mdounin.ru>

Hello!

On Wed, Jul 30, 2014 at 10:14:05AM +0800, Matt Silverlock wrote:

> Hi all,
>
> Had a chat with a helpful person on IRC, but both of us are stumped as
> to why my configuration passes a check (nginx -t) but fails to
> properly handle SSL.
>
> - I've split a couple of repetitive blocks out into
>   /etc/nginx/includes/ssl.conf (-rw-r--r-- root:root - same as
>   nginx.conf - should not be a problem)
> - Doing so results in SSL handshake issues (and the connection
>   fails appropriately)

[...]

> If I move the include directive (effectively removing the
> duplication) into the http block and put the ssl_certificate and
> ssl_certificate_key directives into each of the two (2) server
> blocks instead of includes/ssl.conf, all is well. But this
> conflicts with the documentation (as I interpret it) and still
> results in some duplicated configuration.

It's a good idea to show the _full_ config which demonstrates the
problem. The snippet you've shown looks fine and is expected to work,
but it's easy to get things wrong with some hardly noticeable mistake -
e.g., a missing semicolon.

It's also a good idea to take a look into the error log - it may have
something for you.

BTW, as long as there is only one certificate, it's expected to work
fine with all the ssl options at the http{} level. You don't need to
put ssl_certificate and ssl_certificate_key into server{} blocks.

--
Maxim Dounin
http://nginx.org/
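For reference, a minimal sketch of what such an includes/ssl.conf could
contain; the paths and parameter values below are illustrative
assumptions, not taken from Matt's actual file:

    # /etc/nginx/includes/ssl.conf -- illustrative sketch only
    ssl_certificate      /etc/nginx/certs/example.com.crt;  # assumed path
    ssl_certificate_key  /etc/nginx/certs/example.com.key;  # assumed path
    ssl_session_cache    shared:SSL:10m;
    ssl_session_timeout  10m;
    ssl_protocols        TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;

Per Maxim's note, with a single certificate these directives can equally
live once at the http{} level, which removes the need for the include
entirely.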
From nginx-forum at nginx.us  Thu Jul 31 14:39:47 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Thu, 31 Jul 2014 10:39:47 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org>
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

I also noticed you added the PHP and nginx user setups for security -
would you also add an FTP / MySQL option? It is easy enough for us to
just edit the VBScripts to suit our needs for other services, but I was
thinking of others (maybe they are lazy).

I am not sure if anyone else uses the following program,
https://bitsum.com/processlasso/, but for me in a server environment it
works wonders: I can set the CPU affinities and separate nginx from PHP
onto its own CPU cores. But I am curious whether it is a bad thing to do
this when I have "worker_processes auto;" set, which creates one nginx
worker process per available CPU core.
http://nginx.org/en/docs/ngx_core_module.html#worker_processes

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252180#msg-252180
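For comparison, on FreeBSD and Linux nginx can pin workers to cores
itself, without an external tool; a minimal sketch assuming a 4-core host
(the masks are illustrative):

    # one worker per core, each pinned to its own CPU
    # (worker_cpu_affinity is not available in the Windows builds,
    # which is why external tools such as Process Lasso are used there)
    worker_processes     4;
    worker_cpu_affinity  0001 0010 0100 1000;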
From nginx-forum at nginx.us  Thu Jul 31 15:39:18 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Thu, 31 Jul 2014 11:39:18 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: 
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org>

c0nw0nk Wrote:
-------------------------------------------------------
> I also noticed you added the PHP and nginx user setups for security -
> would you also add an FTP / MySQL option? It is easy enough for us to
> just edit the VBScripts to suit our needs for other services, but I
> was thinking of others (maybe they are lazy).

The way we made those scripts shows that anything is possible on Windows
with security in mind and minimal effort; there is no excuse anymore for
not securing nginx / PHP, and no excuse for laziness.

> PHP onto its own CPU cores. But I am curious whether it is a bad thing
> to do this when I have "worker_processes auto;" set, which creates one
> nginx worker process per available CPU core.
> http://nginx.org/en/docs/ngx_core_module.html#worker_processes

Whatever works best for you; there are many tools to force CPU affinity.
For some apps 1 worker per CPU works best, for other apps 2 workers per
CPU work better. There is no clear guideline other than testing/tuning
everything, not just nginx.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252181#msg-252181

From nginx-forum at nginx.us  Thu Jul 31 16:15:55 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Thu, 31 Jul 2014 12:15:55 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org>
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org> <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

That's what I have been doing, and I have not encountered any issues as
such yet with nginx or PHP.

I am also curious whether it is possible to run image compression via
nginx. Those of us who use CloudFlare.com already know that CloudFlare
performs lossless image compression, most likely in a similar way on
Linux. On Windows we have the following tool available, which just runs
a series of command-line tools to compress images:
http://nikkhokkho.sourceforge.net/static.php?page=FileOptimizer
It handles various other file types as well - zip, rar, gzip, png, jpeg;
the list is endless.

But to save having to compress images manually, especially when dealing
with a site that takes image/media uploads, could we not have nginx run
the program via a command-line module for the images it is serving? I
looked through the modules list, and the only one I could find that
might make use of an exec function is the following:
http://wiki.nginx.org/HttpEchoModule
http://wiki.nginx.org/3rdPartyModules

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252182#msg-252182

From nginx-forum at nginx.us  Thu Jul 31 17:06:28 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Thu, 31 Jul 2014 13:06:28 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: 
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org> <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <87b175cf53d4ac71da178db554ddf6fd.NginxMailingListEnglish@forum.nginx.org>

I also see Lua can do the job, but I get the feeling I will hit a dead
end if I do this:

location /compress-images {
    content_by_lua 'os.execute("C:/server/bin/compress.exe")';
}

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252183#msg-252183

From nginx-forum at nginx.us  Thu Jul 31 17:32:04 2014
From: nginx-forum at nginx.us (itpp2012)
Date: Thu, 31 Jul 2014 13:32:04 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: <87b175cf53d4ac71da178db554ddf6fd.NginxMailingListEnglish@forum.nginx.org>
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org> <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org> <87b175cf53d4ac71da178db554ddf6fd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <5d36ddecb1704797800e70e6eab24010.NginxMailingListEnglish@forum.nginx.org>

The trick with pre-compressed files is to have a separate process do the
compression and a test inside nginx for the existence of the compressed
file. E.g. if file.jpg.extracompressed exists, serve it directly from the
filesystem; otherwise do something with zlib. See also:
http://nginx.org/en/docs/http/ngx_http_gzip_static_module.html
http://www.cambus.net/serving-precompressed-content-with-nginx-and-zopfli/

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252184#msg-252184
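For the gzip case, the existence test itpp2012 describes is built into
nginx. A rough sketch follows; the image branch assumes a separate
optimizer process writes smaller copies into a parallel /optimized
directory - that directory and the paths are illustrative, not a stock
feature:

    # serve a pre-compressed foo.css.gz automatically when it exists
    # (requires building with --with-http_gzip_static_module)
    location ~* \.(css|js|html)$ {
        gzip_static on;
    }

    # same idea for images: prefer a pre-optimized copy if the external
    # optimizer has produced one; /optimized mirrors the main tree, so
    # file extensions and MIME types stay correct
    location ~* \.(jpe?g|png|gif)$ {
        root /srv/www;
        try_files /optimized$uri $uri =404;
    }

Keeping the compression in a separate process also avoids blocking nginx
workers, which is the point agentzh makes below about os.execute().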
From nginx-forum at nginx.us  Thu Jul 31 18:45:23 2014
From: nginx-forum at nginx.us (c0nw0nk)
Date: Thu, 31 Jul 2014 14:45:23 -0400
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: <5d36ddecb1704797800e70e6eab24010.NginxMailingListEnglish@forum.nginx.org>
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org> <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org> <87b175cf53d4ac71da178db554ddf6fd.NginxMailingListEnglish@forum.nginx.org> <5d36ddecb1704797800e70e6eab24010.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <0c17a3e386e5a5bb575468d42e9fb4d8.NginxMailingListEnglish@forum.nginx.org>

Well, what I was describing was compressing the original media items, to
save storage/disk space.

Posted at Nginx Forum: http://forum.nginx.org/read.php?2,252064,252185#msg-252185

From agentzh at gmail.com  Thu Jul 31 23:21:57 2014
From: agentzh at gmail.com (Yichun Zhang (agentzh))
Date: Thu, 31 Jul 2014 16:21:57 -0700
Subject: [ANN] Windows nginx 1.7.4.2 WhiteRabbit
In-Reply-To: <87b175cf53d4ac71da178db554ddf6fd.NginxMailingListEnglish@forum.nginx.org>
References: <882d8b6679726359f711e416c788bcf0.NginxMailingListEnglish@forum.nginx.org> <951564277fe0c565171b8cfedb3aa9a3.NginxMailingListEnglish@forum.nginx.org> <944c2df923c7e4645445f3e1e964fffd.NginxMailingListEnglish@forum.nginx.org> <87b175cf53d4ac71da178db554ddf6fd.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

Hello!

On Thu, Jul 31, 2014 at 10:06 AM, c0nw0nk wrote:
> I also see Lua can do the job, but I get the feeling I will hit a dead
> end if I do this:
>
> location /compress-images {
>     content_by_lua 'os.execute("C:/server/bin/compress.exe")';
> }
>

Oh no, os.execute() is blocking. You should avoid that whenever
possible :)

Regards,
-agentzh