From sirtcp at gmail.com Sun May 1 08:16:51 2016
From: sirtcp at gmail.com (Muhammad Yousuf Khan)
Date: Sun, 1 May 2016 13:16:51 +0500
Subject: Trailing Slash redirection poblem
In-Reply-To: <20160430084006.GR9435@daoine.org>
References: <20160430084006.GR9435@daoine.org>
Message-ID:

>>The configuration you have shown says "if the request is for /live/,
>>ask the browser to instead request /live".

>>The configuration you have not shown says "if the request is for /live,
>>ask the browser to instead request /live/".

Thanks for the tip. I think I understand the problem now, but I could not resolve it.

try_files $uri $uri.html $uri/ /index.php?q=$request_uri;

I even tried deleting "$uri.html $uri/", but it still creates a loop. I know this is an important line for displaying WordPress pages. Now the error turns from a redirection loop into a 404.

>>You should not have both of those in the same configuration file, or
>>you get a loop.

Yes, I understand this now. Thanks for the tip again.

>>The not-shown configuration is usually a very good idea if "/live"
>>is to be served from the filesystem and corresponds to a directory.

>>So: why do you want to remove the trailing slash, in the shown
>>configuration?

Actually this is a requirement from my client.

>>If you want /live to redirect to /live/, then you should configure thing
>>such that /live/ does not redirect to /live.

No, I want /live/ to be redirected to /live, but the rule that redirects /live to /live/ (and ends up looping) is also important. Any helpful advice is highly appreciated.

On Sat, Apr 30, 2016 at 1:40 PM, Francis Daly wrote:

> On Sat, Apr 30, 2016 at 12:47:20PM +0500, Muhammad Yousuf Khan wrote:
>
> Hi there,
>
> > I have been trying to remove the trailing slash with this redirection rule.
> > rewrite ^/(.*)/$ /$1 permanent;
> >
> > however it is creating a loop.
> > > > curl -I https://xxxx.com/live/ > > > > HTTP/1.1 301 Moved Permanently > > Location: https://xxxx.com/live > > > curl -I https://xxxx.com/live > > > > HTTP/1.1 301 Moved Permanently > > Location: https://xxxx.com/live/ > > > Can you please guide what i am doing wrong here. > > The configuration you have shown says "if the request is for /live/, > ask the browser to instead request /live". > > The configuration you have not shown says "if the request is for /live, > ask the browser to instead request /live/". > > You should not have both of those in the same configuration file, or > you get a loop. > > The not-shown configuration is usually a very good idea if "/live" > is to be served from the filesystem and corresponds to a directory. > > So: why do you want to remove the trailing slash, in the shown > configuration? > > If you want /live to redirect to /live/, then you should configure thing > such that /live/ does not redirect to /live. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sirtcp at gmail.com Sun May 1 08:18:55 2016 From: sirtcp at gmail.com (Muhammad Yousuf Khan) Date: Sun, 1 May 2016 13:18:55 +0500 Subject: Trailing Slash redirection poblem In-Reply-To: References: <20160430084006.GR9435@daoine.org> Message-ID: just want to make it more clear that when i comment out this line #try_files $uri $uri.html $uri/ /index.php?q=$request_uri; error become 404 from redirection loop error. On Sun, May 1, 2016 at 1:16 PM, Muhammad Yousuf Khan wrote: > > >>The configuration you have shown says "if the request is for /live/, > >>ask the browser to instead request /live". > > >>The configuration you have not shown says "if the request is for /live, > >>ask the browser to instead request /live/". 
> > Thanks for the tip I think I got the problem but could not resolve it. > try_files $uri $uri.html $uri/ /index.php?q=$request_uri; > I even tried deleting ?$uri.html $uri/? but still creating loop. I know > this is a important line to display wordpress pages . now the error turn > 404 from redirection loop. > > >>You should not have both of those in the same configuration file, or > >>you get a loop. > > Yes I got this now as shared. Thanks for the tip again. > > >>The not-shown configuration is usually a very good idea if "/live" > >>is to be served from the filesystem and corresponds to a directory. > > >>So: why do you want to remove the trailing slash, in the shown > >>configuration? > Actually this is a requirement from my client. > > >>If you want /live to redirect to /live/, then you should configure thing > >>such that /live/ does not redirect to /live. > > No I want /live/ to be redirected to /live and the rule which is > redirecting /live to /live/ and ending up as loop Is also important. Any > helpful advice is highly appreciated. > > > On Sat, Apr 30, 2016 at 1:40 PM, Francis Daly wrote: > >> On Sat, Apr 30, 2016 at 12:47:20PM +0500, Muhammad Yousuf Khan wrote: >> >> Hi there, >> >> > I have been trying to remove the trailing slash with this redirection >> rule. >> > rewrite ^/(.*)/$ /$1 permanent; >> > >> > however it is creating a loop. >> > >> > curl -I https://xxxx.com/live/ >> > >> > HTTP/1.1 301 Moved Permanently >> > Location: https://xxxx.com/live >> >> > curl -I https://xxxx.com/live >> > >> > HTTP/1.1 301 Moved Permanently >> > Location: https://xxxx.com/live/ >> >> > Can you please guide what i am doing wrong here. >> >> The configuration you have shown says "if the request is for /live/, >> ask the browser to instead request /live". >> >> The configuration you have not shown says "if the request is for /live, >> ask the browser to instead request /live/". 
>>
>> You should not have both of those in the same configuration file, or
>> you get a loop.
>>
>> The not-shown configuration is usually a very good idea if "/live"
>> is to be served from the filesystem and corresponds to a directory.
>>
>> So: why do you want to remove the trailing slash, in the shown
>> configuration?
>>
>> If you want /live to redirect to /live/, then you should configure thing
>> such that /live/ does not redirect to /live.
>>
>> f
>> --
>> Francis Daly francis at daoine.org
>>
>> _______________________________________________
>> nginx mailing list
>> nginx at nginx.org
>> http://mailman.nginx.org/mailman/listinfo/nginx
>>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From reallfqq-nginx at yahoo.fr Sun May 1 14:43:36 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Sun, 1 May 2016 16:43:36 +0200
Subject: Trailing Slash redirection poblem
In-Reply-To:
References: <20160430084006.GR9435@daoine.org>
Message-ID:

So these opposite redirection rules are fighting each other and are the source of your problem, which Francis helped you to alleviate.

Sit back, grab an erasable whiteboard, scratch your head and think about your website's design: both rules probably have their use in their own corner, but you must not have both active at the same spot at any given time, as that will result in an infinite loop: a computer does exactly what you tell it to do. There is nothing stupider than that. The problem lies between the desk and the chair. ;o)

"Do not merely copy-paste pre-cooked, hypothetically WordPress-ready configuration directives." Try to recreate the configuration by hand, understanding the use and intention of every line. The nginx docs will most probably be helpful.

Enjoy learning! :o)
---
*B.
R.* On Sun, May 1, 2016 at 10:18 AM, Muhammad Yousuf Khan wrote: > just want to make it more clear that when i comment out this line #try_files > $uri $uri.html $uri/ /index.php?q=$request_uri; error become 404 from > redirection loop error. > > On Sun, May 1, 2016 at 1:16 PM, Muhammad Yousuf Khan > wrote: > >> >> >>The configuration you have shown says "if the request is for /live/, >> >>ask the browser to instead request /live". >> >> >>The configuration you have not shown says "if the request is for /live, >> >>ask the browser to instead request /live/". >> >> Thanks for the tip I think I got the problem but could not resolve it. >> try_files $uri $uri.html $uri/ /index.php?q=$request_uri; >> I even tried deleting ?$uri.html $uri/? but still creating loop. I know >> this is a important line to display wordpress pages . now the error turn >> 404 from redirection loop. >> >> >>You should not have both of those in the same configuration file, or >> >>you get a loop. >> >> Yes I got this now as shared. Thanks for the tip again. >> >> >>The not-shown configuration is usually a very good idea if "/live" >> >>is to be served from the filesystem and corresponds to a directory. >> >> >>So: why do you want to remove the trailing slash, in the shown >> >>configuration? >> Actually this is a requirement from my client. >> >> >>If you want /live to redirect to /live/, then you should configure thing >> >>such that /live/ does not redirect to /live. >> >> No I want /live/ to be redirected to /live and the rule which is >> redirecting /live to /live/ and ending up as loop Is also important. Any >> helpful advice is highly appreciated. >> >> >> On Sat, Apr 30, 2016 at 1:40 PM, Francis Daly wrote: >> >>> On Sat, Apr 30, 2016 at 12:47:20PM +0500, Muhammad Yousuf Khan wrote: >>> >>> Hi there, >>> >>> > I have been trying to remove the trailing slash with this redirection >>> rule. >>> > rewrite ^/(.*)/$ /$1 permanent; >>> > >>> > however it is creating a loop. 
>>> > >>> > curl -I https://xxxx.com/live/ >>> > >>> > HTTP/1.1 301 Moved Permanently >>> > Location: https://xxxx.com/live >>> >>> > curl -I https://xxxx.com/live >>> > >>> > HTTP/1.1 301 Moved Permanently >>> > Location: https://xxxx.com/live/ >>> >>> > Can you please guide what i am doing wrong here. >>> >>> The configuration you have shown says "if the request is for /live/, >>> ask the browser to instead request /live". >>> >>> The configuration you have not shown says "if the request is for /live, >>> ask the browser to instead request /live/". >>> >>> You should not have both of those in the same configuration file, or >>> you get a loop. >>> >>> The not-shown configuration is usually a very good idea if "/live" >>> is to be served from the filesystem and corresponds to a directory. >>> >>> So: why do you want to remove the trailing slash, in the shown >>> configuration? >>> >>> If you want /live to redirect to /live/, then you should configure thing >>> such that /live/ does not redirect to /live. >>> >>> f >>> -- >>> Francis Daly francis at daoine.org >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shirley at nginx.com Mon May 2 02:38:31 2016 From: shirley at nginx.com (Shirley Bailes) Date: Sun, 1 May 2016 19:38:31 -0700 Subject: nginx.conf 2016: CFP Closing Soon, Submit Your Talk Today Message-ID: Hello folks, The deadline to submit a proposal to nginx.conf 2016 is quickly approaching. 
Share your NGINX stories with us: https://nginxconf16.busyconf.com/proposals/new/

*Deadline to submit: 11:59PM CDT, May 5, 2016.*

Tell us how you're using NGINX and/or NGINX Plus across any type of application and in any sort of environment. We'd love to see talks on any of the topics below:

- Architecting, Developing, & Deploying Code
- Scaling and Securing Applications
- NGINX-specific and NGINX Plus-specific Case Studies & Best Practices

*Conference details:*

- nginx.conf 2016 will be held in Austin, TX
- Venue: Hilton Austin, East 4th Street
- Dates: September 7-9

Let me know if you have any questions around the submission process, or if you'd like to bounce submission ideas off of us. We'd be happy to help.

Best,
*s
707.569.4888
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pankajitbhu at gmail.com Mon May 2 07:11:48 2016
From: pankajitbhu at gmail.com (Pankaj Chaudhary)
Date: Mon, 2 May 2016 12:41:48 +0530
Subject: (52) Empty reply from server
In-Reply-To: <20160428070955.GL9435@daoine.org>
References: <20160418233503.GW9435@daoine.org> <20160419174250.GY9435@daoine.org> <20160420193835.GA9435@daoine.org> <20160422185140.GG9435@daoine.org> <20160428070955.GL9435@daoine.org>
Message-ID:

Hi,

Thank you for your response.

>>You are writing into the headers_in structure. Normally, that is what
>>came from the client, so I guess you must have a plan for why you are
>>doing that.

I have tried writing into the headers_out structure as well, but I was not able to read back the value I wrote, so I tried headers_in.

>>The example code I see treats is as a ngx_http_header_t*.

I have tried changing the type to ngx_http_header_t, but I still get the same result.

If I write a "Set-Cookie" header with the value "somevalue" into headers_out, the same value shows up in the response-headers tab of the Mozilla developer tools, and the request then carries a "cookie" header with the value "somevalue". So what is the right way to read this "cookie" value, i.e. "somevalue"?
i tried to read many way like using ngx_http_parse_multi_header_lines(),ngx_hash_find() but still not able to get correct value . Below is snapshot of Response and Request header [image: Inline image 1] On Thu, Apr 28, 2016 at 12:39 PM, Francis Daly wrote: > On Tue, Apr 26, 2016 at 04:52:22PM +0530, Pankaj Chaudhary wrote: > > Hi there, > > > I have requirement to create own cookie based on input and wirte the > that > > cookie in header. > > whenever i need that i can read from header and use it. > > I confess that I do not understand what that requirement actually > is. There are headers in the request from the client to nginx; there > may be header-like things in whatever nginx does when communicating > with an upstream; there may be header-like things in the response from > that upstream; and there are headers in the response from nginx to the > client. And it is not clear to me what your module architecture is. > > But that's ok; I don't have to understand it. You want to do some specific > things in an nginx module. > > > for example:- > > > > I have created my own cookie "thissomevalue" worte in header and later > the > > same read from header. > > > > Please check my code and let me know why i am not able to read the value > > from header. > > > > Below code snippet to set header value in request header:- > > > > ngx_table_elt_t *cookie; > > cookie = ngx_list_push(&r->headers_in.headers); > > You are writing into the headers_in structure. Normally, that is what > came from the client, so I guess you must have a plan for why you are > doing that. > > (If I wanted to test "can I read from headers_in", I would probably add a > "MyHeader" to my curl request, and look for that in my code.) 
> > > cookie->lowcase_key = (u_char*) "cookie"; > > ngx_str_set(&cookie->key, "Cookie"); > > ngx_str_set(&cookie->value, "somevalue"); > > cookie->hash = ngx_crc32_long(cookie->lowcase_key, cookie->key.len); > > > > > > Below code snippet to read set value from header:- > > > > ngx_http_core_main_conf_t *clcf; > > ngx_str_t *type; > > ngx_uint_t key; > > ngx_str_t val = ngx_string("cookie"); > > clcf = ngx_http_get_module_main_conf(r, ngx_http_core_module); > > key= ngx_hash_key_lc(val.data, val.len); > > type = ngx_hash_find(&clcf->headers_in_hash, key, val.data, val.len); > > As mentioned elsewhere, you are not reading from the headers_in > structure. So there's a reasonable chance that what you wrote into one > structure will not be found in another one. > > Also, you are treating the output of ngx_hash_find() as a ngx_str_t*. > > The example code I see treats is as a ngx_http_header_t*. > > Is that an important difference? > > (As in: is that why you print the header name, but not the header > value? Possibly not, if the original request did not have any Cookie > header; but test rather than assume, if the documentation is not clear > to you.) > > > if (type != NULL) > > { > > The example code I see has separate handling for "header is unknown or > is not hashed yet", and "header is hashed but not cached yet". You > seem to skip testing for the second possibility here. > > > ngx_table_elt_t *test_val; > > test_val= ngx_list_push(&r->headers_out.headers); > > test_val->lowcase_key = (u_char*) "test_val"; > > ngx_str_set(&test_val->key, "Test_Val"); > > ngx_str_set(&test_val->value, type->data); > > I'd also suggest that if you are not sure what value your content has, > use the simplest possible method to print it somewhere you can read > it. Usually, that means logging, since that should not have a complex > data structure. 
> > > test_val->hash = ngx_crc32_long(test_val->lowcase_key, > test_val->key.len); > > } > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image.png Type: image/png Size: 125004 bytes Desc: not available URL: From mayak at australsat.com Mon May 2 08:55:18 2016 From: mayak at australsat.com (mayak) Date: Mon, 2 May 2016 10:55:18 +0200 Subject: binary cgi script download instead of executing Message-ID: An HTML attachment was scrubbed... URL: From me at myconan.net Mon May 2 08:58:50 2016 From: me at myconan.net (Edho Arief) Date: Mon, 02 May 2016 17:58:50 +0900 Subject: binary cgi script download instead of executing In-Reply-To: <20160502085535.4FD952C510D6@mail.nginx.com> References: <20160502085535.4FD952C510D6@mail.nginx.com> Message-ID: <1462179530.591534.595388865.5593E415@webmail.messagingengine.com> Hi, On Mon, May 2, 2016, at 17:55, mayak wrote: > hi all, > i have simply broken my brain trying to execute a `cgi` script that > requires no interpreter -- just execute the cgi binary with the query > portion of the url, and it spits out html content. > no matter what i do, i always end up with ELF> GARBAGE? -- the cgi > script is downloading and not executing. I've googled it to death, and > cannot find how to do this. > thanks > m nginx doesn't execute anything. In your case, something like slowcgi/thttpd is needed so nginx can talk to it. 
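For reference, the usual shape of such a setup: nginx speaks FastCGI to a small bridge (for example fcgiwrap), and the bridge executes the CGI binary and relays its output. A minimal, untested sketch; the socket path and script root here are assumptions, adjust them to your system:

```nginx
location /cgi-bin/ {
    # Standard FastCGI parameter set; among other things this passes
    # the query portion of the URL to the program as QUERY_STRING.
    include fastcgi_params;

    # Hand the request to the fcgiwrap bridge over its unix socket
    # (path is distribution-dependent, assumed here).
    fastcgi_pass unix:/var/run/fcgiwrap.socket;

    # Filesystem path of the CGI binary the bridge should execute
    # (script root is a placeholder).
    fastcgi_param SCRIPT_FILENAME /usr/lib$fastcgi_script_name;
}
```

With this in place nginx never executes anything itself; it only proxies the FastCGI conversation, which is the division of labour both replies describe.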
From francis at daoine.org Mon May 2 09:18:12 2016 From: francis at daoine.org (Francis Daly) Date: Mon, 2 May 2016 10:18:12 +0100 Subject: Trailing Slash redirection poblem In-Reply-To: References: <20160430084006.GR9435@daoine.org> Message-ID: <20160502091812.GS9435@daoine.org> On Sun, May 01, 2016 at 01:16:51PM +0500, Muhammad Yousuf Khan wrote: Hi there, > >>The configuration you have shown says "if the request is for /live/, > >>ask the browser to instead request /live". > > >>The configuration you have not shown says "if the request is for /live, > >>ask the browser to instead request /live/". > > Thanks for the tip I think I got the problem but could not resolve it. If you make the request for "/live", what response do you actually want? The contents of some file? The output of wordpress being given some arguments? Be as specific as possible, and it is much more likely that you will understand how nginx needs to be configured to do that. > >>So: why do you want to remove the trailing slash, in the shown > >>configuration? > Actually this is a requirement from my client. Sometimes, the correct response to a client requirement is "no". It is not yet clear to me whether this is one of those times. f -- Francis Daly francis at daoine.org From paul at paulstewart.org Mon May 2 10:08:44 2016 From: paul at paulstewart.org (Paul Stewart) Date: Mon, 2 May 2016 06:08:44 -0400 Subject: Origin/Edge Questions Message-ID: <021b01d1a45a$9c637270$d52a5750$@paulstewart.org> Hi folks. Today, we have several NGINX systems deployed .. most of it is for caching frontend to websites via anycasted instance. A couple of our systems though involve video - streaming of realtime linear encrypted video. For that project, I'm looking to build out scale in the system. Today, there are just a few NGINX systems providing an edge function but to scale, we need to deploy a large scale origin/edge scenario. 
I keep looking around for reference designs and/or whitepapers from others who have done this, specifically with realtime encrypted linear video, and I am not finding much. Any ideas?

Thanks,
Paul
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From al-nginx at none.at Mon May 2 12:34:36 2016
From: al-nginx at none.at (Aleksandar Lazic)
Date: Mon, 02 May 2016 14:34:36 +0200
Subject: binary cgi script download instead of executing
In-Reply-To: <20160502085535.EEAC72C51129@mail.nginx.com>
References: <20160502085535.EEAC72C51129@mail.nginx.com>
Message-ID: <79054584e0b59ef4a53e9a9f1d1d97a1@none.at>

Hi.

On 02-05-2016 10:55, mayak wrote:
> hi all,
>
> i have simply broken my brain trying to execute a `cgi` script that
> requires no interpreter -- just execute the cgi binary with the query
> portion of the url, and it spits out html content.
>
> no matter what i do, i always end up with ELF> GARBAGE -- the cgi
> script is downloading and not executing. I've googled it to death, and
> cannot find how to do this.

Please take a look at this:
http://uwsgi.readthedocs.io/en/latest/CGI.html?highlight=Nginx

As Edho wrote, nginx does not execute anything by default.

What I'm wondering is why the first search hit already shows a possible solution; does that solution not work for you?
https://startpage.com/do/search?q=nginx+cgi+run

Cheers,
Aleks

From sirtcp at gmail.com Mon May 2 12:48:12 2016
From: sirtcp at gmail.com (Muhammad Yousuf Khan)
Date: Mon, 2 May 2016 17:48:12 +0500
Subject: Trailing Slash redirection poblem
In-Reply-To: <20160502091812.GS9435@daoine.org>
References: <20160430084006.GR9435@daoine.org> <20160502091812.GS9435@daoine.org>
Message-ID:

First of all, my apologies if I ask something inapplicable; that would be down to my limited knowledge of nginx. I hope you don't mind.

Thanks for the reply. Actually, we have 5 categories, and then we have a hundred-plus posts in each category. Now, when a user clicks on a category,
he sees an intro to the category and then a list of all posts.

When the user clicks a category, it opens like this:
https://xxxx.com/catagory/

and when a post is clicked, it opens like this:
https://xxxx.com/catagory/posts.html

But what the client wants is that when someone opens a category, it should open like this:
https://xxxx.com/catagory (without the trailing slash)

That is all they want.

Thanks,
Yousuf

On Mon, May 2, 2016 at 2:18 PM, Francis Daly wrote:

> On Sun, May 01, 2016 at 01:16:51PM +0500, Muhammad Yousuf Khan wrote:
>
> Hi there,
>
> > >>The configuration you have shown says "if the request is for /live/,
> > >>ask the browser to instead request /live".
> >
> > >>The configuration you have not shown says "if the request is for /live,
> > >>ask the browser to instead request /live/".
> >
> > Thanks for the tip I think I got the problem but could not resolve it.
>
> If you make the request for "/live", what response do you actually want?
>
> The contents of some file? The output of wordpress being given some
> arguments? Be as specific as possible, and it is much more likely that
> you will understand how nginx needs to be configured to do that.
>
> > >>So: why do you want to remove the trailing slash, in the shown
> > >>configuration?
> > Actually this is a requirement from my client.
>
> Sometimes, the correct response to a client requirement is "no".
>
> It is not yet clear to me whether this is one of those times.
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue May 3 09:45:05 2016
From: nginx-forum at forum.nginx.org (ben5192)
Date: Tue, 03 May 2016 05:45:05 -0400
Subject: Shared Memory Structure?
Message-ID: <1a2ecbdac1bc62369aa163b48eca145a.NginxMailingListEnglish@forum.nginx.org>

I am storing things in a shared memory zone allocated with ngx_shared_memory_add, and allocating each slab with ngx_slab_alloc_locked (as these items don't particularly need locking).

The problem I'm having is that I create the shared memory zone with exactly the amount of space I need, yet the final large allocation fails, while many smaller allocations afterwards succeed without any problem. There is enough space for the large chunk, but allocating it fails when space is getting tight. Does anyone know why this is, or how to fix it? My guess is that nginx structures the shared memory in a way that leaves enough memory free (which I have made sure there is), but not contiguous.

Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266569,266569#msg-266569

From pasik at iki.fi Tue May 3 16:16:41 2016
From: pasik at iki.fi (Pasi =?iso-8859-1?Q?K=E4rkk=E4inen?=)
Date: Tue, 3 May 2016 19:16:41 +0300
Subject: Video & Slides for our first Bay Area OpenResty Meetup
In-Reply-To:
References: <20160429131856.GS13212@reaktio.net>
Message-ID: <20160503161641.GT13212@reaktio.net>

Hi,

On Fri, Apr 29, 2016 at 03:08:06PM -0700, Yichun Zhang (agentzh) wrote:
> Hello!
>
> On Fri, Apr 29, 2016 at 6:18 AM, Pasi Kärkkäinen wrote:
> >
> > One question about the new "ngx.balancer" Lua API .. with quick look I didn't notice anything related to upstream healthchecks.. is this something you've been looking at improving, or is it out of scope for this module?
> >
>
> Yes, we could adapt the existing lua-resty-upstream-healthcheck
> library to support dynamic upstream peers in the context of
> balancer_by_lua*. IIRC, the engineers at Mashape are already working
> on it.
> > https://github.com/openresty/lua-resty-upstream-healthcheck > > You can fork it yourself if you want to :) > > Also, you may find the following Lua library for balancer_by_lua* too: > > https://github.com/agentzh/lua-resty-chash > Great! Thanks for the pointers. > > Basicly I'm interested in more flexible/configurable upstream server healthchecks (than what's available in stock nginx), when using the http proxy functionality. > > > > Sure, who isn't? ;) > Hehe. indeed :) Thanks, -- Pasi > Best regards, > -agentzh > From francis at daoine.org Tue May 3 19:22:50 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 3 May 2016 20:22:50 +0100 Subject: Trailing Slash redirection poblem In-Reply-To: References: <20160430084006.GR9435@daoine.org> <20160502091812.GS9435@daoine.org> Message-ID: <20160503192250.GU9435@daoine.org> On Mon, May 02, 2016 at 05:48:12PM +0500, Muhammad Yousuf Khan wrote: Hi there, > Thanks for the reply, actually we have 5 categories. and then we have > hundred plus post in each category. now when user click on our category. he > sees intro about the category and then list of all posts. > when user click on catagory it open up like this > https://xxxx.com/catagory/ It sounds to me like that is an internal-to-wordpress thing, so you may be better off removing any clever nginx config and just configure wordpress to create the links that you want it to create. (I do not know if it possible to configure wordpress like that.) > and when post is clicked it opens up like this > https://xxxx.com/catagory/posts.html > > but now what client want is when some one open up the category it should > open up like this. > https://xxxx.com/catagory (with out trailing slash) When someone clicks on catagory, they are following a link in whatever html was returned from their previous request. If that link has a trailing slash, they'll ask for the trailing slash. 
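One untested way to have /catagory answered without a trailing-slash redirect in either direction is an exact-match location that hands the slash-less URI straight to the front controller. The path /catagory and the /index.php rewrite target are taken from this thread; everything else is a sketch, not a verified configuration:

```nginx
# Answer the slash-less URI directly instead of redirecting, so neither
# a /catagory -> /catagory/ redirect nor the reverse rewrite ever fires.
location = /catagory {
    # Hand the request to the WordPress front controller (the same
    # fallback used in the try_files line discussed above).
    rewrite ^ /index.php?q=$request_uri last;
}
```

Whether WordPress then emits its own canonical redirect back to the slashed form is a separate, application-level question.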
If the content on the catagory page contains links like "post1.html", then you will need the trailing slash -- or you will need to change the content to be of the form "catagory/post1.html".

That should all be configurable within wordpress, if anywhere.

Good luck with it,

f
--
Francis Daly francis at daoine.org

From jiangmuhui at gmail.com Wed May 4 03:25:11 2016
From: jiangmuhui at gmail.com (Muhui Jiang)
Date: Wed, 4 May 2016 11:25:11 +0800
Subject: HTTP2 Multiplexing
Message-ID:

Hi,

Unlike HTTP/1.1 pipelining, HTTP/2 allows multiple request and response messages to be in flight at the same time. I was wondering what strategy nginx adopts to implement this main feature.

Does every single stream correspond to a thread? If not, how does nginx handle multiple parallel requests? If you could locate the corresponding code for me, that would be great.

Best Regards,
Muhui Jiang
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Wed May 4 04:18:10 2016
From: nginx-forum at forum.nginx.org (meteor8488)
Date: Wed, 04 May 2016 00:18:10 -0400
Subject: location...nested location...which one is better?
Message-ID: <7fd27d4d305d0d0e0bb63c5797d82921.NginxMailingListEnglish@forum.nginx.org>

Hi all,

I'm running a website based on PHP, and I'm trying to use naxsi for it. It seems that to enable naxsi, we need to put the line below inside a location:

include /etc/nginx/naxsi.rules;

An interesting thing I found about location is that people lay out their location sections differently, as follows:

example 1

server {
    root ...;
    location / {
        # ...
        location ~ \.php$ {
            # ...
        }
        location ~* ^.+\.(jpg|jpeg|gif|png|bmp|ico|mp3)$ {
            # ...
        }
    }
}

example 2

server {
    root ...;
    location / {
        # ...
    }
    location ~ \.php$ {
        # ...
    }
    location ~* ^.+\.(jpg|jpeg|gif|png|bmp|ico|mp3)$ {
        # ...
    }
}

example 3

server {
    root ...;
    # ...
    location ~ \.php$ {
        # ...
    }
    location ~* ^.+\.(jpg|jpeg|gif|png|bmp|ico|mp3)$ {
        # ...
    }
}

For my server I'm using example 3. So, is there any performance difference between these three configurations, and which one is better? For naxsi it's easy to include the configuration file in example 1, but in examples 2 and 3, how should I include the file?

Thanks

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266588,266588#msg-266588

From nginx-forum at forum.nginx.org Wed May 4 06:39:33 2016
From: nginx-forum at forum.nginx.org (kostbad)
Date: Wed, 04 May 2016 02:39:33 -0400
Subject: ssl test causes nginx to crash (SSL_do_handshake() failed)
Message-ID:

I tried to use the SSL test from qualys.com: https://www.ssllabs.com/ssltest/

Every time I run it, my nginx server (SSL terminator) crashes and I have to restart it.

I get the following error in my nginx logs:

*734 SSL_do_handshake() failed (SSL: error:140A1175:SSL routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL handshaking, client: ......, server: .......

I've got the following configuration:

server {
    listen .........:80;
    add_header Strict-Transport-Security max-age=15768000;
    server_name .......................;
    rewrite ^ https://$server_name$uri?
permanent;

    #location / {
    #    proxy_pass ......................:80;
    #    proxy_set_header Host $host;
    #    proxy_set_header X-Real-IP $remote_addr;
    #    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    #}
}

server {
    listen ...............:443;
    ssl on;
    ssl_certificate /etc/nginx/certkeys/.......crt;
    ssl_certificate_key /etc/nginx/certkeys/.......key;
    server_name .....................;
    access_log /var/log/nginx/running.log;
    error_log /var/log/nginx/errorReport.log;
    keepalive_timeout 70;
    ssl_session_timeout 30m;
    ssl_protocols TLSv1.2 TLSv1.1 TLSv1;
    ssl_ciphers HIGH:!aNULL:!MD5;
    ssl_prefer_server_ciphers on;

    location / {
        proxy_pass .....................:80/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266590,266590#msg-266590

From sca at andreasschulze.de Wed May 4 07:26:25 2016
From: sca at andreasschulze.de (A. Schulze)
Date: Wed, 04 May 2016 09:26:25 +0200
Subject: ssl test causes nginx to crash (SSL_do_handshake() failed)
In-Reply-To:
Message-ID: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de>

kostbad:
> Every time i run it, my nginx server (ssl terminator) crashes and i have to
> restart it.
>
> I get the following error in my nginx logs:
>
> *734 SSL_do_handshake() failed (SSL: error:140A1175:SSL
> routines:SSL_BYTES_TO_CIPHER_LIST:inappropriate fallback) while SSL
> handshaking, client: ......, server: .......

That's not a "crash". Scanning a server with ssllabs generates many error messages of this kind; that's intentional. While the scan runs, and also after it, the server should remain accessible as usual. If it is not, that would indeed be an error, but the log message you presented above does not show such an error.
Andreas From nginx-forum at forum.nginx.org Wed May 4 07:56:06 2016 From: nginx-forum at forum.nginx.org (kostbad) Date: Wed, 04 May 2016 03:56:06 -0400 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de> References: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de> Message-ID: <1820fa05bf70e4e652e99ad2690eae7c.NginxMailingListEnglish@forum.nginx.org> When ssllabs tests for deprecated cipher suites, it stays there forever. I have to close the ssllabs test page and then my nginx server stays down until I restart it. I also got the following error: 113 upstream prematurely closed connection while reading response header from upstream, client: ...... server: ........, request: "GET /........./images/logo.jpg HTTP/1.1", upstream: "http:.........80/........./images/logo.jpg", host: "..........", referrer: "https://....................." Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266590,266592#msg-266592 From luky-37 at hotmail.com Wed May 4 09:47:16 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 4 May 2016 11:47:16 +0200 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: <1820fa05bf70e4e652e99ad2690eae7c.NginxMailingListEnglish@forum.nginx.org> References: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de>, <1820fa05bf70e4e652e99ad2690eae7c.NginxMailingListEnglish@forum.nginx.org> Message-ID: > When ssllabs tests for deprecated cipher suites, it stays there forever. > I have to close the ssllabs test page and then my nginx server stays down > until i restart it. Please provide the output of nginx -V. From vbart at nginx.com Wed May 4 10:19:50 2016 From: vbart at nginx.com (Valentin V.
Bartenev) Date: Wed, 04 May 2016 13:19:50 +0300 Subject: HTTP2 Multiplexing In-Reply-To: References: Message-ID: <9154086.t1Y2UnK1lN@vbart-laptop> On Wednesday 04 May 2016 11:25:11 Muhui Jiang wrote: > Hi > > Different from HTTP1.1 pipeline, HTTP2 allows multiple request and response > messages to be in flight at the same time. I was wondering what the > strategy Nginx adopt to implement this main feature. Nginx allows multiple requests and responses in multiple connections using HTTP/1.x as well. HTTP/2 changes nothing here (except it uses only one connection, but it's not important from the basic architecture point of view). > > Is every single stream correspond to a thread. If not, how can Nginx > provide multiple parallel requests handling. If you can locate the > correspond code for me, that would be great > No, nginx uses an asynchronous non-blocking event-driven architecture instead of mapping requests into separate threads. It has used multiplexing of request handling in a single process for many years before HTTP/2 was invented. A more detailed explanation can be found here: https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ wbr, Valentin V. Bartenev From jiangmuhui at gmail.com Wed May 4 10:50:44 2016 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Wed, 4 May 2016 18:50:44 +0800 Subject: HTTP2 Multiplexing In-Reply-To: <9154086.t1Y2UnK1lN@vbart-laptop> References: <9154086.t1Y2UnK1lN@vbart-laptop> Message-ID: Hi >Nginx allows multiple request and responses in multiple connections using >HTTP/1.x as well. HTTP/2 changes nothing here (except it uses only one >connection, but it's not important from the basic architecture point of >view). If so, it seems there is no difference or improvement in the implementation of the multiplexing feature compared with the HTTP/1.1 pipeline. How do you solve the head-of-line blocking problem that occurs in HTTP/1.1? Best Regards Muhui Jiang 2016-05-04 18:19 GMT+08:00 Valentin V.
Bartenev : > On Wednesday 04 May 2016 11:25:11 Muhui Jiang wrote: > > Hi > > > > Different from HTTP1.1 pipeline, HTTP2 allows multiple request and > response > > messages to be in flight at the same time. I was wondering what the > > strategy Nginx adopt to implement this main feature. > > Nginx allows multiple request and responses in multiple connections using > HTTP/1.x as well. HTTP/2 changes nothing here (except it uses only one > connection, but it's not important from the basic architecture point of > view). > > > > > > Is every single stream correspond to a thread. If not, how can Nginx > > provide multiple parallel requests handling. If you can locate the > > correspond code for me, that would be great > > > > No, nginx uses asynchronous non-blocking event-driven architecture > instead of mapping requests into separate threads. > > It have used multiplexing of requests handling in single process many > years before HTTP/2 was invented. > > A more detailed explanation can be found here: > > https://www.nginx.com/blog/inside-nginx-how-we-designed-for-performance-scale/ > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 4 10:58:19 2016 From: nginx-forum at forum.nginx.org (kostbad) Date: Wed, 04 May 2016 06:58:19 -0400 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: References: Message-ID: <3f8368e1dd0b459714a6031420a9dcc0.NginxMailingListEnglish@forum.nginx.org> The nginx version is 1.2.6. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266590,266603#msg-266603 From nginx-forum at forum.nginx.org Wed May 4 11:12:18 2016 From: nginx-forum at forum.nginx.org (Sem9999) Date: Wed, 04 May 2016 07:12:18 -0400 Subject: How to turn off DNS caching Message-ID: Hi, I am trying to connect to my AWS RDS (mysql) instance using the AWS supplied DNS endpoint. The problem is, the IP of the instance changes periodically. It appears that nginx does not resolve the DNS name every time, but caches it. Therefore nginx initially works, then at some point later, things break. How can I turn off the DNS caching? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266604,266604#msg-266604 From vbart at nginx.com Wed May 4 11:18:52 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 04 May 2016 14:18:52 +0300 Subject: HTTP2 Multiplexing In-Reply-To: References: <9154086.t1Y2UnK1lN@vbart-laptop> Message-ID: <8566823.0OJPyq7u0f@vbart-laptop> On Wednesday 04 May 2016 18:50:44 Muhui Jiang wrote: > Hi > > >Nginx allows multiple request and responses in multiple connections using > >HTTP/1.x as well. HTTP/2 changes nothing here (except it uses only one > >connection, but it's not important from the basic architecture point of > >view). > > If so, it seems there is no difference or improvement of the implementation > on the feature of multiplexing compared with Http/1.1 pipeline. > > How do you solve the problem Head-of-line blocking occurred in Http/1.1. > [..] You have mixed multiplexing of data transferring over single connection, which is the feature of HTTP/2, with multiplexing of requests processing inside one process, which is the feature of nginx. These things have no relation to each other. Please, read the article I've pointed out, nginx doesn't need separate threads to process something in parallel. 
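The single-process multiplexing described here can be illustrated with a minimal sketch (illustrative only, not nginx code, which is written in C): one thread blocks only in a readiness-notification call (select/epoll/kqueue) and then services whichever connection has data, instead of dedicating a thread to each connection.

```python
import selectors
import socket

def read_one_message_each(conns):
    """Service several connections in one thread: wait for readiness
    events, then handle whichever connection is ready.  This is the
    event-driven model nginx workers use instead of a thread per request."""
    sel = selectors.DefaultSelector()
    for c in conns:
        c.setblocking(False)
        sel.register(c, selectors.EVENT_READ)
    results = {}
    while len(results) < len(conns):
        for key, _ in sel.select():
            sock = key.fileobj
            results[sock] = sock.recv(1024)  # reported ready, so no blocking
            sel.unregister(sock)
    sel.close()
    return results
```

The peers can send in any order; the loop never blocks waiting on one specific connection, which is what lets a single worker keep many requests in flight at once.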
The head-of-line blocking problem in HTTP/1.1 is present only if you use a limited number of connections, and HTTP/2 solves it to some degree but puts the limit and the problem at a different level. With HTTP/2 you now have a limit on virtual streams inside the HTTP/2 connection, and the HOL problem moves to the TCP level. wbr, Valentin V. Bartenev From jim at ohlste.in Wed May 4 11:48:05 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 4 May 2016 07:48:05 -0400 Subject: How to turn off DNS caching In-Reply-To: References: Message-ID: <170C9979-3E02-47CC-8BF9-6497CDCBA686@ohlste.in> Hello, On May 4, 2016, at 7:12 AM, Sem9999 wrote: > Hi, > > I am trying to connect to my AWS RDS (mysql) instance using the AWS supplied > DNS endpoint. > > The problem is, the IP of the instance changes periodically. > > It appears that nginx does not resolve the DNS name every time, but caches > it. > > Therefore nginx initially works, then at some point later, things break. > > How can I turn off the DNS caching? Add a "valid" parameter to your resolver directive: resolver 127.0.0.1 valid=5s; This would set it to cache for 5s. http://nginx.org/en/docs/http/ngx_http_core_module.html#resolver -- Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 4 12:27:58 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 May 2016 15:27:58 +0300 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: <3f8368e1dd0b459714a6031420a9dcc0.NginxMailingListEnglish@forum.nginx.org> References: <3f8368e1dd0b459714a6031420a9dcc0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160504122758.GT36620@mdounin.ru> Hello! On Wed, May 04, 2016 at 06:58:19AM -0400, kostbad wrote: > The nginx version is 1.2.6. That's not the "nginx -V" output you were asked for. Nevertheless, it's probably enough to conclude you should upgrade before doing anything else. The 1.2.x branch has not been supported for more than 3 years now.
-- Maxim Dounin http://nginx.org/ From sirtcp at gmail.com Wed May 4 12:52:35 2016 From: sirtcp at gmail.com (Muhammad Yousuf Khan) Date: Wed, 4 May 2016 17:52:35 +0500 Subject: Trailing Slash redirection poblem In-Reply-To: <20160503192250.GU9435@daoine.org> References: <20160430084006.GR9435@daoine.org> <20160502091812.GS9435@daoine.org> <20160503192250.GU9435@daoine.org> Message-ID: Thanks for the info. will check on that. On Wed, May 4, 2016 at 12:22 AM, Francis Daly wrote: > On Mon, May 02, 2016 at 05:48:12PM +0500, Muhammad Yousuf Khan wrote: > > Hi there, > > > Thanks for the reply, actually we have 5 categories. and then we have > > hundred plus post in each category. now when user click on our category. > he > > sees intro about the category and then list of all posts. > > when user click on catagory it open up like this > > https://xxxx.com/catagory/ > > It sounds to me like that is an internal-to-wordpress thing, so you > may be better off removing any clever nginx config and just configure > wordpress to create the links that you want it to create. > > (I do not know if it possible to configure wordpress like that.) > > > and when post is clicked it opens up like this > > https://xxxx.com/catagory/posts.html > > > > but now what client want is when some one open up the category it should > > open up like this. > > https://xxxx.com/catagory (with out trailing slash) > > When someone clicks on catagory, they are following a link in whatever > html was returned from their previous request. If that link has a trailing > slash, they'll ask for the trailing slash. > > If the content on the catagory page contains links like "post1.html", > then you will need the trailing slash -- or you will need to change > the content to be of the form "catagory/post1.html". That should all be > configurable within wordpress, if anywhere. 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed May 4 13:40:48 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 4 May 2016 16:40:48 +0300 Subject: Shared Memory Structure? In-Reply-To: <1a2ecbdac1bc62369aa163b48eca145a.NginxMailingListEnglish@forum.nginx.org> References: <1a2ecbdac1bc62369aa163b48eca145a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160504134048.GU36620@mdounin.ru> Hello! On Tue, May 03, 2016 at 05:45:05AM -0400, ben5192 wrote: > I am storing things in a shared memory zone allocated with > ngx_shared_memory_add and allocating each slab with ngx_slab_alloc_locked > (as these items don't need to be locked particularly). The problem I'm > having is that I add the shared memory with exactly the amount I need. > However, the final large thing to be allocated fails, but after this a lot > more smaller things are also allocated without any problem. There is enough > space for it but it has a problem with a large chunk when space is getting > tight. > Does anyone know why this is or how to fix it? My guess was that nginx > structures the shared memory in a way that means there is enough memory > (which I have made sure there is) but it is not continuous. > Thanks. Shared memory size as specified in an ngx_shared_memory_add() call is the total size of the memory region to be allocated by nginx. As long as you use it with the slab allocator - various slab allocator structures are allocated in this memory region, starting with the ngx_slab_pool_t structure, and followed by multiple ngx_slab_page_t structures. If you want to use shared memory for an allocation with a known size, you have to add some memory to account for these internal structures.
Note that it may not be trivial to calculate how much extra memory you'll need - see the slab allocator code for details. Adding at least 8 * ngx_pagesize should be a good idea. -- Maxim Dounin http://nginx.org/ From vbart at nginx.com Wed May 4 13:46:46 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 04 May 2016 16:46:46 +0300 Subject: (52) Empty reply from server In-Reply-To: References: <3612629.cEpatnRgP5@vbart-workstation> Message-ID: <1927484.Lg9YNvfiLY@vbart-workstation> On Wednesday 27 April 2016 21:51:33 Pankaj Chaudhary wrote: > Hi, > Thank you, > I got this point. > But in my case i need to set cookie value in header later read from header > the same value. > Is any example which i can follow for my requirement, can you suggest > please. > I believe there are no examples for your requirement, since it's not something that is expected to be done with headers. As I've already written before, saving and accessing the same value from the headers structure is a bad idea. I don't understand the reason why someone would ever want to access his own value in such a strange way. Maybe if you clarify what the purpose of your module is, a better solution will be found. wbr, Valentin V. Bartenev From paulo.leal at gmail.com Wed May 4 21:25:01 2016 From: paulo.leal at gmail.com (Paulo Leal) Date: Wed, 4 May 2016 18:25:01 -0300 Subject: openshift-nginx docker image running as non-root Message-ID: Hi, I have been playing around with the https://github.com/nginxinc/openshift-nginx dockerfile and trying to find a way to run nginx as non-root with openshift/k8/docker.
I am currently getting the error: nginx: [alert] could not open error log file: open() "/var/log/nginx/error.log" failed (13: Permission denied) 2016/05/04 20:51:09 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:5 2016/05/04 20:51:09 [emerg] 1#1: open() "/etc/nginx/conf.d/default.conf" failed (13: Permission denied) in /etc/nginx/nginx.conf:33 I have already added to my Dockerfile: Run ... && chmod 777 /etc/nginx/nginx.conf \ && chmod 777 /var/run \ && chmod 777 /etc/nginx/conf.d/default.conf I also run bash on the container and was able to "cat" the "default.conf" and the "nginx.conf" files. I am not sure if it's the correct place to put this question, but I got here from http://mailman.nginx.org/pipermail/nginx-devel/2015-November/007511.html (as you can notice, I have copied the beginning of the email from Scott's email. Hope you don't mind!=o) ) Best regards, Paulo Leal -------------- next part -------------- An HTML attachment was scrubbed... URL: From joyce at joycebabu.com Wed May 4 21:26:27 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Thu, 5 May 2016 02:56:27 +0530 Subject: Rewrite before regex location Message-ID: I am trying to migrate to Nginx + PHP-FPM from Apache + mod_php. I am currently using mod_rewrite to rewrite some virtual URIs ending in .php to actual PHP scripts. This works perfectly when using mod_php. But with Nginx + FPM, since we have to use proxy_pass, this is not working. When I add a regex location block to match the .php extension, it gets higher precedence, and my rewrite rules are not applied. How can I resolve this?
location /test/ { rewrite "^/test/([a-z]+).php$" test.php?q=$1 last; } location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^(.+?\.php)(/.*)$; set $fastcgi_script_name_custom $fastcgi_script_name; if (!-f $document_root$fastcgi_script_name) { set $fastcgi_script_name_custom "/cms/index.php"; } fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; } Thanks, Joyce Babu NB: This is a sample configuration. There are several such rewrite rules, hence I cannot use an exact match or regular expression location block to override the php proxy location block. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed May 4 21:50:42 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 4 May 2016 22:50:42 +0100 Subject: openshift-nginx docker image running as non-root In-Reply-To: References: Message-ID: <20160504215042.GX9435@daoine.org> On Wed, May 04, 2016 at 06:25:01PM -0300, Paulo Leal wrote: Hi there, Completely untested by me; and I've not used openshift or docker, but: > I have been playing around with the > https://github.com/nginxinc/openshift-nginx dockerfile and trying to find > a way to run run nginx as non-root with openshift/k8/docker. > I am currently getting the error: > nginx: [alert] could not open error log file: open() > "/var/log/nginx/error.log" failed (13: Permission denied) That says that the user you run as cannot open that file. ls -ld / /var /var/log /var/log/nginx ls -l /var/log/nginx/error.log You may need a "-Z" in there too, if you have some extra security enabled. Does your user have permission to write the current error.log file; or to create a new one? If not, do whatever it takes to make that possible. You do mention some "chmod" commands below, but none that refer to this directory or file.
> 2016/05/04 20:51:09 [warn] 1#1: the "user" directive makes sense only if > the master process runs with super-user privileges, ignored in > /etc/nginx/nginx.conf:5 That is harmless; if you intend to run as non-root, you can remove that directive from the config file. > 2016/05/04 20:51:09 [emerg] 1#1: open() "/etc/nginx/conf.d/default.conf" > failed (13: Permission denied) in /etc/nginx/nginx.conf:33 That suggests that your user can read /etc/nginx/nginx.conf, but cannot read /etc/nginx/conf.d/default.conf "ls -ld" or "ls -ldZ" every directory from the root to that one. Perhaps there is something there that shows why you are blocked. > I have alredy added to my Dockerfile: > Run ... > && chmod 777 /etc/nginx/nginx.conf \ > && chmod 777 /var/run \ > && chmod 777 /etc/nginx/conf.d/default.conf 777 is possibly excessive; but if it works for you, it works. If you don't have "x" permissions on /etc/nginx/conf.d, though, you probably won't be able to read the default.conf file within. > I also run bash on the container and was albe to "cat" the "default.conf" > and the "nginx.conf" files. Do you do that as the same user/group that you run nginx as? Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 4 22:00:18 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 4 May 2016 23:00:18 +0100 Subject: Rewrite before regex location In-Reply-To: References: Message-ID: <20160504220018.GY9435@daoine.org> On Thu, May 05, 2016 at 02:56:27AM +0530, Joyce Babu wrote: Hi there, > When I add a > regex location block to match .php extension, it gets higher precedence, > and my rewrite rules are not applied. > > How can I resolve this? > > location /test/ { > rewrite "^/test/([a-z]+).php$" test.php?q=$1 last; } Possibly using "location ^~ /test/" would work? http://nginx.org/r/location You may want to rewrite to /test.php (with the leading slash), though. Although, better might be to just fastcgi_pass directly, rather than rewriting. 
Something like (untested) location ^~ /test/ { fastcgi_param SCRIPT_FILENAME $document_root/test.php; fastcgi_param QUERY_STRING q=$uri; include fastcgi_params; fastcgi_pass 127.0.0.1:9000; } although you would want to change the QUERY_STRING line to match what you need; and you may need to switch the position of the "include", depending on how your fastcgi server handles repeated params. > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > > set $fastcgi_script_name_custom $fastcgi_script_name; > if (!-f $document_root$fastcgi_script_name) { > set $fastcgi_script_name_custom "/cms/index.php"; > } I suspect that it should be possible to do what you want to do there, with a "try_files". But I do not know the details. Good luck with it, f -- Francis Daly francis at daoine.org From joyce at joycebabu.com Wed May 4 22:52:11 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Thu, 5 May 2016 04:22:11 +0530 Subject: Rewrite before regex location In-Reply-To: <20160504220018.GY9435@daoine.org> References: <20160504220018.GY9435@daoine.org> Message-ID: Hi Francis, Thank you for the response. Possibly using "location ^~ /test/" would work? > > http://nginx.org/r/location > > You may want to rewrite to /test.php (with the leading slash), though. > > Although, better might be to just fastcgi_pass directly, rather than > rewriting. > > Something like (untested) > > location ^~ /test/ { > fastcgi_param SCRIPT_FILENAME $document_root/test.php; > fastcgi_param QUERY_STRING q=$uri; > include fastcgi_params; > fastcgi_pass 127.0.0.1:9000; > } > > although you would want to change the QUERY_STRING line to match what > you need; and you may need to switch the position of the "include", > depending on how your fastcgi server handles repeated params. > There are over 300 rewrites under 54 location blocks. I tried using ^~ as you suggested. Now the rewrite is working correctly, but the files are not executed. 
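In the spirit of the try_files idea mentioned above, one possible variant (an untested sketch; /cms/index.php is the front controller named earlier in the thread) replaces the "if (!-f ...)" check with an internal redirect:

```nginx
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+?\.php)(/.*)$;

    # If the script does not exist on disk, fall back to the CMS
    # front controller instead of testing with an "if" block.
    try_files $fastcgi_script_name /cms/index.php$is_args$args;

    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
```

Whether this behaves identically to the original "if (!-f ...)" version for requests carrying path info is not certain, which may be why the original poster kept the "if".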
The request is returning the actual PHP source file, not the HTML generated by executing the script. > > location ~ [^/]\.php(/|$) { > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > > > > set $fastcgi_script_name_custom $fastcgi_script_name; > > if (!-f $document_root$fastcgi_script_name) { > > set $fastcgi_script_name_custom "/cms/index.php"; > > } > > I suspect that it should be possible to do what you want to do there, > with a "try_files". But I do not know the details. > There is a CMS engine which will intercept all unmatched requests and check the database to see if there is an article with that URI. Some times it has to match existing directories without index.php. If I use try_files, it will either lead to a 403 error (if no index is specified), or would internally redirect the request to the index file (if it is specified), leading to 404 error. The if condition correctly handles all the non-existing files. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed May 4 23:15:34 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 5 May 2016 00:15:34 +0100 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> Message-ID: <20160504231534.GZ9435@daoine.org> On Thu, May 05, 2016 at 04:22:11AM +0530, Joyce Babu wrote: Hi there, > > Possibly using "location ^~ /test/" would work? > There are over 300 rewrites under 54 location blocks. If you've got a messy config with no common patterns, you've got a messy config with no common patterns, and there's not much you can do about it. If you can find common patterns, maybe you can make the config more maintainable (read: no top-level regex locations); but you don't want to break previously-working urls. > I tried using ^~ as you suggested. Now the rewrite is working correctly, > but the files are not executed. The request is returning the actual PHP > source file, not the HTML generated by executing the script. 
Can you show one configuration that leads to the php content being returned? If you rewrite /test/x.php to /test.php, /test.php should be handled in the "~ php" location. An alternative possibility could be to put these rewrites at server level rather than inside location blocks. That is unlikely to be great for efficiency; but only you can judge whether it could be adequate. > > > location ~ [^/]\.php(/|$) { > > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > > > > > > set $fastcgi_script_name_custom $fastcgi_script_name; > > > if (!-f $document_root$fastcgi_script_name) { > > > set $fastcgi_script_name_custom "/cms/index.php"; > > > } > > > > I suspect that it should be possible to do what you want to do there, > > with a "try_files". But I do not know the details. > > There is a CMS engine which will intercept all unmatched requests and check > the database to see if there is an article with that URI. Some times it has > to match existing directories without index.php. If I use try_files, it > will either lead to a 403 error (if no index is specified), or would > internally redirect the request to the index file (if it is specified), > leading to 404 error. The if condition correctly handles all the > non-existing files. There is more than one possible try_files configuration; but that does not matter: if you have a system that works for you, you can keep using it. Good luck with it, f -- Francis Daly francis at daoine.org From joyce at joycebabu.com Wed May 4 23:43:29 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Thu, 5 May 2016 05:13:29 +0530 Subject: Rewrite before regex location In-Reply-To: <20160504231534.GZ9435@daoine.org> References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> Message-ID: > > If you've got a messy config with no common patterns, you've got a messy > config with no common patterns, and there's not much you can do about it. 
> > If you can find common patterns, maybe you can make the config more > maintainable (read: no top-level regex locations); but you don't want > to break previously-working urls. > The site was initially using Apache + mod_php. Hence these ere not an issue. It was only when I tried to migrate to PHP-FPM, I realized the mistakes. Now the urls cannot be chanced due to SEO implications. > > > I tried using ^~ as you suggested. Now the rewrite is working correctly, > > but the files are not executed. The request is returning the actual PHP > > source file, not the HTML generated by executing the script. > > Can you show one configuration that leads to the php content being > returned? > > If you rewrite /test/x.php to /test.php, /test.php should be handled in > the "~ php" location. > I am sorry, I did not rewrite it to a location outside /test/, which was why the file content was being returned. Is it possible to do something like this? location /test/ { rewrite "^/test/([a-z]+).php$" /php-fpm/test/test.php?q=$1 last; } location ~ ^/php-fpm/ { location ~ [^/]\.php(/|$) { fastcgi_split_path_info ^/php-fpm(.+?\.php)(/.*)$; fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; include fastcgi_params; } } What I have tried to do here is rewrite to add a special prefix (/php-fpm) to the rewritten urls. and nest the php location block within it. Then use fastcgi_split_path_info to create new $fastcgi_script_name without the special prefix. I tried the above code, but it is not working. fastcgi_split_path_info is not generating $fastcgi_script_name without the /php-fpm prefix. > An alternative possibility could be to put these rewrites at server > level rather than inside location blocks. That is unlikely to be great > for efficiency; but only you can judge whether it could be adequate. 
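One possible reason the prefix is not being stripped: fastcgi_split_path_info only takes effect when the URI matches its regex, and ^/php-fpm(.+?\.php)(/.*)$ matches only when something follows ".php". For a request like /php-fpm/test/test.php there is no trailing path info, the regex fails to match, and $fastcgi_script_name keeps the full URI. An untested sketch that makes the second capture optional (and flattens the nested location for brevity):

```nginx
location ~ ^/php-fpm/.*[^/]\.php(/|$) {
    # The second capture may be empty, so URIs without path info still
    # match and $fastcgi_script_name loses the /php-fpm prefix.
    fastcgi_split_path_info ^/php-fpm(.+?\.php)((?:/.*)?)$;

    fastcgi_pass 127.0.0.1:9000;
    fastcgi_index index.php;
    include fastcgi_params;
}
```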
> > > > > location ~ [^/]\.php(/|$) { > > > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > > > > > > > > set $fastcgi_script_name_custom $fastcgi_script_name; > > > > if (!-f $document_root$fastcgi_script_name) { > > > > set $fastcgi_script_name_custom "/cms/index.php"; > > > > } > > > > > > I suspect that it should be possible to do what you want to do there, > > > with a "try_files". But I do not know the details. > > > > There is a CMS engine which will intercept all unmatched requests and > check > > the database to see if there is an article with that URI. Some times it > has > > to match existing directories without index.php. If I use try_files, it > > will either lead to a 403 error (if no index is specified), or would > > internally redirect the request to the index file (if it is specified), > > leading to 404 error. The if condition correctly handles all the > > non-existing files. > > There is more than one possible try_files configuration; but that does not > matter: if you have a system that works for you, you can keep using it. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Thu May 5 02:22:07 2016 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 5 May 2016 07:52:07 +0530 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> Message-ID: Hi , Can you try putting the rewrite in server{} block outside of all location{} blocks like rewrite "^/test/([a-z]+).php$" /test/test.php?q=$1 last; On Thu, May 5, 2016 at 5:13 AM, Joyce Babu wrote: >> If you've got a messy config with no common patterns, you've got a messy >> config with no common patterns, and there's not much you can do about it. 
>> >> If you can find common patterns, maybe you can make the config more >> maintainable (read: no top-level regex locations); but you don't want >> to break previously-working urls. > > > The site was initially using Apache + mod_php. Hence these ere not an issue. > It was only when > I tried to migrate to PHP-FPM, I realized the mistakes. Now the urls cannot > be chanced due to > SEO implications. > >> >> >> > I tried using ^~ as you suggested. Now the rewrite is working correctly, >> > but the files are not executed. The request is returning the actual PHP >> > source file, not the HTML generated by executing the script. >> >> Can you show one configuration that leads to the php content being >> returned? >> >> If you rewrite /test/x.php to /test.php, /test.php should be handled in >> the "~ php" location. > > > I am sorry, I did not rewrite it to a location outside /test/, which was why > the file content was being returned. > > Is it possible to do something like this? > > location /test/ { > rewrite "^/test/([a-z]+).php$" /php-fpm/test/test.php?q=$1 last; > } > > location ~ ^/php-fpm/ { > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^/php-fpm(.+?\.php)(/.*)$; > > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > include fastcgi_params; > } > } > > > What I have tried to do here is rewrite to add a special prefix (/php-fpm) > to the rewritten urls. and nest the php location block within it. Then use > fastcgi_split_path_info to create new $fastcgi_script_name without the > special prefix. I tried the above code, but it is not working. > fastcgi_split_path_info is not generating $fastcgi_script_name without the > /php-fpm prefix. > > >> >> An alternative possibility could be to put these rewrites at server >> level rather than inside location blocks. That is unlikely to be great >> for efficiency; but only you can judge whether it could be adequate. 
>> >> > > > location ~ [^/]\.php(/|$) { >> > > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; >> > > > >> > > > set $fastcgi_script_name_custom $fastcgi_script_name; >> > > > if (!-f $document_root$fastcgi_script_name) { >> > > > set $fastcgi_script_name_custom "/cms/index.php"; >> > > > } >> > > >> > > I suspect that it should be possible to do what you want to do there, >> > > with a "try_files". But I do not know the details. >> > >> > There is a CMS engine which will intercept all unmatched requests and >> > check >> > the database to see if there is an article with that URI. Some times it >> > has >> > to match existing directories without index.php. If I use try_files, it >> > will either lead to a 403 error (if no index is specified), or would >> > internally redirect the request to the index file (if it is specified), >> > leading to 404 error. The if condition correctly handles all the >> > non-existing files. >> >> There is more than one possible try_files configuration; but that does not >> matter: if you have a system that works for you, you can keep using it. >> >> Good luck with it, >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Anoop P Alias From pankajitbhu at gmail.com Thu May 5 04:57:46 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Thu, 5 May 2016 10:27:46 +0530 Subject: (52) Empty reply from server In-Reply-To: <1927484.Lg9YNvfiLY@vbart-workstation> References: <3612629.cEpatnRgP5@vbart-workstation> <1927484.Lg9YNvfiLY@vbart-workstation> Message-ID: Hi, thank you! My module is basically for resource protection. I have already running my module on other servers also. My module follow below steps. 
-Generate a cookie and write it in the response header
-Read the value back from the header when the cookie is needed

IIS has the setheader() API to set a header and getheader() to read one.
Apache has apr_table_get() to read a value from the headers and
apr_table_set() to set one.

Do we have similar APIs in nginx? How can I achieve the same kind of
behavior in nginx?

Regards,
Pankaj

On Wed, May 4, 2016 at 7:16 PM, Valentin V. Bartenev wrote:

> On Wednesday 27 April 2016 21:51:33 Pankaj Chaudhary wrote:
> > Hi,
> > Thank you, I got this point.
> > But in my case I need to set a cookie value in a header and later read
> > the same value back from the header.
> > Is there any example which I can follow for my requirement? Can you
> > suggest one, please?
>
> I believe there are no examples for your requirement, since it's not
> something that is expected to be done with headers.
>
> As I've already written before, saving and accessing the same value from
> the headers structure is a bad idea. I don't understand why someone
> would ever want to access his own value in such a strange way.
>
> Maybe if you clarify what the purpose of your module is, a better solution
> will be found.
>
> wbr, Valentin V. Bartenev
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org  Thu May  5 05:51:07 2016
From: nginx-forum at forum.nginx.org (kostbad)
Date: Thu, 05 May 2016 01:51:07 -0400
Subject: ssl test causes nginx to crash (SSL_do_handshake() failed)
In-Reply-To: <20160504122758.GT36620@mdounin.ru>
References: <20160504122758.GT36620@mdounin.ru>
Message-ID: <5692117c18145dd9f6c505a79d05d10f.NginxMailingListEnglish@forum.nginx.org>

Sorry, this is the output I get:

Thanks, it's probably time to update my system.
nginx version: nginx/1.2.6 built by gcc 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) TLS SNI support enabled configure arguments: --prefix=/etc/nginx/ --sbin-path=/usr/sbin/nginx --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 --with-cc-opt='-O2 -g' Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266590,266637#msg-266637 From christianivanov2016 at gmail.com Thu May 5 08:25:18 2016 From: christianivanov2016 at gmail.com (Christian Ivanov) Date: Thu, 5 May 2016 11:25:18 +0300 Subject: Replace apache with nginx Message-ID: Hello there. I want to migrate my website only on nginx. I setup everything, but have issue with my htaccess. Here is the file: [root at server]# cat .htaccess RewriteEngine On RewriteRule ^audio-(.*)\.html$ %{ENV:BASE}/audio_redirect_mask.php [L] RewriteRule ^s/(.*)$ %{ENV:BASE}/audio_redirect_source.php?u=$1 [L] [root at server]# Somebody to have good idea, how can I replace this htaccess and execute rewrite, without mod_php with apache or php-fpm? 
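For reference, a literal nginx translation of the two RewriteRules above might look like the sketch below. This is an untested assumption: it presumes the PHP files sit in the nginx document root, that a separate `location ~ \.php$` block passes them to PHP-FPM, and it simply drops the `%{ENV:BASE}` prefix, which has no direct nginx equivalent:

```nginx
# Hypothetical sketch, not a verified config.

# RewriteRule ^audio-(.*)\.html$ %{ENV:BASE}/audio_redirect_mask.php [L]
location ~* "^/audio-.*\.html$" {
    rewrite ^ /audio_redirect_mask.php last;  # query string (?j=...) is preserved
}

# RewriteRule ^s/(.*)$ %{ENV:BASE}/audio_redirect_source.php?u=$1 [L]
location ~* "^/s/(?<u>.*)$" {
    rewrite ^ /audio_redirect_source.php?u=$u last;
}
```

Both rewrites end in `last`, mirroring Apache's internal `[L]` rewrite (no external redirect), so the rewritten URI is then handed to whatever PHP location exists in the server block.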
Here is the source of both PHP files: ------------------------------------------------------------------------------------------------------------ [root at server]# cat audio_redirect_mask.php From nginx-forum at forum.nginx.org Thu May 5 10:35:57 2016 From: nginx-forum at forum.nginx.org (supersnd) Date: Thu, 05 May 2016 06:35:57 -0400 Subject: nginx wrong with systemd Message-ID: <5a09436a25fe7f9f483d3f8e08df4ceb.NginxMailingListEnglish@forum.nginx.org> default.conf location / { #root /usr/share/nginx/html; root /sdd/; #index index.html index.htm; autoindex on; [root at centos /]# /usr/sbin/nginx -c /etc/nginx/nginx.conf I can read the files [root at centos /]systemd start nginx 403 Forbidden nginx/1.9.15 I add User=root to nginx.serivce the question still have Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266640,266640#msg-266640 From nginx-forum at forum.nginx.org Thu May 5 12:12:24 2016 From: nginx-forum at forum.nginx.org (kostbad) Date: Thu, 05 May 2016 08:12:24 -0400 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de> References: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de> Message-ID: How do i update to the latest stable version of nginx on a CentOS 6.7 server? Are there any precautions before the update? Will if affect my current settings-conf ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266590,266641#msg-266641 From vbart at nginx.com Thu May 5 14:02:50 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Thu, 05 May 2016 17:02:50 +0300 Subject: (52) Empty reply from server In-Reply-To: References: <1927484.Lg9YNvfiLY@vbart-workstation> Message-ID: <3210444.EYQ3JPFaF7@vbart-workstation> On Thursday 05 May 2016 10:27:46 Pankaj Chaudhary wrote: > Hi, > > thank you! > My module is basically for resource protection. > I have already running my module on other servers also. > My module follow below steps. 
> -Generate cookie and write in response header > -read from header when cookie is needed > > IIS have API setheader() to set header and getheader() to get header. > Apache have apr_table_get() to get value from header and apr_table_set() to > set the value in header. > > Do we have similar APIs in nginx? > > How i can achieve same kind of behavior in nginx? > [..] If you want to set a response header, then should look to the source of ngx_http_headers_module, which is responsible for adding headers to output. For reading cookies you can look to the $http_cookie variable implementation. Please, use "grep" for search and read the code. You can implement your module a way easier if you just provide a directive, that can use the variable to read some value, and adds a variable to set the value. Then it can be used like this: protection_value $cookie_key; add_header Set-Cookie "key=$protection_value"; You can also check the ngx_http_secure_link_module: http://nginx.org/en/docs/http/ngx_http_secure_link_module.html wbr, Valentin V. Bartenev From luky-37 at hotmail.com Thu May 5 15:45:15 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 5 May 2016 17:45:15 +0200 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: References: <20160504092625.Horde.HCkp7tFL1u3nMLkYoM7a-ra@andreasschulze.de>, Message-ID: > nginx version: nginx/1.2.6 > built by gcc 4.4.4 20100726 (Red Hat 4.4.4-13) (GCC) > [...] > CentOS 6.7 server Try disabling kerberos cipher suites [1], you may be hitting some obscure CentOS/RedHat libc issues [2]. 
[1] https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=nginx-1.2.6&openssl=1.0.1e&hsts=no&profile=old [2] http://blog.tinola.com/?e=36 From al-nginx at none.at Thu May 5 15:57:07 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Thu, 05 May 2016 17:57:07 +0200 Subject: openshift-nginx docker image running as non-root In-Reply-To: <20160504215042.GX9435@daoine.org> References: <20160504215042.GX9435@daoine.org> Message-ID: <5f55418dff83d1e7b286310f2c205558@none.at> Hi. Am 04-05-2016 23:50, schrieb Francis Daly: > On Wed, May 04, 2016 at 06:25:01PM -0300, Paulo Leal wrote: > > Hi there, > > Completely untested by me; and I've not used openshift or docker, but: > >> I have been playing around with the >> https://github.com/nginxinc/openshift-nginx dockerfile and trying to >> find >> a way to run run nginx as non-root with openshift/k8/docker. >> >> I am currently getting the error: >> nginx: [alert] could not open error log file: open() >> "/var/log/nginx/error.log" failed (13: Permission denied) > > That says that the user you run as cannot open that file. > > ls -ld / /var /var/log /var/log/nginx > ls -l /var/log/nginx/error.log > > You may need a "-Z" in there too, if you have some extra security > enabled. > > Does your user have permission to write the current error.log file; > or to create a new one? If not, do whatever it takes to make that > possible. > > You do mention some "chmod" commands below, but none that refer to this > directory or file. In openshift you normally not know with which user your run. https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#scc-strategies I think the default is 'MustRunAsRange', this suggest this file. https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_master/templates/master.yaml.v1.j2#L177 There is a solution to run for a dedicated user id. 
https://docs.openshift.org/latest/creating_images/guidelines.html#use-uid You should change the location of the pid file and you can use a syslog server for the logs. I have created a more or less ready to use solution. https://github.com/git001/nginx-osev3 Please tell me if the solution is helpful for you. I can then make a pull request to the https://github.com/nginxinc/openshift-nginx . >> I have alredy added to my Dockerfile: >> Run ... >> && chmod 777 /etc/nginx/nginx.conf \ >> && chmod 777 /var/run \ >> && chmod 777 /etc/nginx/conf.d/default.conf > > 777 is possibly excessive; but if it works for you, it works. If you > don't have "x" permissions on /etc/nginx/conf.d, though, you probably > won't be able to read the default.conf file within. > >> I also run bash on the container and was albe to "cat" the >> "default.conf" >> and the "nginx.conf" files. > > Do you do that as the same user/group that you run nginx as? To OP: the output of ' id && ps axfu && ls -laR /etc/nginx/ ' would be interesting. In general the Images in openshift are running with a random user id which it makes difficult to set proper file permissions :-/ You can define some service accounts to be able to run as root, this should be used very carefully as in non PaaS environments ;-). Cheers Aleks From francis at daoine.org Thu May 5 16:44:24 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 5 May 2016 17:44:24 +0100 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> Message-ID: <20160505164424.GC9435@daoine.org> On Thu, May 05, 2016 at 05:13:29AM +0530, Joyce Babu wrote: Hi there, > Is it possible to do something like this? 
> > location /test/ { > rewrite "^/test/([a-z]+).php$" /php-fpm/test/test.php?q=$1 last; > } > > location ~ ^/php-fpm/ { > location ~ [^/]\.php(/|$) { > fastcgi_split_path_info ^/php-fpm(.+?\.php)(/.*)$; > > fastcgi_pass 127.0.0.1:9000; > fastcgi_index index.php; > include fastcgi_params; > } > } > > > What I have tried to do here is rewrite to add a special prefix (/php-fpm) > to the rewritten urls. and nest the php location block within it. Then use > fastcgi_split_path_info to create new $fastcgi_script_name without the > special prefix. I tried the above code, but it is not working. > fastcgi_split_path_info is not generating $fastcgi_script_name without the > /php-fpm prefix. That's because your fastcgi_split_path_info pattern does not match - .php is not followed by / in your rewritten url. Because of the location{} you are in, it is probably simplest to just replace the second capture part of that pattern with (.*)$. Cheers, f -- Francis Daly francis at daoine.org From paulo.leal at gmail.com Thu May 5 17:14:05 2016 From: paulo.leal at gmail.com (Paulo Leal) Date: Thu, 5 May 2016 14:14:05 -0300 Subject: openshift-nginx docker image running as non-root In-Reply-To: <5f55418dff83d1e7b286310f2c205558@none.at> References: <20160504215042.GX9435@daoine.org> <5f55418dff83d1e7b286310f2c205558@none.at> Message-ID: Hi, I added the lines to my dockerfile Run ... && chmod 777 /var/log/nginx / && rm -rf /var/log/nginx/error.log / && rm -rf /var/log/nginx/access.log It worked for me! Thanks for your help. Paulo Leal On Thu, May 5, 2016 at 12:57 PM, Aleksandar Lazic wrote: > Hi. > > Am 04-05-2016 23:50, schrieb Francis Daly: > >> On Wed, May 04, 2016 at 06:25:01PM -0300, Paulo Leal wrote: >> >> Hi there, >> >> Completely untested by me; and I've not used openshift or docker, but: >> >> I have been playing around with the >>> https://github.com/nginxinc/openshift-nginx dockerfile and trying to >>> find >>> a way to run run nginx as non-root with openshift/k8/docker. 
>>> >>> I am currently getting the error: >>> nginx: [alert] could not open error log file: open() >>> "/var/log/nginx/error.log" failed (13: Permission denied) >>> >> >> That says that the user you run as cannot open that file. >> >> ls -ld / /var /var/log /var/log/nginx >> ls -l /var/log/nginx/error.log >> >> You may need a "-Z" in there too, if you have some extra security enabled. >> >> Does your user have permission to write the current error.log file; >> or to create a new one? If not, do whatever it takes to make that >> possible. >> >> You do mention some "chmod" commands below, but none that refer to this >> directory or file. >> > > In openshift you normally not know with which user your run. > > > https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#scc-strategies > > I think the default is 'MustRunAsRange', this suggest this file. > > > https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_master/templates/master.yaml.v1.j2#L177 > > There is a solution to run for a dedicated user id. > > https://docs.openshift.org/latest/creating_images/guidelines.html#use-uid > > You should change the location of the pid file and you can use a syslog > server for the logs. I have created a more or less ready to use solution. > > https://github.com/git001/nginx-osev3 > > Please tell me if the solution is helpful for you. > > I can then make a pull request to the > https://github.com/nginxinc/openshift-nginx . > > I have alredy added to my Dockerfile: >>> Run ... >>> && chmod 777 /etc/nginx/nginx.conf \ >>> && chmod 777 /var/run \ >>> && chmod 777 /etc/nginx/conf.d/default.conf >>> >> >> 777 is possibly excessive; but if it works for you, it works. If you >> don't have "x" permissions on /etc/nginx/conf.d, though, you probably >> won't be able to read the default.conf file within. >> >> I also run bash on the container and was albe to "cat" the "default.conf" >>> and the "nginx.conf" files. 
>>> >> >> Do you do that as the same user/group that you run nginx as? >> > > To OP: > the output of ' id && ps axfu && ls -laR /etc/nginx/ ' would be > interesting. > > In general the Images in openshift are running with a random user id which > it makes difficult to set proper file permissions :-/ > You can define some service accounts to be able to run as root, this > should be used very carefully as in non PaaS environments ;-). > > Cheers > Aleks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiangmuhui at gmail.com Thu May 5 17:57:41 2016 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Fri, 6 May 2016 01:57:41 +0800 Subject: HTTP2 Multiplexing In-Reply-To: <8566823.0OJPyq7u0f@vbart-laptop> References: <9154086.t1Y2UnK1lN@vbart-laptop> <8566823.0OJPyq7u0f@vbart-laptop> Message-ID: Hi Thanks Valentin. I've know the principle Best Regards Muhui Jiang 2016-05-04 19:18 GMT+08:00 Valentin V. Bartenev : > On Wednesday 04 May 2016 18:50:44 Muhui Jiang wrote: > > Hi > > > > >Nginx allows multiple request and responses in multiple connections > using > > >HTTP/1.x as well. HTTP/2 changes nothing here (except it uses only one > > >connection, but it's not important from the basic architecture point of > > >view). > > > > If so, it seems there is no difference or improvement of the > implementation > > on the feature of multiplexing compared with Http/1.1 pipeline. > > > > How do you solve the problem Head-of-line blocking occurred in Http/1.1. > > > [..] > > You have mixed multiplexing of data transferring over single connection, > which is the feature of HTTP/2, with multiplexing of requests processing > inside one process, which is the feature of nginx. > > These things have no relation to each other. 
> > Please, read the article I've pointed out, nginx doesn't need separate > threads to process something in parallel. > > The Head-of-line blocking problem in HTTP/1.1 is presented only if you > use limited number of connections, and HTTP/2 solves it in some degree > but puts the limit and the problem on different level. > > With HTTP/2 now you have limit of virtual streams inside HTTP/2 > connection and the HOL problem on TCP level. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri May 6 02:37:58 2016 From: nginx-forum at forum.nginx.org (bossfrog) Date: Thu, 05 May 2016 22:37:58 -0400 Subject: proxy cache + pseudo-streaming for mp4/flv Message-ID: Hi guys! In order to make the `proxy cache` and `pseudo-streaming for mp4/flv` work together, I do a rough hacking on nginx 1.9.12 (there is my patch: https://pastebin.mozilla.org/8869689). It works well apparently, but I wonder that if I did it right? or I was taking a risk? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266660,266660#msg-266660 From nginx-forum at forum.nginx.org Fri May 6 03:02:20 2016 From: nginx-forum at forum.nginx.org (supersnd) Date: Thu, 05 May 2016 23:02:20 -0400 Subject: nginx wrong with systemd In-Reply-To: <5a09436a25fe7f9f483d3f8e08df4ceb.NginxMailingListEnglish@forum.nginx.org> References: <5a09436a25fe7f9f483d3f8e08df4ceb.NginxMailingListEnglish@forum.nginx.org> Message-ID: question is on selinux, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266640,266661#msg-266661 From reallfqq-nginx at yahoo.fr Fri May 6 11:43:20 2016 From: reallfqq-nginx at yahoo.fr (B.R.) 
Date: Fri, 6 May 2016 13:43:20 +0200
Subject: Replace apache with nginx
In-Reply-To: 
References: 
Message-ID: 

There is plenty of information around here for you to start. Instead of
coming here for pre-cooked recipes, you should show you at least tried...
You could read the docs or some tutorials from quality sources.

Since I am in a good mood, Apache redirect rule #2 *might* (note emphasis)
transform into this:

location ~* ^/s/(?<source>.*)$ {
    return 301 /audio_redirect_source.php?u=$source;
}

Why are you providing backend code again? It really feels like 'I do not
understand how the whole thing works and/or I have not thought about that
stuff at all, please do it for me'.
---
*B. R.*

On Thu, May 5, 2016 at 10:25 AM, Christian Ivanov <
christianivanov2016 at gmail.com> wrote:

> Hello there. I want to migrate my website only on nginx. I setup
> everything, but have issue with my htaccess.
>
> Here is the file:
>
> [root at server]# cat .htaccess
>
> RewriteEngine On
> RewriteRule ^audio-(.*)\.html$ %{ENV:BASE}/audio_redirect_mask.php [L]
> RewriteRule ^s/(.*)$ %{ENV:BASE}/audio_redirect_source.php?u=$1 [L]
>
> [root at server]#
>
> Somebody to have good idea, how can I replace this htaccess and execute
> rewrite, without mod_php with apache or php-fpm?
>
> Here is the source of both PHP files:
>
> ------------------------------------------------------------------------------------------------------------
>
> [root at server]# cat audio_redirect_mask.php
>
> if (empty($_GET['j'])) {
> $url = '/';
> } else {
> $str = str_replace(array('-', '_'), array('/', '+'), $_GET['j']);
> $url = '/s/' .
base64_encode(openssl_decrypt($str, 'AES-256-CBC', > 'somesecretpassword2222', 0, '1234567891011121000')); > } > > header("location: $url", true, 302); > > ------------------------------------------------------------------------------------------------------------ > > [root at server]# cat audio_redirect_source.php > > if ($_GET['u']) { > header("X-Robots-Tag: noindex", true); > header('Location: ' . base64_decode($_GET['u']) , true, 302); > } else { > header("X-Robots-Tag: noindex", true); > header("location: /"); > > ------------------------------------------------------------------------------------------------------------ > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Fri May 6 11:51:46 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Fri, 6 May 2016 13:51:46 +0200 Subject: Rewrite before regex location In-Reply-To: <20160505164424.GC9435@daoine.org> References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> <20160505164424.GC9435@daoine.org> Message-ID: As a sidenote, why using location ~ ^/php-fpm/ and not location /php-fpm/ ? Although being distant to your case (I see others are helping you nicely :o) ), I can but particularly note & enjoy the use of a non-greedy modifier in the regex part of fastcgi_split_path_info intended to result in $fastcgi_script_name. That detail means a lot. Keep up the good work :o) ?? --- *B. R.* On Thu, May 5, 2016 at 6:44 PM, Francis Daly wrote: > On Thu, May 05, 2016 at 05:13:29AM +0530, Joyce Babu wrote: > > Hi there, > > > Is it possible to do something like this? 
> > > > location /test/ { > > rewrite "^/test/([a-z]+).php$" /php-fpm/test/test.php?q=$1 last; > > } > > > > location ~ ^/php-fpm/ { > > location ~ [^/]\.php(/|$) { > > fastcgi_split_path_info ^/php-fpm(.+?\.php)(/.*)$; > > > > fastcgi_pass 127.0.0.1:9000; > > fastcgi_index index.php; > > include fastcgi_params; > > } > > } > > > > > > What I have tried to do here is rewrite to add a special prefix > (/php-fpm) > > to the rewritten urls. and nest the php location block within it. Then > use > > fastcgi_split_path_info to create new $fastcgi_script_name without the > > special prefix. I tried the above code, but it is not working. > > fastcgi_split_path_info is not generating $fastcgi_script_name without > the > > /php-fpm prefix. > > That's because your fastcgi_split_path_info pattern does not match - > .php is not followed by / in your rewritten url. > > Because of the location{} you are in, it is probably simplest to just > replace the second capture part of that pattern with (.*)$. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Fri May 6 12:10:39 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Fri, 06 May 2016 14:10:39 +0200 Subject: openshift-nginx docker image running as non-root In-Reply-To: References: <20160504215042.GX9435@daoine.org> <5f55418dff83d1e7b286310f2c205558@none.at> Message-ID: Am 05-05-2016 19:14, schrieb Paulo Leal: > Hi, > > I added the lines to my dockerfile > > Run ... > && chmod 777 /var/log/nginx / > > && rm -rf /var/log/nginx/error.log / > && rm -rf /var/log/nginx/access.log > > It worked for me! When you want to read the files you call ### oc rsh cat /var/log/nginx/error.log ### or something similar, right. 
Would it not be nicer to get it via oc logs -f oc logs -f I think for POC (Prove of concept) the local logs setup woks but when you want more production readiness you should consider to log to a syslog server. IMHO. Maybe you can reuse the aggregating logs setup from openshift. https://docs.openshift.org/latest/install_config/aggregate_logging.html with http://docs.fluentd.org/articles/in_syslog or something similar. Cheers Aleks > Thanks for your help. > > Paulo Leal > > On Thu, May 5, 2016 at 12:57 PM, Aleksandar Lazic > wrote: > >> Hi. >> >> Am 04-05-2016 23:50, schrieb Francis Daly: >> On Wed, May 04, 2016 at 06:25:01PM -0300, Paulo Leal wrote: >> >> Hi there, >> >> Completely untested by me; and I've not used openshift or docker, but: >> >> I have been playing around with the >> https://github.com/nginxinc/openshift-nginx dockerfile and trying to >> find >> a way to run run nginx as non-root with openshift/k8/docker. >> >> I am currently getting the error: >> nginx: [alert] could not open error log file: open() >> "/var/log/nginx/error.log" failed (13: Permission denied) >> >> That says that the user you run as cannot open that file. >> >> ls -ld / /var /var/log /var/log/nginx >> ls -l /var/log/nginx/error.log >> >> You may need a "-Z" in there too, if you have some extra security >> enabled. >> >> Does your user have permission to write the current error.log file; >> or to create a new one? If not, do whatever it takes to make that >> possible. >> >> You do mention some "chmod" commands below, but none that refer to >> this >> directory or file. > > In openshift you normally not know with which user your run. > > https://docs.openshift.org/latest/architecture/additional_concepts/authorization.html#scc-strategies > > I think the default is 'MustRunAsRange', this suggest this file. > > https://github.com/openshift/openshift-ansible/blob/master/roles/openshift_master/templates/master.yaml.v1.j2#L177 > > There is a solution to run for a dedicated user id. 
> > https://docs.openshift.org/latest/creating_images/guidelines.html#use-uid > > You should change the location of the pid file and you can use a syslog > server for the logs. I have created a more or less ready to use > solution. > > https://github.com/git001/nginx-osev3 > > Please tell me if the solution is helpful for you. > > I can then make a pull request to the > https://github.com/nginxinc/openshift-nginx . > >>> I have alredy added to my Dockerfile: >>> Run ... >>> && chmod 777 /etc/nginx/nginx.conf \ >>> && chmod 777 /var/run \ >>> && chmod 777 /etc/nginx/conf.d/default.conf >> >> 777 is possibly excessive; but if it works for you, it works. If you >> don't have "x" permissions on /etc/nginx/conf.d, though, you probably >> won't be able to read the default.conf file within. >> >>> I also run bash on the container and was albe to "cat" the >>> "default.conf" >>> and the "nginx.conf" files. >> >> Do you do that as the same user/group that you run nginx as? > > To OP: > the output of ' id && ps axfu && ls -laR /etc/nginx/ ' would be > interesting. > > In general the Images in openshift are running with a random user id > which it makes difficult to set proper file permissions :-/ > You can define some service accounts to be able to run as root, this > should be used very carefully as in non PaaS environments ;-). 
> > Cheers > Aleks > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From christophe.bresso at laposte.net Fri May 6 13:10:37 2016 From: christophe.bresso at laposte.net (Christophe Bresso) Date: Fri, 6 May 2016 15:10:37 +0200 Subject: a strange log Message-ID: <572C97CD.9030605@laposte.net> Hi, I have a strange log from 169.229.3.91 (University of California at Berkeley ISTDATA (NET-169-229-0-0-2) 169.229.0.0 - 169.229.255.255). [06/May/2016:13:03:10 +0200] .... 5\xF5\x13\xED\xCA upstream_response_time - msec 1462532590.541 request_time 0.150 Is it usual to use command char in header ? Is this an exploit or something else ? Best regards Christophe From nginx-forum at forum.nginx.org Fri May 6 19:22:25 2016 From: nginx-forum at forum.nginx.org (Issam2204) Date: Fri, 06 May 2016 15:22:25 -0400 Subject: Nginx always searches index.php in the wrong directory Message-ID: Hello! I have a web root directory pointing to /var/www/html. I have several subfolders in /var/www/html where I host different projects. Example: /var/www/html/chatserver When I try to access http:// MYSERVER.net/chatserver I always have an error because Nginx tries to search for the index.php in the root web folder instead of the actual subfolder. 
This is my default server: server { listen 80; listen [::]:80; server_name MYSERVER.net www.MYSERVER.net; return 301 https://$host$request_uri; } server { listen 443 ssl; listen [::]:443 ssl; server_name MYSERVER.net www.MYSERVER.net; ssl_certificate /etc/letsencrypt/live/MYSERVER.net/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/MYSERVER.net/privkey.pem; ssl_trusted_certificate /etc/letsencrypt/live/MYSERVER.net/fullchain.pem; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_prefer_server_ciphers on; ssl_dhparam /etc/ssl/certs/dhparam.pem; ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA'; ssl_session_timeout 1d; ssl_session_cache shared:SSL:50m; ssl_stapling on; ssl_stapling_verify on; add_header Strict-Transport-Security max-age=15768000; resolver 8.8.8.8 8.8.4.4; root /var/www/html; index index.php index.html index.htm; rewrite ^(.*\.php)(/)(.*)$ $1?file=/$3 last; location / { # try_files $uri $uri/ =404; try_files $uri $uri/ /index.php?q=$uri&$args; } location ^~ /.well-known/ { allow all; } location ~ [^/]\.php(/|$) { fastcgi_index index.php; fastcgi_pass unix:/var/run/php5-fpm.sock; include fastcgi_params; fastcgi_split_path_info ^(.+\.php)(/.+)$; fastcgi_param PATH_INFO $fastcgi_path_info; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } } Any help/ideas on how to solve 
this? Thanks for reading! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266674,266674#msg-266674 From joyce at joycebabu.com Fri May 6 20:16:50 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Sat, 7 May 2016 01:46:50 +0530 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> Message-ID: Thank you for the suggestion, Anoop. I did not want to do that since it would be evaluated for every request. On Thu, May 5, 2016 at 7:52 AM, Anoop Alias wrote: > Hi , > > Can you try putting the rewrite in server{} block outside of all > location{} blocks like > > rewrite "^/test/([a-z]+).php$" /test/test.php?q=$1 last; > > > > On Thu, May 5, 2016 at 5:13 AM, Joyce Babu wrote: > >> If you've got a messy config with no common patterns, you've got a messy > >> config with no common patterns, and there's not much you can do about > it. > >> > >> If you can find common patterns, maybe you can make the config more > >> maintainable (read: no top-level regex locations); but you don't want > >> to break previously-working urls. > > > > > > The site was initially using Apache + mod_php. Hence these ere not an > issue. > > It was only when > > I tried to migrate to PHP-FPM, I realized the mistakes. Now the urls > cannot > > be chanced due to > > SEO implications. > > > >> > >> > >> > I tried using ^~ as you suggested. Now the rewrite is working > correctly, > >> > but the files are not executed. The request is returning the actual > PHP > >> > source file, not the HTML generated by executing the script. > >> > >> Can you show one configuration that leads to the php content being > >> returned? > >> > >> If you rewrite /test/x.php to /test.php, /test.php should be handled in > >> the "~ php" location. > > > > > > I am sorry, I did not rewrite it to a location outside /test/, which was > why > > the file content was being returned. > > > > Is it possible to do something like this? 
> > > > location /test/ { > > rewrite "^/test/([a-z]+).php$" /php-fpm/test/test.php?q=$1 last; > > } > > > > location ~ ^/php-fpm/ { > > location ~ [^/]\.php(/|$) { > > fastcgi_split_path_info ^/php-fpm(.+?\.php)(/.*)$; > > > > fastcgi_pass 127.0.0.1:9000; > > fastcgi_index index.php; > > include fastcgi_params; > > } > > } > > > > > > What I have tried to do here is rewrite to add a special prefix > (/php-fpm) > > to the rewritten urls. and nest the php location block within it. Then > use > > fastcgi_split_path_info to create new $fastcgi_script_name without the > > special prefix. I tried the above code, but it is not working. > > fastcgi_split_path_info is not generating $fastcgi_script_name without > the > > /php-fpm prefix. > > > > > >> > >> An alternative possibility could be to put these rewrites at server > >> level rather than inside location blocks. That is unlikely to be great > >> for efficiency; but only you can judge whether it could be adequate. > >> > >> > > > location ~ [^/]\.php(/|$) { > >> > > > fastcgi_split_path_info ^(.+?\.php)(/.*)$; > >> > > > > >> > > > set $fastcgi_script_name_custom $fastcgi_script_name; > >> > > > if (!-f $document_root$fastcgi_script_name) { > >> > > > set $fastcgi_script_name_custom "/cms/index.php"; > >> > > > } > >> > > > >> > > I suspect that it should be possible to do what you want to do > there, > >> > > with a "try_files". But I do not know the details. > >> > > >> > There is a CMS engine which will intercept all unmatched requests and > >> > check > >> > the database to see if there is an article with that URI. Some times > it > >> > has > >> > to match existing directories without index.php. If I use try_files, > it > >> > will either lead to a 403 error (if no index is specified), or would > >> > internally redirect the request to the index file (if it is > specified), > >> > leading to 404 error. The if condition correctly handles all the > >> > non-existing files. 
> >> > >> There is more than one possible try_files configuration; but that does > not > >> matter: if you have a system that works for you, you can keep using it. > >> > >> Good luck with it, > >> > >> f > >> -- > >> Francis Daly francis at daoine.org > >> > >> _______________________________________________ > >> nginx mailing list > >> nginx at nginx.org > >> http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From joyce at joycebabu.com Fri May 6 20:35:54 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Sat, 7 May 2016 02:05:54 +0530 Subject: Rewrite before regex location In-Reply-To: <20160505164424.GC9435@daoine.org> References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> <20160505164424.GC9435@daoine.org> Message-ID: > > That's because your fastcgi_split_path_info pattern does not match - > .php is not followed by / in your rewritten url. > > Because of the location{} you are in, it is probably simplest to just > replace the second capture part of that pattern with (.*)$. > Thank you Francis. It worked after following your suggestions. But I have found a different approach, that is working for all the cases that I tested. As per your suggestions, I have removed the if statement with try_files and have used the ~^ modifier to ensure that the location blocks get higher precedence than the regex block. 1. index index.php; 2. 3. location ~ /\.ht { 4. deny all; 5. } 6. 7. # .php catchall regex block 8. location ~ [^/]\.php$ { 9. # Handle .php uris, except when they are overridden below 10. include inc/php-fpm.conf; 11. } 12. 13. 
location / { 14. if ($request_uri ~ "^[^?]*?//") { 15. # Remove double slashes from url 16. rewrite "^" $scheme://$host$uri permanent; 17. } 18. 19. # Handle non .php files and directory indexes 20. try_files $uri $uri/index.php; 21. } 22. 23. location /generated/imgs/ { 24. # Serve file if it exists, else use rewriting (for dynamically generated and cached files) 25. # This ensures that rewrite rules are not applied to existing files 26. try_files $uri @rewrite_generated_imgs; 27. } 28. 29. location @rewrite_generated_imgs { 30. # Since extension is changed from .jpg to .php, use last. 31. # This will pass the rewritten uri to the .php regex block for execution 32. 33. rewrite "^/generated/imgs/([0-9]+)/(.*).jpg$" /generated/imgs/pic.php?width=$1&height=$1&pic=$2&mode=proportionate last; 34. rewrite "^/generated/imgs/([0-9]+)x([0-9]+)/(.*).jpg$" /generated/imgs/pic.php?width=$1&height=$2&pic=$3&mode=exact last; 35. 36. try_files $uri $uri/index.php; 37. } 38. 39. # The ^~ modifier ensures that .php uris are not caught by the .php catchall regex block 40. location ^~ /banking/ { 41. rewrite "^/banking/sitemap-(\d+).xml.gz$" /banking/sitemap-generator.php?id=$1 last; 42. 43. try_files $uri $uri/index.php; 44. 45. location ~ [^/]\.php$ { 46. rewrite "^/banking/([A-Za-z0-9-]+)/branches.php$" /banking/branches.php?name=$1 break; 47. rewrite "^/banking/([A-Za-z0-9-]+)/atm.php$" /banking/atm.php?name=$1 break; 48. 49. include inc/php-fpm.conf; 50. } 51. } 52. 53. location ^~ /news/ { 54. # Check for file before applying rewrite rules. If not, rewrite 55. # This ensures that rewrite rules are not applied to existing files 56. try_files $uri $uri/index.php @rewrite_news; 57. } 58. 59. location @rewrite_news { 60. # Rewrite non PHP urls in outer location block. 61. # Use last so that it will be passed to the .php regex block after rewrite. 62. # This ensures that the php code is executed and not downloaded as is 63. 64. rewrite "^/news/articles/?$" /news/ permanent; 65. 
rewrite "^/news/articles/a([0-9]+)\.html$" /news/article.php?id=$1&mode=article last; 66. rewrite "^/news/page-([0-9]+)\.html$" /news/index.php?cat=home&page=$1 last; 67. rewrite "^/news/([a-z-]+)/rss.xml$" /news/rss/$1.xml break; 68. 69. try_files $uri $uri/index.php; 70. 71. location ~ [^/]\.php$ { 72. # Rewrite PHP urls 73. # Use break so that the rewrite rules are not applied again 74. 75. rewrite "^/news/print.php$" /news/article.php?mode=print break; 76. 77. include inc/php-fpm.conf; 78. } 79. } 80. 81. ================ 82. # inc/php-fpm.conf 83. 84. # /cms/cms.php is the fallback script that will handle all missing uris 85. try_files $uri /cms/cms.php =404; 86. 87. fastcgi_pass 127.0.0.1:9000; 88. fastcgi_index index.php; 89. include fastcgi_params; Does this look fine? Is it a bad idea to use named locations like this? -------------- next part -------------- An HTML attachment was scrubbed... URL: From joyce at joycebabu.com Fri May 6 22:24:51 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Sat, 7 May 2016 03:54:51 +0530 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> <20160505164424.GC9435@daoine.org> Message-ID: It is working fine on my Mac. But when I try to use it on my CentOS server, it is failing with the error nginx: [emerg] location "[^/]\.php$" cannot be inside the named location "@rewrite_news" Both are running version 1.10.0 :( -------------- next part -------------- An HTML attachment was scrubbed... URL: From joyce at joycebabu.com Fri May 6 22:27:57 2016 From: joyce at joycebabu.com (Joyce Babu) Date: Sat, 7 May 2016 03:57:57 +0530 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> <20160505164424.GC9435@daoine.org> Message-ID: Oops!
The local configuration was *location ^~ @rewrite_news {...}* On Sat, May 7, 2016 at 3:54 AM, Joyce Babu wrote: > It is working fine on my Mac. But when I try to use it on my CentOS > server, it is failing with the error > > nginx: [emerg] location "[^/]\.php$" cannot be inside the named location > "@rewrite_news" > > Both are running version 1.10.0 :( > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Fri May 6 22:29:07 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Fri, 6 May 2016 15:29:07 -0700 Subject: Rewrite before regex location In-Reply-To: References: <20160504220018.GY9435@daoine.org> <20160504231534.GZ9435@daoine.org> <20160505164424.GC9435@daoine.org> Message-ID: See http://nginx.org/en/docs/http/ngx_http_core_module.html#location: 'The "@" prefix defines a named location. Such a location is not used for a regular request processing, but instead used for request redirection. They cannot be nested, and cannot contain nested locations.' On Fri, May 6, 2016 at 3:27 PM, Joyce Babu wrote: > Oops! The local configuration was *location ^~ @rewrite_news {...}* > > On Sat, May 7, 2016 at 3:54 AM, Joyce Babu wrote: > >> It is working fine on my Mac. But when I try to use it on my CentOS >> server, it is failing with the error >> >> nginx: [emerg] location "[^/]\.php$" cannot be inside the named location >> "@rewrite_news" >> >> Both are running version 1.10.0 :( >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
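To make the quoted restriction concrete: a named location can only sit directly inside server{} and cannot contain nested location{} blocks, so PHP handling is usually reached by rewriting with "last", which re-enters location matching. A minimal sketch, loosely adapted from the configuration discussed in this thread (paths and backend address are illustrative):

```nginx
server {
    location /news/ {
        # Fall back to the named location when no file or index matches.
        try_files $uri $uri/index.php @rewrite_news;
    }

    # Named locations may appear only at server{} level; they cannot be
    # nested or contain nested location{} blocks (hence the [emerg] above).
    location @rewrite_news {
        # "last" re-enters location matching, so the rewritten URI is
        # picked up by the regex PHP location below.
        rewrite "^/news/print.php$" /news/article.php?mode=print last;
    }

    location ~ [^/]\.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass 127.0.0.1:9000;
    }
}
```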
URL: From francis at daoine.org Sat May 7 09:04:23 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 7 May 2016 10:04:23 +0100 Subject: Nginx always searches index.php in the wrong directory In-Reply-To: References: Message-ID: <20160507090423.GF9435@daoine.org> On Fri, May 06, 2016 at 03:22:25PM -0400, Issam2204 wrote: Hi there, when I use your configuration and make some guesses about the content and requests, it seems to work for me. That is: if the directory /var/www/html/chatserver does not exist, then /var/www/html/index.php is processed; otherwise /var/www/html/chatserver/index.php is processed. > When I try to access http:// MYSERVER.net/chatserver I always have an error > because Nginx tries to search for the index.php in the root web folder > instead of the actual subfolder. What output do you get from curl -v http://MYSERVER.net/chatserver ? I expect a http 301 with a Location: of https://MYSERVER.net/chatserver Do you get something else? Then do another "curl -v" with whatever Location: you got. Eventually you will get one url which shows the problem that you report. What is that one url? With that specific url, it should be possible to work out which location{} you have told your nginx to use to handle it. > rewrite ^(.*\.php)(/)(.*)$ $1?file=/$3 last; > location / { > location ^~ /.well-known/ { > location ~ [^/]\.php(/|$) { And from there, it may be more obvious where the problem that can be solved is. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun May 8 23:04:50 2016 From: nginx-forum at forum.nginx.org (Babaev) Date: Sun, 08 May 2016 19:04:50 -0400 Subject: =?UTF-8?B?0J/QvtC00LTQtdGA0LbQuNCy0LDQtdGCINC70LggTmdpbngg0L/RgNC+0LfRgNCw?= =?UTF-8?B?0YfQvdGL0Lkg0L/RgNC+0LrRgdC4Pw==?= Message-ID: <16fd80efd2aed19711e841f3ff0d03a3.NginxMailingListEnglish@forum.nginx.org> ???????????? ?? Nginx ?????????? ??????? ??????? ????? ????? ?????: - ????????????? ?????? ?? HTTP ?????????, ???????? ????? 
?????/ - Nginx ?????????? ????? ???? ???? ?????? - ?????????? ????????????? ??????(???-????) ???????! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266706,266706#msg-266706 From nginx-forum at forum.nginx.org Mon May 9 08:07:23 2016 From: nginx-forum at forum.nginx.org (Nedeos) Date: Mon, 09 May 2016 04:07:23 -0400 Subject: nginx 1.11.0: client sent stream with data before settings were acknowledged while processing HTTP/2 connection In-Reply-To: <4bccd7db37090aa1eb89813d5c1181e7.NginxMailingListEnglish@forum.nginx.org> References: <4bccd7db37090aa1eb89813d5c1181e7.NginxMailingListEnglish@forum.nginx.org> Message-ID: This also happens with okhttp 3.2.0 (Android). Last working nginx version is 1.9.13, There I use basic authorization and add an Auth header with a network interceptor with credentials. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266525,266707#msg-266707 From pankajitbhu at gmail.com Mon May 9 09:04:13 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Mon, 9 May 2016 14:34:13 +0530 Subject: (52) Empty reply from server In-Reply-To: <3210444.EYQ3JPFaF7@vbart-workstation> References: <1927484.Lg9YNvfiLY@vbart-workstation> <3210444.EYQ3JPFaF7@vbart-workstation> Message-ID: Hi, Its means nginx do not have any API as other server having to set header and get header ? very strange ... I need to write my own module to read and write cookies values? My module is in written in C programming language. On Thu, May 5, 2016 at 7:32 PM, Valentin V. Bartenev wrote: > On Thursday 05 May 2016 10:27:46 Pankaj Chaudhary wrote: > > Hi, > > > > thank you! > > My module is basically for resource protection. > > I have already running my module on other servers also. > > My module follow below steps. > > -Generate cookie and write in response header > > -read from header when cookie is needed > > > > IIS have API setheader() to set header and getheader() to get header. 
> > Apache have apr_table_get() to get value from header and apr_table_set() > to > > set the value in header. > > > > Do we have similar APIs in nginx? > > > > How i can achieve same kind of behavior in nginx? > > > [..] > > If you want to set a response header, then should look to the > source of ngx_http_headers_module, which is responsible for > adding headers to output. > > For reading cookies you can look to the $http_cookie variable > implementation. > > Please, use "grep" for search and read the code. > > You can implement your module a way easier if you just provide > a directive, that can use the variable to read some value, and > adds a variable to set the value. > > Then it can be used like this: > > protection_value $cookie_key; > > add_header Set-Cookie "key=$protection_value"; > > You can also check the ngx_http_secure_link_module: > http://nginx.org/en/docs/http/ngx_http_secure_link_module.html > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 9 09:15:58 2016 From: nginx-forum at forum.nginx.org (locojohn) Date: Mon, 09 May 2016 05:15:58 -0400 Subject: Server setup consultant In-Reply-To: References: Message-ID: <3c6edb1e2df96abd9c694c07d8b89bb4.NginxMailingListEnglish@forum.nginx.org> Hello Giulio, I can help you out with nginx configuration, optimisation and security auditing. I was setting up nginx-based servers and converting from Apache configurations since 2009. I also have broad expertise with different linux distributions. If interested, mail me at loco at andrews.lv. 
Andrejs Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266325,266709#msg-266709 From medvedev.yp at gmail.com Mon May 9 09:40:19 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Mon, 09 May 2016 12:40:19 +0300 Subject: =?UTF-8?B?UmU6INCf0L7QtNC00LXRgNC20LjQstCw0LXRgiDQu9C4IE5naW54INC/0YDQvtC3?= =?UTF-8?B?0YDQsNGH0L3Ri9C5INC/0YDQvtC60YHQuD8=?= References: <16fd80efd2aed19711e841f3ff0d03a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: ??????? ?????, ??? ????? ????????? nginx ??? reverse proxy ?????????? ? ????? ASUS -------- ???????? ????????? -------- ???????????:Babaev ????????????:Mon, 09 May 2016 02:04:50 +0300 ??????????:nginx at nginx.org ????:???????????? ?? Nginx ?????????? ??????? >???????????? ?? Nginx ?????????? ??????? ??????? ????? ????? ?????: > >- ????????????? ?????? ?? HTTP ?????????, ???????? ????? ?????/ >- Nginx ?????????? ????? ???? ???? ?????? >- ?????????? ????????????? ??????(???-????) > >???????! > >Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266706,266706#msg-266706 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From reallfqq-nginx at yahoo.fr Mon May 9 09:50:12 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Mon, 9 May 2016 11:50:12 +0200 Subject: =?UTF-8?B?UmU6INCf0L7QtNC00LXRgNC20LjQstCw0LXRgiDQu9C4IE5naW54INC/0YDQvtC3?= =?UTF-8?B?0YDQsNGH0L3Ri9C5INC/0YDQvtC60YHQuD8=?= In-Reply-To: References: <16fd80efd2aed19711e841f3ff0d03a3.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, This Mailing List is intended to be written to in english. You have a russian ML available dedicated for its use: http://nginx.org/en/support.html ?Have a nice day,? --- *B. R.* 2016-05-09 11:40 GMT+02:00 Yuriy Medvedev : > ??????? ?????, ??? ????? ????????? nginx ??? reverse proxy > > ?????????? ? ????? ASUS > > -------- ???????? ????????? 
-------- > ???????????:Babaev > ????????????:Mon, 09 May 2016 02:04:50 +0300 > ??????????:nginx at nginx.org > ????:???????????? ?? Nginx ?????????? ??????? > > ???????????? ?? Nginx ?????????? ??????? ??????? ????? ????? ?????: > > - ????????????? ?????? ?? HTTP ?????????, ???????? ????? ?????/ > - Nginx ?????????? ????? ???? ???? ?????? > - ?????????? ????????????? ??????(???-????) > > ???????! > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,266706,266706#msg-266706 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Mon May 9 11:53:07 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Mon, 09 May 2016 14:53:07 +0300 Subject: Server setup consultant References: <3c6edb1e2df96abd9c694c07d8b89bb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: You can write to me. My Skype kenny-opennix ?????????? ? ????? ASUS -------- ???????? ????????? -------- ???????????:locojohn ????????????:Mon, 09 May 2016 12:15:58 +0300 ??????????:nginx at nginx.org ????:Re: Server setup consultant >Hello Giulio, > >I can help you out with nginx configuration, optimisation and security >auditing. I was setting up nginx-based servers and converting from Apache >configurations since 2009. I also have broad expertise with different linux >distributions. > >If interested, mail me at loco at andrews.lv. > >Andrejs > >Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266325,266709#msg-266709 > >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Mon May 9 12:39:31 2016 From: nginx-forum at forum.nginx.org (yogeshorai) Date: Mon, 09 May 2016 08:39:31 -0400 Subject: =?UTF-8?Q?Range_request_with_Accept-Encoding=3A=E2=80=9Cgzip=2Cdeflate=22_?= =?UTF-8?Q?for_*=2Epdf=2E*_file-name_pattern_returns_416?= Message-ID: <0012ae3abde6d6952c4ddcf684c43d49.NginxMailingListEnglish@forum.nginx.org> We are facing this weird behavior wherein if a range request with Accept-Encoding:?gzip,deflate" is called for a url having .pdf as part of path, nginx would throw 416 even if the range is well within overall size of source file Also this happens for Range request "Range: bytes=20-" and above while same would work if its less that 20 bytes as start of range eg "Range: bytes=19-" Nginx Version : 1.8.0 Sample curl curl -v --user "xxx" -H "Host:yyy.com" --compressed -H "Range: bytes=20-" https://yyy.com/api/v1/fs-content/Shared/0test/latest.pdf.test.txt Response : > GET /api/v1/fs-content/Shared/0test/latest.pdf.test.txt HTTP/1.1 > User-Agent: curl/7.35.0 > Accept: */* > Accept-Encoding: deflate, gzip > Range: bytes=20- > < HTTP/1.1 416 Requested Range Not Satisfiable < Date: Mon, 09 May 2016 12:34:03 GMT < Content-Type: text/html; charset=iso-8859-1 < Transfer-Encoding: chunked < X-Content-Type-Options: nosniff < X-XSS-Protection: 1; mode=block < Strict-Transport-Security: max-age=31536000 < 200 OK


None of the range-specifier values in the Range request-header field overlap the current extent of the selected resource.

Sample access log 172.26.7.15 - [08/May/2016:23:03:20 -0700] "GET /public-api/v1/fs-content-download/Shared/0test/latest.pdf.test.txt HTTP/1.1" 416 466 "qaaa.egnyte.com" "-" "curl/7.19.7 (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 libssh2/1.4.2" "deflate, gzip" "D0536872:4496_0A19800C:01BB_--_3B70C9" "-:bytes=20-" "" "-" "text/html; charset=iso-8859-1" "" "" "" "" "" "" "0.023" "0.023" "BYPASS" Note : Earlier we had an issue with If-Range header value format to match with Etag value and had to add a patch mentioned here http://permalink.gmane.org/gmane.comp.web.nginx.devel/2579 Kindly suggest. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266713,266713#msg-266713 From jiangmuhui at gmail.com Mon May 9 13:27:31 2016 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Mon, 9 May 2016 21:27:31 +0800 Subject: HTTP2 window update and priority Message-ID: Hi According to my view on the source code of Nginx 1.9.15, I noticed some observation below: 1. when there is no enough window, nginx won't send back the response frame, though according to the log, nginx said it has sent the headers frame out, but client won't receive it. This is not right according to RFC 7540 because window update will only limit the data frames 2.when there is no enough window, nginx will first store the request in the queue, when it received a window update, nginx will start fetch the streams stored in the queue use ngx_queue_head within a while loop(line 2161 in ngx_http_v2.c). I noticed the stream order I got is the same as the sending order with no relationship of the dependency tree, then nginx will create the corresponding data frames according to the stream order. 
But I noticed that whether nginx puts the frame into the output queue depends on the value of wev->ready. A problem here is that when the requested file is big, wev->ready is set to 0, and only then does the priority mechanism work; otherwise the frame is sent out directly without observing the dependency tree. I am still confused about how nginx implements prioritization. Could you please give me some suggestions? A failed priority mechanism example debug log: https://github.com/valour01/LOG/blob/master/example.log Best Regards Muhui Jiang -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Mon May 9 21:52:47 2016 From: ahall at autodist.com (Alex Hall) Date: Mon, 9 May 2016 17:52:47 -0400 Subject: php5-fpm gives unknown script error Message-ID: Hello list, I'm pretty new to Nginx, but I wanted to give it a shot. I like how it is configured--it makes more sense to me than Apache. However, I'm having a hard time getting PHP working. I'm running on Debian 8, and Nginx was installed with apt-get. I've installed php5-fpm, and can view PHP files in a browser. The problem is that, during the installation of OSTicket, I got an error. I looked at Nginx's log, and found a FastCGI error: FastCGI sent an stderr: unknown script while reading response header from upstream From what I've dug up thus far, this indicates a failure to resolve the path to a PHP script. My suspicion is that OSTicket is calling a script in a folder somewhere, and that path isn't resolving.
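For comparison with the configuration shown below, this is the usual shape of a working PHP-FPM location block. It is only a sketch: the root and socket paths are placeholders, and the socket must match the `listen` setting of the php-fpm pool. The key point for this class of error is that `$document_root$fastcgi_script_name` must resolve to a file that actually exists from php-fpm's point of view:

```nginx
server {
    root /var/www/osticket;   # document root, used to build SCRIPT_FILENAME
    index index.php;

    location ~ \.php$ {
        # Return 404 for .php URIs with no matching file instead of passing
        # them to php-fpm, which would otherwise report an unknown script.
        try_files $uri =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/tmp/php5-fpm.sock;
    }
}
```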
My location for php files looks like this: server { #all the usual, much (but none of the SSL stuff) copied from # https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ location ~ \.php$ { fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; fastcgi_param PATH_INFO $path_info; fastcgi_pass unix:///tmp/php5-fpm.sock; } } I don't claim to understand what all this stuff is doing, but I hoped this might work. I know OSTicket doesn't officially support Nginx yet, but since other people are having this problem for all kinds of scripts and situations, I thought maybe it was a more general problem I'd be able to fix. If anyone has suggestions, please let me know. If you need to see more configuration files, I can try to do that. The setup here is awkward (SSH from Windows to Debian, with no SCP support currently and no way to copy from or to the SSH session). I can copy files, but it's a bit of a process. Anyway, thanks in advance for any ideas. -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 10 08:55:30 2016 From: nginx-forum at forum.nginx.org (locojohn) Date: Tue, 10 May 2016 04:55:30 -0400 Subject: nginx 1.11.0: client sent stream with data before settings were acknowledged while processing HTTP/2 connection In-Reply-To: <7340515.Y7bN5ZZOCp@vbart-laptop> References: <7340515.Y7bN5ZZOCp@vbart-laptop> Message-ID: <572bc701476205d3c4935eaa3bc4d6a8.NginxMailingListEnglish@forum.nginx.org> Valentin V. Bartenev Wrote: ------------------------------------------------------- > This issue should be reported to Safari. It appears that it doesn't > handle refused streams. > > I'm going to make a workaround, but it will take time. Just to clarify: this issue also affects Firefox 45. I suppose it's nginx bug, because we have never experienced it before the update. 
Andrejs Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266525,266725#msg-266725 From nginx at netdirect.fr Tue May 10 10:26:59 2016 From: nginx at netdirect.fr (Artur) Date: Tue, 10 May 2016 12:26:59 +0200 Subject: Nginx as reverse proxy scalability Message-ID: Hello, I'm currently working on nginx limits as a reverse proxy on a Debian box. My current setup is a nginx configured as a http/https web server (for static content) and a reverse proxy for node.js processes on the same server and in future on other Debian boxes. I was unable to see (while reading the documentation) real hard limitations for nginx in this setup excepting ephemeral ports exhaustion. It may be a concern as Node.js applications usually open a websocket that is connected as long as a user stays connected to the application. If I understood everything correctly it means that nginx in this setup will not be able to manage more than about 64k clients connections. Am I right ? What can be done if I would like to go over this 64k limit ? Could you please suggest a solution ? -- Best regards, Artur. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue May 10 12:21:44 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 10 May 2016 13:21:44 +0100 Subject: php5-fpm gives unknown script error In-Reply-To: References: Message-ID: <20160510122144.GJ9435@daoine.org> On Mon, May 09, 2016 at 05:52:47PM -0400, Alex Hall wrote: Hi there, without knowing anything about OSTicket... > FastCGI sent an stderr: unknown script while reading response header from > upstream That usually means that the fastcgi server could not find the file that it was looking for, based on what nginx told it. So: what http request do you make? What (php) file do you want the fastcgi server to use for this request? > The setup here is awkward (SSH > from Windows to Debian, with no SCP support currently and no way to copy > from or to the SSH session). 
I can copy files, but it's a bit of a process. As an aside: if you can copy/paste words between your ssh session and your windows command prompt, you can use "tar" and "base64", possibly also with "gzip", to copy files from one to the other. It can be annoying for large files, but anything that compresses down to some tens of kB is usually quite quick and easy. Cheers, f -- Francis Daly francis at daoine.org From vbart at nginx.com Tue May 10 12:49:22 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 10 May 2016 15:49:22 +0300 Subject: nginx 1.11.0: client sent stream with data before settings were acknowledged while processing HTTP/2 connection In-Reply-To: <572bc701476205d3c4935eaa3bc4d6a8.NginxMailingListEnglish@forum.nginx.org> References: <7340515.Y7bN5ZZOCp@vbart-laptop> <572bc701476205d3c4935eaa3bc4d6a8.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2976654.TLdQqyrfXd@vbart-workstation> On Tuesday 10 May 2016 04:55:30 locojohn wrote: > Valentin V. Bartenev Wrote: > ------------------------------------------------------- > > > This issue should be reported to Safari. It appears that it doesn't > > handle refused streams. > > > > I'm going to make a workaround, but it will take time. > > Just to clarify: this issue also affects Firefox 45. I suppose it's nginx > bug, because we have never experienced it before the update. > [..] Are you sure about Firefox 45? There is a ticket about Firefox 46: https://bugzilla.mozilla.org/show_bug.cgi?id=1268775 wbr, Valentin V. Bartenev From vbart at nginx.com Tue May 10 13:15:02 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 10 May 2016 16:15:02 +0300 Subject: HTTP2 window update and priority In-Reply-To: References: Message-ID: <2900116.NcWKNIfO8I@vbart-workstation> On Monday 09 May 2016 21:27:31 Muhui Jiang wrote: > Hi > > According to my view on the source code of Nginx 1.9.15, I noticed some > observation below: > > 1. 
when there is no enough window, nginx won't send back the response > frame, though according to the log, nginx said it has sent the headers > frame out, but client won't receive it. This is not right according to RFC > 7540 because window update will only limit the data frames > You've misread the source code. HTTP/2 windows are handled in ngx_http_v2_send_chain(), which is only called for response body data. > 2.when there is no enough window, nginx will first store the request in the > queue, when it received a window update, nginx will start fetch the streams > stored in the queue use ngx_queue_head within a while loop(line 2161 in > ngx_http_v2.c). I noticed the stream order I got is the same as the sending > order with no relationship of the dependency tree, then nginx will create > the corresponding data frames according to the stream order. But I noticed > whether nginx will put the frame into the output queue depends on the > value of wev->ready, here a problem is that I noticed when the request > files is big, the wev->ready is set 0, only when this happens, the priority > mechanism can work otherwise the frame will be sent out directly with no > dependency observation. I am still confused on how does nginx implement the > priority. Could you please give me some suggestions. > > A failed priority mechanism example debug log: > https://github.com/valour01/LOG/blob/master/example.log The prioritization has effect when there is concurrency for connection. In your case from nginx point of view there was only one stream in processing at any moment. The previous stream was completed before the new one has been received. wbr, Valentin V. 
Bartenev From ahall at autodist.com Tue May 10 13:21:22 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 10 May 2016 09:21:22 -0400 Subject: php5-fpm gives unknown script error In-Reply-To: <20160510122144.GJ9435@daoine.org> References: <20160510122144.GJ9435@daoine.org> Message-ID: On Tue, May 10, 2016 at 8:21 AM, Francis Daly wrote: > On Mon, May 09, 2016 at 05:52:47PM -0400, Alex Hall wrote: > > Hi there, > > without knowing anything about OSTicket... > > > FastCGI sent an stderr: unknown script while reading response header from > > upstream > > That usually means that the fastcgi server could not find the file that > it was looking for, based on what nginx told it. > > So: what http request do you make? > > What (php) file do you want the fastcgi server to use for this request? > Looks like the request is for js/jquery.[number].js, ROOT/PATHcss/flags.css, and other files. I'm not sure which php file it is, since OSTicket is running its own installer. Well, the main file is install.php according to the "referrer" of the request, but I don't know if that's the whole story. > > > The setup here is awkward (SSH > > from Windows to Debian, with no SCP support currently and no way to copy > > from or to the SSH session). I can copy files, but it's a bit of a > process. > > As an aside: if you can copy/paste words between your ssh session and > your windows command prompt, you can use "tar" and "base64", possibly > also with "gzip", to copy files from one to the other. > > It can be annoying for large files, but anything that compresses down > to some tens of kB is usually quite quick and easy. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... 
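The tar-and-base64 copy trick described above can be demonstrated end to end in a single shell session (GNU coreutils assumed; on BSD/macOS, `base64 -d` is spelled `base64 -D`):

```shell
# Simulate the two machines with two directories.
mkdir -p /tmp/demo-src /tmp/demo-dst
printf 'server_name example.com;\n' > /tmp/demo-src/site.conf

# "Source" side: archive + compress + encode into paste-safe text.
tar -C /tmp/demo-src -czf - site.conf | base64 > /tmp/encoded.txt

# "Destination" side: decode + unpack what was pasted.
base64 -d /tmp/encoded.txt | tar -C /tmp/demo-dst -xzf -

# Verify the round trip.
cmp /tmp/demo-src/site.conf /tmp/demo-dst/site.conf && echo "copied intact"
```

In practice you copy the base64 text through the terminal clipboard instead of a file; the pipeline itself is the same.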
URL: From ahall at autodist.com Tue May 10 13:42:26 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 10 May 2016 09:42:26 -0400 Subject: Global denial for certain IPs or agents? Message-ID: Hi all, I've got Nginx on a Debian server, hosting two sites (two subdomains of my work's website). I want to limit both, and any future subdomains, to only intranet addresses. I also saw access logs this morning from a Chinese web spider which I want to block. I know how to do this, but how can I do it globally? Currently I have to put the rules in each site's configuration file, which is duplicating, which I'd like to avoid. I tried adding this to the main conf file, but I'm not sure what to put for the "listen" and other variables given that this isn't a server, it's a rule I want applied to all servers. Is this doable? If so, what's the process? Thanks. -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 10 13:46:50 2016 From: nginx-forum at forum.nginx.org (mex) Date: Tue, 10 May 2016 09:46:50 -0400 Subject: Global denial for certain IPs or agents? In-Reply-To: References: Message-ID: Hi Alex this might be an inspiration for your task: https://www.howtoforge.com/nginx-how-to-block-visitors-by-country-with-the-geoip-module-debian-ubuntu cheers, mex Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266738,266739#msg-266739 From ahall at autodist.com Tue May 10 14:03:06 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 10 May 2016 10:03:06 -0400 Subject: Global denial for certain IPs or agents? In-Reply-To: References: Message-ID: Thanks. That page says that, to do the actual returning of the 4xx error, you must go go your site's configuration, not the global conf file. Am I reading that right? 
Is the easiest way to set my own variable in the main conf file, based on IP, then just do a check for that variable in each site's file? Or is there another way? On Tue, May 10, 2016 at 9:46 AM, mex wrote: > Hi Alex > > this might be an inspiration for your task: > > https://www.howtoforge.com/nginx-how-to-block-visitors-by-country-with-the-geoip-module-debian-ubuntu > > > > cheers, > > > mex > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,266738,266739#msg-266739 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jiangmuhui at gmail.com Tue May 10 14:12:59 2016 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Tue, 10 May 2016 22:12:59 +0800 Subject: HTTP2 window update and priority In-Reply-To: <2900116.NcWKNIfO8I@vbart-workstation> References: <2900116.NcWKNIfO8I@vbart-workstation> Message-ID: Hi >You've misread the source code. HTTP/2 windows are handled in >ngx_http_v2_send_chain(), which is only called for response body >data. when the window is blocked, Nginx logs that the headers frame has been sent out, however I cannot receive any response headers. The same client tried on h2o and I can receive the response headers. >The prioritization has effect when there is concurrency for >connection. In your case from nginx point of view there was >only one stream in processing at any moment. The previous >stream was completed before the new one has been received. Let me explain my strategy in detail. I used some requests(stream id 1-15) to consume the window (65535) And then block the window without sending any window update frame(I have sent a very large stream update before). Then I send the next 6 streams with different priorities. At this moment, the 6 streams should be blocked. 
Nginx should have enough time to handle the priority. Then I send a very large window update frame, but nginx will process the stream one by one with no priority observation. What I want to say is, only when the file is really large then the priority can be observed. But I have give enough time to Nginx by block the request. Nginx should handle the priority though the file is really small. If you need more information, please tell me. Many Thanks Best Regards Muhui Jiang 2016-05-10 21:15 GMT+08:00 Valentin V. Bartenev : > On Monday 09 May 2016 21:27:31 Muhui Jiang wrote: > > Hi > > > > According to my view on the source code of Nginx 1.9.15, I noticed some > > observation below: > > > > 1. when there is no enough window, nginx won't send back the response > > frame, though according to the log, nginx said it has sent the headers > > frame out, but client won't receive it. This is not right according to > RFC > > 7540 because window update will only limit the data frames > > > > You've misread the source code. HTTP/2 windows are handled in > ngx_http_v2_send_chain(), which is only called for response body > data. > > > > > 2.when there is no enough window, nginx will first store the request in > the > > queue, when it received a window update, nginx will start fetch the > streams > > stored in the queue use ngx_queue_head within a while loop(line 2161 in > > ngx_http_v2.c). I noticed the stream order I got is the same as the > sending > > order with no relationship of the dependency tree, then nginx will create > > the corresponding data frames according to the stream order. But I > noticed > > whether nginx will put the frame into the output queue depends on the > > value of wev->ready, here a problem is that I noticed when the request > > files is big, the wev->ready is set 0, only when this happens, the > priority > > mechanism can work otherwise the frame will be sent out directly with no > > dependency observation. 
I am still confused on how does nginx implement > the > > priority. Could you please give me some suggestions. > > > > A failed priority mechanism example debug log: > > https://github.com/valour01/LOG/blob/master/example.log > > The prioritization has effect when there is concurrency for > connection. In your case from nginx point of view there was > only one stream in processing at any moment. The previous > stream was completed before the new one has been received. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 10 14:33:41 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 10 May 2016 17:33:41 +0300 Subject: HTTP2 window update and priority In-Reply-To: References: <2900116.NcWKNIfO8I@vbart-workstation> Message-ID: <2934956.PQf68C5OW6@vbart-workstation> On Tuesday 10 May 2016 22:12:59 Muhui Jiang wrote: > Hi > > >You've misread the source code. HTTP/2 windows are handled in > >ngx_http_v2_send_chain(), which is only called for response body > >data. > > when the window is blocked, Nginx logs that the headers frame has been sent > out, however I cannot receive any response headers. The same client tried > on h2o and I can receive the response headers. > In addition, nginx uses SSL buffer to optimize record sizes. You can tune it using the "ssl_buffer_size" directive: http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size > >The prioritization has effect when there is concurrency for > >connection. In your case from nginx point of view there was > >only one stream in processing at any moment. The previous > >stream was completed before the new one has been received. > > Let me explain my strategy in detail. 
I used some requests(stream id 1-15) > to consume the window (65535) And then block the window without sending any > window update frame(I have sent a very large stream update before). Then I > send the next 6 streams with different priorities. At this moment, the 6 > streams should be blocked. Nginx should have enough time to handle the > priority. Then I send a very large window update frame, but nginx will > process the stream one by one with no priority observation. What I want to > say is, only when the file is really large then the priority can be > observed. But I have give enough time to Nginx by block the request. Nginx > should handle the priority though the file is really small. If you need > more information, please tell me. Many Thanks > [..] Ok, now I understand your case. You are right, currently in this particular case priorities aren't handled. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue May 10 14:41:05 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 May 2016 17:41:05 +0300 Subject: =?UTF-8?Q?Re=3A_Range_request_with_Accept-Encoding=3A=E2=80=9Cgzip=2Cdefla?= =?UTF-8?Q?te=22_for_*=2Epdf=2E*_file-name_pattern_returns_416?= In-Reply-To: <0012ae3abde6d6952c4ddcf684c43d49.NginxMailingListEnglish@forum.nginx.org> References: <0012ae3abde6d6952c4ddcf684c43d49.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160510144105.GI36620@mdounin.ru> Hello! 
On Mon, May 09, 2016 at 08:39:31AM -0400, yogeshorai wrote: > We are facing this weird behavior wherein if a range request with > Accept-Encoding:?gzip,deflate" is called for a url having .pdf as part of > path, nginx would throw 416 even if the range is well within overall size of > source file > > Also this happens for Range request "Range: bytes=20-" and above while same > would work if its less that 20 bytes as start of range eg "Range: > bytes=19-" > > Nginx Version : 1.8.0 > > Sample curl > > curl -v --user "xxx" -H "Host:yyy.com" --compressed -H "Range: bytes=20-" > https://yyy.com/api/v1/fs-content/Shared/0test/latest.pdf.test.txt > > Response : > > GET /api/v1/fs-content/Shared/0test/latest.pdf.test.txt HTTP/1.1 > > User-Agent: curl/7.35.0 > > Accept: */* > > Accept-Encoding: deflate, gzip > > Range: bytes=20- > > > < HTTP/1.1 416 Requested Range Not Satisfiable > < Date: Mon, 09 May 2016 12:34:03 GMT > < Content-Type: text/html; charset=iso-8859-1 > < Transfer-Encoding: chunked > < X-Content-Type-Options: nosniff > < X-XSS-Protection: 1; mode=block > < Strict-Transport-Security: max-age=31536000 > < > > > 200 OK > >

> OK
> None of the range-specifier values in the Range
> request-header field overlap the current extent
> of the selected resource.

> > > > Sample access log > > 172.26.7.15 - [08/May/2016:23:03:20 -0700] "GET > /public-api/v1/fs-content-download/Shared/0test/latest.pdf.test.txt > HTTP/1.1" 416 466 "qaaa.egnyte.com" "-" "curl/7.19.7 > (x86_64-redhat-linux-gnu) libcurl/7.19.7 NSS/3.14.0.0 zlib/1.2.3 libidn/1.18 > libssh2/1.4.2" "deflate, gzip" "D0536872:4496_0A19800C:01BB_--_3B70C9" > "-:bytes=20-" "" "-" "text/html; charset=iso-8859-1" "" "" "" "" "" "" > "0.023" "0.023" "BYPASS" The response is returned by your backend, not by nginx. Try looking at your backend instead. > Note : > Earlier we had an issue with If-Range header value format to match with Etag > value and had to add a patch mentioned here > http://permalink.gmane.org/gmane.comp.web.nginx.devel/2579 The patch in question is only needed if your backend is broken and doesn't follow HTTP standard. You may consider fixing your backend instead. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue May 10 14:44:21 2016 From: nginx-forum at forum.nginx.org (mex) Date: Tue, 10 May 2016 10:44:21 -0400 Subject: Global denial for certain IPs or agents? In-Reply-To: References: Message-ID: Hi Alex, you can do it that way or use something like this inside your server {} block: allow IP1; allow IP2; allow IP3; deny all; http://nginx.org/en/docs/http/ngx_http_access_module.html#allow Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266738,266750#msg-266750 From mdounin at mdounin.ru Tue May 10 15:04:13 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 May 2016 18:04:13 +0300 Subject: Nginx as reverse proxy scalability In-Reply-To: References: Message-ID: <20160510150413.GJ36620@mdounin.ru> Hello! On Tue, May 10, 2016 at 12:26:59PM +0200, Artur wrote: > I'm currently working on nginx limits as a reverse proxy on a Debian box. 
> My current setup is a nginx configured as a http/https web server (for > static content) and a reverse proxy for node.js processes on the same > server and in future on other Debian boxes. > I was unable to see (while reading the documentation) real hard > limitations for nginx in this setup excepting ephemeral ports exhaustion. > It may be a concern as Node.js applications usually open a websocket > that is connected as long as a user stays connected to the application. > If I understood everything correctly it means that nginx in this setup > will not be able to manage more than about 64k clients connections. > Am I right ? What can be done if I would like to go over this 64k limit > ? Could you please suggest a solution ? As long as you are using TCP/IP and have only one backend (ip + port), and only one local address on nginx side, then you are limited by the number of local ports nginx can use. Theoretical limit is 64k, practical one is usually smaller - on Linux systems it depends on net.ipv4.ip_local_port_range. Most natural solution is to add more backends. Under normal conditions you will add more backend servers as your system will grow, so you'll never hit the problem in the first place. If you need to handle more than 64k connections to a single backend server, consider listening on multiple addresses in your backend application (e.g., listening on multiple ports). Other available solutions are: - use UNIX domain sockets (this works when you have everything on a single host); - add more local addresses on nginx side and use proxy_bind to balance users between these addresses. 
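Maxim's last suggestion (more local addresses on the nginx side, balanced with proxy_bind) can be sketched as a minimal config. This is an illustration only: the 127.0.0.x addresses and backend port are placeholders, and using a variable in proxy_bind requires a sufficiently recent nginx version; on older versions, use statically configured locations with distinct proxy_bind addresses instead.

```nginx
# Sketch: spread clients over two local source addresses so that each
# (local address, backend) pair has its own ephemeral port range.
# Goes at http level; addresses below are hypothetical.
split_clients "$remote_addr$remote_port" $bind_ip {
    50%   127.0.0.2;
    *     127.0.0.3;
}

server {
    listen 80;

    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_bind $bind_ip;
    }
}
```

With two local addresses, the theoretical limit of ~64k connections per (source address, backend) pair roughly doubles; add more entries to the split_clients map to scale further.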
-- Maxim Dounin http://nginx.org/ From jiangmuhui at gmail.com Tue May 10 15:11:42 2016 From: jiangmuhui at gmail.com (Muhui Jiang) Date: Tue, 10 May 2016 23:11:42 +0800 Subject: HTTP2 window update and priority In-Reply-To: <2934956.PQf68C5OW6@vbart-workstation> References: <2900116.NcWKNIfO8I@vbart-workstation> <2934956.PQf68C5OW6@vbart-workstation> Message-ID: Hi

To say more, I may need your comment on my understanding of nginx:

1. Is the reason I cannot receive the response headers frame that the response headers are stored in the SSL buffer?

2. nginx fetches the blocked requests from the queue when it receives a window update frame (is the request order in the queue the same as my sending order?).

3. nginx then handles the requests one by one in non-blocking mode. I noticed that when the requested file is small, nginx creates all the data frames, pushes them into the output frame queue, and sends them out (the handling order is the same as my sending order). When the requested file is large (say 10 MB), nginx creates only part of the data frames first and then creates the others (the handling order is related to the dependency tree). So:

- When a file is large, how many data frames are generated at once before the others are generated, and is that related to the SSL buffer size and the maximum frame size? And when does nginx send out the frames and start a new SSL buffer?

- I dug into the debug log and noticed that where priority handling actually shows up is the order in which the data frames are generated and chained together before being sent out. I cannot find any priority handling in the code that generates the data frames and chains them. It would be great if you could explain the strategy or point me to the relevant source code.

- As in the case I mentioned above, when the requested file is small, clients cannot get the expected response order. How large must a file be before nginx handles the streams concurrently?
I think in a real environment many js and css files are not that big, so nginx cannot apply the priority mechanism in those cases. I suggest that nginx should reorder the request queue itself (I noticed the request queue keeps the receiving order) rather than applying priority only while generating data frames; by then it is too late. That way, even when the files are small, the response order will still follow the dependency tree.

Best Regards
Muhui Jiang

2016-05-10 22:33 GMT+08:00 Valentin V. Bartenev :
> On Tuesday 10 May 2016 22:12:59 Muhui Jiang wrote:
> > Hi
> >
> > >You've misread the source code. HTTP/2 windows are handled in
> > >ngx_http_v2_send_chain(), which is only called for response body
> > >data.
> >
> > when the window is blocked, Nginx logs that the headers frame has been
> > sent out, however I cannot receive any response headers. The same client
> > tried on h2o and I can receive the response headers.
> >
> In addition, nginx uses SSL buffer to optimize record sizes.
> You can tune it using the "ssl_buffer_size" directive:
> http://nginx.org/en/docs/http/ngx_http_ssl_module.html#ssl_buffer_size
>
> > >The prioritization has effect when there is concurrency for
> > >connection. In your case from nginx point of view there was
> > >only one stream in processing at any moment. The previous
> > >stream was completed before the new one has been received.
> >
> > Let me explain my strategy in detail. I used some requests (stream id
> > 1-15) to consume the window (65535) and then block the window without
> > sending any window update frame (I have sent a very large stream update
> > before). Then I send the next 6 streams with different priorities. At
> > this moment, the 6 streams should be blocked. Nginx should have enough
> > time to handle the priority. Then I send a very large window update
> > frame, but nginx will
> > process the stream one by one with no priority observation.
What I want > to > > say is, only when the file is really large then the priority can be > > observed. But I have give enough time to Nginx by block the request. > Nginx > > should handle the priority though the file is really small. If you need > > more information, please tell me. Many Thanks > > > [..] > > Ok, now I understand your case. You are right, currently in this > particular case priorities aren't handled. > > wbr, Valentin V. Bartenev > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From djczaski at gmail.com Tue May 10 15:13:10 2016 From: djczaski at gmail.com (Danomi Czaski) Date: Tue, 10 May 2016 11:13:10 -0400 Subject: SO_BINDTODEVICE Message-ID: My device has multiple interfaces and supports dynamic ips. SO_BINDTODEVICE looks like it would be used to specify a device in the listen statement instead of having to update every IP change. I see someone submitted a patch years ago that wasn't accepted and there was no follow on. Is there any particular reason other than this probably isn't a common use case? Is this something that could be added in the future? https://forum.nginx.org/read.php?29,234862 From mellon at fugue.com Tue May 10 15:16:02 2016 From: mellon at fugue.com (Ted Lemon) Date: Tue, 10 May 2016 11:16:02 -0400 Subject: SO_BINDTODEVICE In-Reply-To: References: Message-ID: SO_BINDTODEVICE was originally put in to allow the DHCP server to tell on which interface a packet had arrived. I don't see any reason why it couldn't be used the way you describe, but I am under the impression that it is somewhat deprecated, and that may be why there was no action on that patch. Your best bet would be to ask the Linux network folks. On Tue, May 10, 2016 at 11:13 AM, Danomi Czaski wrote: > My device has multiple interfaces and supports dynamic ips. 
> SO_BINDTODEVICE looks like it would be used to specify a device in the > listen statement instead of having to update every IP change. > > I see someone submitted a patch years ago that wasn't accepted and > there was no follow on. Is there any particular reason other than this > probably isn't a common use case? Is this something that could be > added in the future? > > https://forum.nginx.org/read.php?29,234862 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maxim at nginx.com Tue May 10 15:27:09 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 10 May 2016 18:27:09 +0300 Subject: Nginx as reverse proxy scalability In-Reply-To: <20160510150413.GJ36620@mdounin.ru> References: <20160510150413.GJ36620@mdounin.ru> Message-ID: <19fe1c1d-5dff-7062-4b61-ab37c3a7f153@nginx.com> On 5/10/16 6:04 PM, Maxim Dounin wrote: > Hello! > > On Tue, May 10, 2016 at 12:26:59PM +0200, Artur wrote: > >> I'm currently working on nginx limits as a reverse proxy on a Debian box. >> My current setup is a nginx configured as a http/https web server (for >> static content) and a reverse proxy for node.js processes on the same >> server and in future on other Debian boxes. >> I was unable to see (while reading the documentation) real hard >> limitations for nginx in this setup excepting ephemeral ports exhaustion. >> It may be a concern as Node.js applications usually open a websocket >> that is connected as long as a user stays connected to the application. >> If I understood everything correctly it means that nginx in this setup >> will not be able to manage more than about 64k clients connections. >> Am I right ? What can be done if I would like to go over this 64k limit >> ? Could you please suggest a solution ? 
> > As long as you are using TCP/IP and have only one backend (ip + > port), and only one local address on nginx side, then you are > limited by the number of local ports nginx can use. Theoretical > limit is 64k, practical one is usually smaller - on Linux systems > it depends on net.ipv4.ip_local_port_range. > > Most natural solution is to add more backends. Under normal > conditions you will add more backend servers as your system will > grow, so you'll never hit the problem in the first place. If you > need to handle more than 64k connections to a single backend > server, consider listening on multiple addresses in your backend > application (e.g., listening on multiple ports). > > Other available solutions are: > > - use UNIX domain sockets (this works when you have everything on > a single host); > > - add more local addresses on nginx side and use proxy_bind to > balance users between these addresses. > + https://www.nginx.com/blog/overcoming-ephemeral-port-exhaustion-nginx-plus/ -- Maxim Konovalov From nginx at netdirect.fr Tue May 10 15:37:55 2016 From: nginx at netdirect.fr (Artur) Date: Tue, 10 May 2016 17:37:55 +0200 Subject: Nginx as reverse proxy scalability In-Reply-To: <20160510150413.GJ36620@mdounin.ru> References: <20160510150413.GJ36620@mdounin.ru> Message-ID: Thanks for your answer. Le 10/05/2016 ? 17:04, Maxim Dounin a ?crit : > As long as you are using TCP/IP and have only one backend (ip + > port), and only one local address on nginx side, then you are > limited by the number of local ports nginx can use. I currently have nginx running on the same host that my backends. I have 4 of them listening on different ports on 127.0.0.1. In this situation may I expect 4 times 65000 simultaneous connections ? > - add more local addresses on nginx side and use proxy_bind to > balance users between these addresses. 
Yes, I've seen this, however I didn't catch how to dynamically assign a value to proxy_bind from a pool of IP addresses in nginx (not Nginx Plus). -- Best regards, Artur. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vbart at nginx.com Tue May 10 15:43:04 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 10 May 2016 18:43:04 +0300 Subject: Nginx as reverse proxy scalability In-Reply-To: References: <20160510150413.GJ36620@mdounin.ru> Message-ID: <4722328.iR2E42Mhk9@vbart-workstation> On Tuesday 10 May 2016 17:37:55 Artur wrote: > Thanks for your answer. > > Le 10/05/2016 ? 17:04, Maxim Dounin a ?crit : > > As long as you are using TCP/IP and have only one backend (ip + > > port), and only one local address on nginx side, then you are > > limited by the number of local ports nginx can use. > I currently have nginx running on the same host that my backends. > I have 4 of them listening on different ports on 127.0.0.1. > In this situation may I expect 4 times 65000 simultaneous connections ? > > - add more local addresses on nginx side and use proxy_bind to > > balance users between these addresses. > Yes, I've seen this, however I didn't catch how to dynamically assign a > value to proxy_bind from a pool of IP addresses in nginx (not Nginx Plus). > The all proxy_bind functionality is available in the open source version of nginx. See the docs: http://nginx.org/r/proxy_bind wbr, Valentin V. Bartenev From maxim at nginx.com Tue May 10 15:46:46 2016 From: maxim at nginx.com (Maxim Konovalov) Date: Tue, 10 May 2016 18:46:46 +0300 Subject: Nginx as reverse proxy scalability In-Reply-To: References: <20160510150413.GJ36620@mdounin.ru> Message-ID: <49bc4708-3c16-5adf-6733-17fb56da95e1@nginx.com> On 5/10/16 6:37 PM, Artur wrote: [...] >> - add more local addresses on nginx side and use proxy_bind to >> balance users between these addresses. 
> Yes, I've seen this, however I didn't catch how to dynamically > assign a value to proxy_bind from a pool of IP addresses in nginx > (not Nginx Plus). The blog post I sent has nothing -plus specific in this area. -- Maxim Konovalov From nginx at netdirect.fr Tue May 10 15:49:05 2016 From: nginx at netdirect.fr (Artur) Date: Tue, 10 May 2016 17:49:05 +0200 Subject: Nginx as reverse proxy scalability In-Reply-To: <49bc4708-3c16-5adf-6733-17fb56da95e1@nginx.com> References: <20160510150413.GJ36620@mdounin.ru> <49bc4708-3c16-5adf-6733-17fb56da95e1@nginx.com> Message-ID: <7649552d-2d57-8423-5fb4-8317b92e6499@netdirect.fr> Le 10/05/2016 ? 17:46, Maxim Konovalov a ?crit : > The blog post I sent has nothing -plus specific in this area. That's perfect. Thank you all. -- Best regards, Artur. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue May 10 15:56:21 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 10 May 2016 18:56:21 +0300 Subject: Nginx as reverse proxy scalability In-Reply-To: References: <20160510150413.GJ36620@mdounin.ru> Message-ID: <20160510155620.GK36620@mdounin.ru> Hello! On Tue, May 10, 2016 at 05:37:55PM +0200, Artur wrote: > Thanks for your answer. > > Le 10/05/2016 ? 17:04, Maxim Dounin a ?crit : > > As long as you are using TCP/IP and have only one backend (ip + > > port), and only one local address on nginx side, then you are > > limited by the number of local ports nginx can use. > I currently have nginx running on the same host that my backends. > I have 4 of them listening on different ports on 127.0.0.1. > In this situation may I expect 4 times 65000 simultaneous connections ? Yes. (Note though, that this may not be true on all OSes. And this also won't be true when using proxy_bind, as OS will have to choose a local port before the destination address is known.) > > - add more local addresses on nginx side and use proxy_bind to > > balance users between these addresses. 
> Yes, I've seen this, however I didn't catch how to dynamically assign a
> value to proxy_bind from a pool of IP addresses in nginx (not Nginx Plus).

The blog post as linked by Maxim Konovalov uses the split_clients module, it's not nginx-plus specific. See here for details:

http://nginx.org/en/docs/http/ngx_http_split_clients_module.html

Depending on your particular case, there may be even easier solutions. E.g., if you have two URIs in your application with more or less equal load, you can use two locations with distinct addresses configured statically:

    location /one {
        proxy_pass http://127.0.0.1:8080;
        proxy_bind 127.0.0.2;
    }

    location /two {
        proxy_pass http://127.0.0.1:8080;
        proxy_bind 127.0.0.3;
    }

-- Maxim Dounin http://nginx.org/ From ahall at autodist.com Tue May 10 16:18:10 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 10 May 2016 12:18:10 -0400 Subject: php5-fpm gives unknown script error In-Reply-To: References: <20160510122144.GJ9435@daoine.org> Message-ID: Just a note to say that I worked it out. I had my root set one level too deep, so the paths were rendering just fine but not matching up with where the root had them starting. Once I adjusted it, that problem went away. I'm still getting an error from OSTicket, but no longer am I seeing errors in my Nginx log. Incidentally, if anyone has gotten OSTicket to work under Nginx and wouldn't mind emailing me off list, I'd love to pick your brain about what might be going wrong with the installation. On Tue, May 10, 2016 at 9:21 AM, Alex Hall wrote: > > > On Tue, May 10, 2016 at 8:21 AM, Francis Daly wrote: > >> On Mon, May 09, 2016 at 05:52:47PM -0400, Alex Hall wrote: >> >> Hi there, >> >> without knowing anything about OSTicket... >> >> > FastCGI sent an stderr: unknown script while reading response header >> from >> > upstream >> >> That usually means that the fastcgi server could not find the file that >> it was looking for, based on what nginx told it. >> >> So: what http request do you make?
>> >> What (php) file do you want the fastcgi server to use for this request? >> > > Looks like the request is for js/jquery.[number].js, > ROOT/PATHcss/flags.css, and other files. I'm not sure which php file it is, > since OSTicket is running its own installer. Well, the main file is > install.php according to the "referrer" of the request, but I don't know if > that's the whole story. > >> >> > The setup here is awkward (SSH >> > from Windows to Debian, with no SCP support currently and no way to copy >> > from or to the SSH session). I can copy files, but it's a bit of a >> process. >> >> As an aside: if you can copy/paste words between your ssh session and >> your windows command prompt, you can use "tar" and "base64", possibly >> also with "gzip", to copy files from one to the other. >> >> It can be annoying for large files, but anything that compresses down >> to some tens of kB is usually quite quick and easy. >> >> Cheers, >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Tue May 10 19:29:24 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 10 May 2016 15:29:24 -0400 Subject: timeout with UWSGI? Message-ID: Hi all, My Flask app is finally up and running, using UWSGI! This is great, but I'm hitting a problem if the app takes too long. It talks to an AS400, so can sometimes take a while to gather the requested information, parse it, and hand back JSON. Sometimes, by the time it does this, Nginx has given up on it, so I see an IOError reported in the log because of a broken pipe. 
I'm wondering how I can increase the timeout limit, to make Nginx wait longer before closing this connection? Thanks! -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue May 10 20:20:31 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 10 May 2016 21:20:31 +0100 Subject: Global denial for certain IPs or agents? In-Reply-To: References: Message-ID: <20160510202031.GK9435@daoine.org> On Tue, May 10, 2016 at 09:42:26AM -0400, Alex Hall wrote: Hi there, > I know how to do this, but how can I do it > globally? Currently I have to put the rules in each site's configuration > file, which is duplicating, which I'd like to avoid. http://nginx.org/r/allow says "Context: http, server, location, limit_except". So you can put your allow (and deny) directives at "http" level, and they will inherit into the appropriate location{} block (unless overridden elsewhere). (Or you could block access outside of nginx, by using a firewall or other network control device.) f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 10 20:30:40 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 10 May 2016 21:30:40 +0100 Subject: timeout with UWSGI? In-Reply-To: References: Message-ID: <20160510203040.GL9435@daoine.org> On Tue, May 10, 2016 at 03:29:24PM -0400, Alex Hall wrote: Hi there, > Sometimes, by the time it does this, Nginx has given up on > it, so I see an IOError reported in the log because of a broken pipe. I'm > wondering how I can increase the timeout limit, to make Nginx wait longer > before closing this connection? Thanks! What directive do you use to tell nginx to talk to the upstream service? Put that directive after "http://nginx.org/r/", and visit that url. That should bring you to the documentation for that directive. 
Go to the top of the page; there is a list of all of the other directives that that module provides. Probably you care about the "read timeout" directive: you can invite nginx to wait longer between successive successful read operations; or you can configure your upstream service to write something more frequently that it currently does. Good luck with it, f -- Francis Daly francis at daoine.org From sca at andreasschulze.de Tue May 10 20:33:32 2016 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 10 May 2016 22:33:32 +0200 Subject: Global denial for certain IPs or agents? In-Reply-To: <20160510202031.GK9435@daoine.org> References: <20160510202031.GK9435@daoine.org> Message-ID: <20160510223332.Horde.UE_YiZtoFsP_rTvluT1Sytr@andreasschulze.de> you could also include one file at all relevant places. nginx.conf: server { # settings for server1 include /path/to/include.file; } server { # settings for server2 include /path/to/include.file; } /path/to/include.file: allow from ip1; allow from cidr2; deny all; Andreas From francis at daoine.org Tue May 10 20:47:05 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 10 May 2016 21:47:05 +0100 Subject: (52) Empty reply from server In-Reply-To: References: <1927484.Lg9YNvfiLY@vbart-workstation> <3210444.EYQ3JPFaF7@vbart-workstation> Message-ID: <20160510204705.GM9435@daoine.org> On Mon, May 09, 2016 at 02:34:13PM +0530, Pankaj Chaudhary wrote: Hi there, > Its means nginx do not have any API as other server having to set header > and get header ? very strange ... You are correct. You've already shown code where you read headers sent by the client, and write headers sent to the client. It is very strange. > I need to write my own module to read and write cookies values? > My module is in written in C programming language. If you want to write a module in C, then your code will be in C. You can read the headers_in data structure, and write the headers_out data structure, as the examples show. 
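To make that concrete, here is a fragment in the style of the headers-management wiki example mentioned in this thread. It is not a complete module: it assumes the usual nginx module boilerplate and a handler that receives an `ngx_http_request_t *r`, and the header name/value used are illustrative only.

```c
/* Sketch only -- not compilable standalone. Assumes nginx module
 * boilerplate and ngx_http_request_t *r from a handler. */

/* Walk every request header: headers_in.headers is an ngx_list_t,
 * which is a linked list of parts, each part an array of
 * ngx_table_elt_t. */
ngx_list_part_t *part = &r->headers_in.headers.part;
ngx_table_elt_t *h = part->elts;
ngx_uint_t i;

for (i = 0; /* void */; i++) {
    if (i >= part->nelts) {
        if (part->next == NULL) {
            break;              /* no more parts: done */
        }
        part = part->next;      /* move to the next part */
        h = part->elts;
        i = 0;
    }
    /* h[i].key and h[i].value hold one request header here,
     * e.g. key "Cookie" and its raw value. */
}

/* Add a response header by pushing onto headers_out.headers. */
ngx_table_elt_t *set_cookie = ngx_list_push(&r->headers_out.headers);
if (set_cookie == NULL) {
    return NGX_ERROR;
}
set_cookie->hash = 1;           /* non-zero marks the entry as used */
ngx_str_set(&set_cookie->key, "Set-Cookie");
ngx_str_set(&set_cookie->value, "name=value; Path=/");
```

The same iteration pattern works for any ngx_list_t in the request, which is why the wiki example reads headers this way rather than indexing an array.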
Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 10 21:09:05 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 10 May 2016 22:09:05 +0100 Subject: Replace apache with nginx In-Reply-To: References: Message-ID: <20160510210905.GN9435@daoine.org> On Thu, May 05, 2016 at 11:25:18AM +0300, Christian Ivanov wrote: Hi there, > Hello there. I want to migrate my website only on nginx. I setup > everything, but have issue with my htaccess. I'm not entirely sure of what specifically you are looking for here. Perhaps the secure link module (http://nginx.org/en/docs/http/ngx_http_secure_link_module.html) or other similar things can help? > Here is the file: > > [root at server]# cat .htaccess > > RewriteEngine On > RewriteRule ^audio-(.*)\.html$ %{ENV:BASE}/audio_redirect_mask.php [L] > RewriteRule ^s/(.*)$ %{ENV:BASE}/audio_redirect_source.php?u=$1 [L] > > [root at server]# > > Somebody to have good idea, how can I replace this htaccess and execute > rewrite, without mod_php with apache or php-fpm? Can you describe the intended behaviour here? As in, if I request /audio-file.html, what response should I get? Or alternatively: what request should I make, in order to be sent the content of the file /usr/local/nginx/html/audio-file.html? f -- Francis Daly francis at daoine.org From ahall at autodist.com Tue May 10 21:09:21 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 10 May 2016 17:09:21 -0400 Subject: timeout with UWSGI? In-Reply-To: <20160510203040.GL9435@daoine.org> References: <20160510203040.GL9435@daoine.org> Message-ID: We may be finding the problem... I'm not sure what you mean by 'upstream service'. I know I have an upstream context for PHP, which defines the socket file to use, but I don't have that for UWSGI. The sample configurations I found never said to include one, and I don't have a deep enough grasp of how all this works to know what to put in one for UWSGI. 
I'll certainly start looking this up, though. On Tue, May 10, 2016 at 4:30 PM, Francis Daly wrote: > On Tue, May 10, 2016 at 03:29:24PM -0400, Alex Hall wrote: > > Hi there, > > > Sometimes, by the time it does this, Nginx has given up on > > it, so I see an IOError reported in the log because of a broken pipe. I'm > > wondering how I can increase the timeout limit, to make Nginx wait longer > > before closing this connection? Thanks! > > What directive do you use to tell nginx to talk to the upstream service? > > Put that directive after "http://nginx.org/r/", and visit that url. That > should bring you to the documentation for that directive. > > Go to the top of the page; there is a list of all of the other directives > that that module provides. > > Probably you care about the "read timeout" directive: you can invite nginx > to wait longer between successive successful read operations; or you can > configure your upstream service to write something more frequently that > it currently does. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 11 05:39:14 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Wed, 11 May 2016 01:39:14 -0400 Subject: change client_body_buffer_size from 16K to 256K made the nginx logs size from 50M to 1G.. Message-ID: Hi Team, I always use the configuration below to record the POST data of my webserver (for security reasons) http { ... 
log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; log_format plog '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for" "$request_body"'; server { .... location ~ \.php$ { try_files $uri =404; if ($request_method = POST){ return 484; break; } error_page 484 = @post; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; fastcgi_pass backend; } location @post{ internal; access_log /web/log/post.log plog; try_files $uri =404; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; fastcgi_pass backend; } } } And today I found that the size of post.log was almost 1G every day; in the past it was only 50M. The only thing I changed recently is the buffer sizes. From: client_header_buffer_size 64k; large_client_header_buffers 4 32k; client_body_buffer_size 16k; client_max_body_size 50m; fastcgi_buffer_size 64k; fastcgi_buffers 4 64k; fastcgi_busy_buffers_size 128k; fastcgi_temp_file_write_size 128k; To: client_header_buffer_size 4k; large_client_header_buffers 4 32k; client_body_buffer_size 256k; client_max_body_size 8m; fastcgi_buffer_size 128k; fastcgi_buffers 256 16k; fastcgi_busy_buffers_size 256k; fastcgi_temp_file_write_size 256k; Is it because I changed client_body_buffer_size from 16K to 256K that the log file grew? Now that client_body_buffer_size is big enough, when users upload a file, nginx will put it into the buffer instead of a temp file, and then also write it into post.log? Am I right? If I'm right, how can I exclude file uploads from the post log? The $request_uri for the upload is mod=swfupload. Can anyone help? 
Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266793,266793#msg-266793 From pankajitbhu at gmail.com Wed May 11 06:49:14 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Wed, 11 May 2016 12:19:14 +0530 Subject: (52) Empty reply from server In-Reply-To: <20160510204705.GM9435@daoine.org> References: <1927484.Lg9YNvfiLY@vbart-workstation> <3210444.EYQ3JPFaF7@vbart-workstation> <20160510204705.GM9435@daoine.org> Message-ID: >You can read the headers_in data structure, and write the headers_out >data structure, as the examples show. I was able to write in headers_out data structure as i have shown my code already but i was not able to read from headers_in structure that is problem i am facing. I am looking solution for this only. I have followed the given example mentioned in below link. https://www.nginx.com/resources/wiki/start/topics/examples/headers_management/ i am not able to understand why this is happening. but if i can see headers_in structure have the value which i am trying to read. On Wed, May 11, 2016 at 2:17 AM, Francis Daly wrote: > On Mon, May 09, 2016 at 02:34:13PM +0530, Pankaj Chaudhary wrote: > > Hi there, > > > Its means nginx do not have any API as other server having to set header > > and get header ? very strange ... > > You are correct. > > You've already shown code where you read headers sent by the client, > and write headers sent to the client. > > It is very strange. > > > I need to write my own module to read and write cookies values? > > My module is in written in C programming language. > > If you want to write a module in C, then your code will be in C. > > You can read the headers_in data structure, and write the headers_out > data structure, as the examples show. 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Wed May 11 07:24:15 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 11 May 2016 08:24:15 +0100 Subject: timeout with UWSGI? In-Reply-To: References: <20160510203040.GL9435@daoine.org> Message-ID: <20160511072415.GO9435@daoine.org> On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: Hi there, > We may be finding the problem... I'm not sure what you mean by 'upstream > service'. Somehow, you tell nginx to talk to "the next server". It can be with "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". "the next server" is the upstream, in this context. And the directive is whichever *_pass you use here. Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 11 07:36:21 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 11 May 2016 08:36:21 +0100 Subject: change client_body_buffer_size from 16K to 256K made the nginx logs size from 50M to 1G.. In-Reply-To: References: Message-ID: <20160511073621.GP9435@daoine.org> On Wed, May 11, 2016 at 01:39:14AM -0400, meteor8488 wrote: > log_format plog '$remote_addr - $remote_user [$time_local] "$request" ' > '$status $body_bytes_sent "$http_referer" ' > '"$http_user_agent" "$http_x_forwarded_for" "$request_body"'; What do you think that the last element of that log_format definition does? http://nginx.org/r/$request_body > Is that because I changed client_body_buffer_size from 16K to 256K caused > the size change of the log file? > For now the client_body_buffer_size is big enough, when users upload a file, > then nginx will put it into the buffer instead of a temp file, and then also > write this file into post.log? Am I right? Yes. 
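As an aside on excluding specific requests from a log: since nginx 1.7.0 the access_log directive accepts an "if=" parameter. A sketch that skips body logging for uploads, assuming (per this thread) that uploads are identified by the mod=swfupload argument; the variable name $log_body is arbitrary:

```nginx
# map blocks live at http level; $arg_mod is the "mod" query argument.
map $arg_mod $log_body {
    default   1;
    swfupload 0;    # don't log request bodies for file uploads
}

server {
    location @post {
        internal;
        # logged only when $log_body is neither "0" nor empty
        access_log /web/log/post.log plog if=$log_body;
        # ... fastcgi settings as before ...
    }
}
```

This keeps the upload requests themselves in the main access log while keeping their (large) bodies out of post.log.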
> If I'm right, then how can I exclude file upload from the post log? The > $request_uri for the upload is mod=swfupload. If you don't want the request body logged, don't log the request body. If you don't want the request body logged for one $request_uri only, you can finish handling that in a specific location{} and use a different access_log there. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed May 11 11:07:11 2016 From: nginx-forum at forum.nginx.org (kostbad) Date: Wed, 11 May 2016 07:07:11 -0400 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: References: Message-ID: <5c55289e912cc9ed63f27724a432ffcb.NginxMailingListEnglish@forum.nginx.org> I updated nginx but the problem persists. Could it be some sort of misconfiguration of my nginx? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266590,266800#msg-266800 From nginx-forum at forum.nginx.org Wed May 11 11:24:09 2016 From: nginx-forum at forum.nginx.org (Sportmodus) Date: Wed, 11 May 2016 07:24:09 -0400 Subject: NGINX: Set/use requests with fully qualified domain name (= FQDN). Message-ID: <25528a8fcd7a156c10cc924119a25c0c.NginxMailingListEnglish@forum.nginx.org> Hi @ all. NGINX proxy_pass works great with our internal servers. Thanks for the good work. Now I want to proxy_pass an external server and I have to use our proxy for this to get an external connection. I checked with tcpdump and Wireshark. NGINX sends "GET /uri HTTP/1.1 HOST: external.com" to my upstream (= coporate proxy). Can I modify this to "GET http://external.com:80/uri HTTP/1.1 HOST: external.com" ? Problem is: Our proxy denies requests (error 400 bad request) without FQDN (= fully qualified domain name). Any help or workaround is welcome. 
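One workaround sometimes suggested for sending an absolute-form request line to a forward proxy (untested here, and relying on rewrite behaviour rather than a supported feature) is to build the absolute URI in two rewrite steps, because a single rewrite to a target beginning with "http://" would make nginx return a redirect instead:

```nginx
# Unverified sketch: make nginx send
#   GET http://external.com/uri HTTP/1.1
# to a corporate forward proxy. Host and port are placeholders.
location / {
    proxy_set_header Host external.com;
    rewrite ^(.*)$ "://external.com$1" break;   # not a redirect: no scheme yet
    rewrite ^(.*)$ "http$1" break;              # now prepend the scheme
    proxy_pass http://corporate-proxy.internal:3128;
}
```

Because proxy_pass has no URI part here, the rewritten URI is used as the request line sent upstream, which is what a proxy expecting FQDN requests wants to see.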
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266802,266802#msg-266802 From nginx-forum at forum.nginx.org Wed May 11 11:26:08 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Wed, 11 May 2016 07:26:08 -0400 Subject: change client_body_buffer_size from 16K to 256K made the nginx logs size from 50M to 1G.. In-Reply-To: <20160511073621.GP9435@daoine.org> References: <20160511073621.GP9435@daoine.org> Message-ID: <5e0b3e1945f4840d8833b329066fbc38.NginxMailingListEnglish@forum.nginx.org> Thanks for your quickly response. One more question, for client_body_buffer_size 16K, if the $request_body >16K, it seems nginx will put the request body into a temp file, and then no logs in log file, even though I enabled the request log. Does that mean the best way to keep the post log is to enable client_body_in_file_only? But the thing is enable client_body_in_file_only will slow down nginx. So is there any better way to achieve that? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266793,266803#msg-266803 From luky-37 at hotmail.com Wed May 11 11:50:16 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Wed, 11 May 2016 13:50:16 +0200 Subject: ssl test causes nginx to crash (SSL_do_handshake() failed) In-Reply-To: <5c55289e912cc9ed63f27724a432ffcb.NginxMailingListEnglish@forum.nginx.org> References: , <5c55289e912cc9ed63f27724a432ffcb.NginxMailingListEnglish@forum.nginx.org> Message-ID: > I updated nginx but the problem persists. > > Could it be some sort of misconfiguration of my nginx? No, but I suggest you try reconfiguring your cipher suites?anyway to exclude anything kerberos related like previously suggested. Lukas From nginx-forum at forum.nginx.org Wed May 11 13:19:58 2016 From: nginx-forum at forum.nginx.org (meteor8488) Date: Wed, 11 May 2016 09:19:58 -0400 Subject: change client_body_buffer_size from 16K to 256K made the nginx logs size from 50M to 1G.. 
In-Reply-To: <20160511073621.GP9435@daoine.org> References: <20160511073621.GP9435@daoine.org> Message-ID: <7c94d627312f97eb69ae6de9d45121ba.NginxMailingListEnglish@forum.nginx.org> Hi all, I just updated my configuration files as following location ~ \.php$ { try_files $uri =404; if ($arg_mod = "upload" ) { return 485; break; } if ($request_method = POST){ return 484; break; } error_page 484 = @post; error_page 485 = @flash; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; fastcgi_pass backend; } location @post{ internal; access_log /web/log/post.log plog; try_files $uri =404; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; fastcgi_pass backend; } location @flash{ internal; access_log /web/log/flash.log main; try_files $uri =404; fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; include fastcgi_params; fastcgi_pass backend; } I'm using if to check whether user want to upload a file or not. But I know that if is evil, so how can achieve the same result without using if? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266793,266809#msg-266809 From vbart at nginx.com Wed May 11 13:49:50 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 11 May 2016 16:49:50 +0300 Subject: change client_body_buffer_size from 16K to 256K made the nginx logs size from 50M to 1G.. 
In-Reply-To: <7c94d627312f97eb69ae6de9d45121ba.NginxMailingListEnglish@forum.nginx.org> References: <20160511073621.GP9435@daoine.org> <7c94d627312f97eb69ae6de9d45121ba.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1548159.elkKVahKWp@vbart-workstation> On Wednesday 11 May 2016 09:19:58 meteor8488 wrote: > Hi all, I just updated my configuration files as following > > location ~ \.php$ { > try_files $uri =404; > if ($arg_mod = "upload" ) { > return 485; > break; > } > if ($request_method = POST){ > return 484; > break; > } > error_page 484 = @post; > error_page 485 = @flash; > fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > include fastcgi_params; > fastcgi_pass backend; > } > location @post{ > internal; > access_log /web/log/post.log plog; > try_files $uri =404; > fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > include fastcgi_params; > fastcgi_pass backend; > } > location @flash{ > internal; > access_log /web/log/flash.log main; > try_files $uri =404; > fastcgi_param SCRIPT_FILENAME /scripts$fastcgi_script_name; > include fastcgi_params; > fastcgi_pass backend; > } > > > I'm using if to check whether user want to upload a file or not. > > But I know that if is evil, so how can achieve the same result without using > if? > [..] Please, check the "if=" parameter of the "access_log" directive. See the docs: http://nginx.org/en/docs/http/ngx_http_log_module.html#access_log wbr, Valentin V. Bartenev From al-nginx at none.at Wed May 11 13:54:50 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 11 May 2016 15:54:50 +0200 Subject: timeout with UWSGI? In-Reply-To: <20160511072415.GO9435@daoine.org> References: <20160510203040.GL9435@daoine.org> <20160511072415.GO9435@daoine.org> Message-ID: Hi Alex, Am 11-05-2016 09:24, schrieb Francis Daly: > On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: > > Hi there, > >> We may be finding the problem... I'm not sure what you mean by >> 'upstream >> service'. 
> > Somehow, you tell nginx to talk to "the next server". It can be with > "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". > > "the next server" is the upstream, in this context. > > And the directive is whichever *_pass you use here. What Francis could mean could be this. http://nginx.org/en/docs/http/ngx_http_upstream_module.html Some basic concept. http://nginx.org/en/docs/http/load_balancing.html Maybe you should increase some timeouts in uwsgi http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html -> search for timeout. There are some info's in the net ;-) https://startpage.com/do/search?cmd=process_search&query=nginx+long+running+backend+requests&cat=web&with_language=&with_region=&pl=&ff=&rl=&hmb=1&with_date=y&abp=-1 Can you please add the output of nginx -V to your answer and the relevant config. Hth & Best regards aleks From ahall at autodist.com Wed May 11 14:01:07 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 10:01:07 -0400 Subject: timeout with UWSGI? In-Reply-To: References: <20160510203040.GL9435@daoine.org> <20160511072415.GO9435@daoine.org> Message-ID: Yes, I'm using uwsgi_pass: uwsgi_pass 127.0.0.1:9876; On Wed, May 11, 2016 at 9:54 AM, Aleksandar Lazic wrote: > Hi Alex, > > Am 11-05-2016 09:24, schrieb Francis Daly: > >> On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: >> >> Hi there, >> >> We may be finding the problem... I'm not sure what you mean by 'upstream >>> service'. >>> >> >> Somehow, you tell nginx to talk to "the next server". It can be with >> "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". >> >> "the next server" is the upstream, in this context. >> >> And the directive is whichever *_pass you use here. >> > > What Francis could mean could be this. > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html > > Some basic concept. 
> > http://nginx.org/en/docs/http/load_balancing.html > > Maybe you should increase some timeouts in uwsgi > > http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html > -> search for timeout. > > There are some info's in the net ;-) > > > https://startpage.com/do/search?cmd=process_search&query=nginx+long+running+backend+requests&cat=web&with_language=&with_region=&pl=&ff=&rl=&hmb=1&with_date=y&abp=-1 > > > Can you please add the output of > > nginx -V > > to your answer and the relevant config. > > Hth & Best regards > aleks > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From al-nginx at none.at Wed May 11 14:23:07 2016 From: al-nginx at none.at (Aleksandar Lazic) Date: Wed, 11 May 2016 16:23:07 +0200 Subject: timeout with UWSGI? In-Reply-To: References: <20160510203040.GL9435@daoine.org> <20160511072415.GO9435@daoine.org> Message-ID: <2773429f954de2ee4aa5aa3cb6e6d46a@none.at> Hi Alex. Am 11-05-2016 16:01, schrieb Alex Hall: > Yes, I'm using uwsgi_pass: > > uwsgi_pass 127.0.0.1:9876 [1]; Hm. I think you have send the mail to early ;-) Please can you read again my mail and add some more lines to your answer, thank you. Best regards aleks > On Wed, May 11, 2016 at 9:54 AM, Aleksandar Lazic > wrote: > >> Hi Alex, >> >> Am 11-05-2016 09:24, schrieb Francis Daly: >> On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: >> >> Hi there, >> >> We may be finding the problem... I'm not sure what you mean by >> 'upstream >> service'. >> >> Somehow, you tell nginx to talk to "the next server". It can be with >> "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". >> >> "the next server" is the upstream, in this context. >> >> And the directive is whichever *_pass you use here. 
> > What Francis could mean could be this. > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html > > Some basic concept. > > http://nginx.org/en/docs/http/load_balancing.html > > Maybe you should increase some timeouts in uwsgi > > http://nginx.org/en/docs/http/ngx_http_uwsgi_module.html > -> search for timeout. > > There are some info's in the net ;-) > > https://startpage.com/do/search?cmd=process_search&query=nginx+long+running+backend+requests&cat=web&with_language=&with_region=&pl=&ff=&rl=&hmb=1&with_date=y&abp=-1 > > Can you please add the output of > > nginx -V > > to your answer and the relevant config. > > Hth & Best regards > aleks From ahall at autodist.com Wed May 11 14:34:53 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 10:34:53 -0400 Subject: timeout with UWSGI? In-Reply-To: <20160511072415.GO9435@daoine.org> References: <20160510203040.GL9435@daoine.org> <20160511072415.GO9435@daoine.org> Message-ID: Ineeed. Sorry about that. The problem is that I can't capture the output of nginx -V, as--for some reason--the > symbol is only producing an empty file. I can't copy and paste directly from or to the prompt either. If you tell me what you're looking for in nginx -V, I can tell you that value, but there's a lot of information there. My site configuration file is: server { include restrictions/localOnly; listen 80; server_tokens off; server_name myapp.mysite.com; location / { include uwsgi_params; uwsgi_pass 127.0.0.1:9876; } location /static { alias /var/www/myapp/app/static; } } On Wed, May 11, 2016 at 3:24 AM, Francis Daly wrote: > On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: > > Hi there, > > > We may be finding the problem... I'm not sure what you mean by 'upstream > > service'. > > Somehow, you tell nginx to talk to "the next server". It can be with > "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". > > "the next server" is the upstream, in this context. 
> > And the directive is whichever *_pass you use here. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahutchings at nginx.com Wed May 11 14:44:23 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Wed, 11 May 2016 15:44:23 +0100 Subject: timeout with UWSGI? In-Reply-To: References: <20160510203040.GL9435@daoine.org> <20160511072415.GO9435@daoine.org> Message-ID: <89778603-8ee1-91c7-9919-51f7e6f8c232@nginx.com> Hi Alex, nginx -V uses stderr, so try: nginx -V 2>/tmp/out.txt This will redirect stderr to the file instead of stdout. Kind Regards Andrew On 11/05/16 15:34, Alex Hall wrote: > Ineeed. Sorry about that. The problem is that I can't capture the output > of nginx -V, as--for some reason--the > symbol is only producing an > empty file. I can't copy and paste directly from or to the prompt > either. If you tell me what you're looking for in nginx -V, I can tell > you that value, but there's a lot of information there. > > My site configuration file is: > > server { > include restrictions/localOnly; > listen 80; > server_tokens off; > server_name myapp.mysite.com ; > location / { > include uwsgi_params; > uwsgi_pass 127.0.0.1:9876 ; > } > > location /static { > alias /var/www/myapp/app/static; > } > > > } > > > > On Wed, May 11, 2016 at 3:24 AM, Francis Daly > wrote: > > On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: > > Hi there, > > > We may be finding the problem... I'm not sure what you mean by 'upstream > > service'. > > Somehow, you tell nginx to talk to "the next server". It can be with > "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". > > "the next server" is the upstream, in this context. 
> > And the directive is whichever *_pass you use here. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From ahall at autodist.com Wed May 11 14:47:43 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 10:47:43 -0400 Subject: timeout with UWSGI? In-Reply-To: <89778603-8ee1-91c7-9919-51f7e6f8c232@nginx.com> References: <20160510203040.GL9435@daoine.org> <20160511072415.GO9435@daoine.org> <89778603-8ee1-91c7-9919-51f7e6f8c232@nginx.com> Message-ID: Ah, that explains it. Thanks. nginx -V nginx version: nginx/1.6.2 TLS SNI support enabled configure arguments: --with-cc-opt='-g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2' --with-ld-opt=-Wl,-z,relro --prefix=/usr/share/nginx --conf-path=/etc/nginx/nginx.conf --http-log-path=/var/log/nginx/access.log --error-log-path=/var/log/nginx/error.log --lock-path=/var/lock/nginx.lock --pid-path=/run/nginx.pid --http-client-body-temp-path=/var/lib/nginx/body --http-fastcgi-temp-path=/var/lib/nginx/fastcgi --http-proxy-temp-path=/var/lib/nginx/proxy --http-scgi-temp-path=/var/lib/nginx/scgi --http-uwsgi-temp-path=/var/lib/nginx/uwsgi --with-debug --with-pcre-jit --with-ipv6 --with-http_ssl_module --with-http_stub_status_module --with-http_realip_module --with-http_auth_request_module --with-http_addition_module --with-http_dav_module --with-http_geoip_module --with-http_gzip_static_module --with-http_image_filter_module --with-http_spdy_module --with-http_sub_module --with-http_xslt_module --with-mail --with-mail_ssl_module 
--add-module=/build/nginx-T5fW9e/nginx-1.6.2/debian/modules/nginx-auth-pam --add-module=/build/nginx-T5fW9e/nginx-1.6.2/debian/modules/nginx-dav-ext-module --add-module=/build/nginx-T5fW9e/nginx-1.6.2/debian/modules/nginx-echo --add-module=/build/nginx-T5fW9e/nginx-1.6.2/debian/modules/nginx-upstream-fair --add-module=/build/nginx-T5fW9e/nginx-1.6.2/debian/modules/ngx_http_substitutions_filter_module On Wed, May 11, 2016 at 10:44 AM, Andrew Hutchings wrote: > Hi Alex, > > nginx -V uses stderr, so try: > > nginx -V 2>/tmp/out.txt > > This will redirect stderr to the file instead of stdout. > > Kind Regards > Andrew > > On 11/05/16 15:34, Alex Hall wrote: > >> Ineeed. Sorry about that. The problem is that I can't capture the output >> of nginx -V, as--for some reason--the > symbol is only producing an >> empty file. I can't copy and paste directly from or to the prompt >> either. If you tell me what you're looking for in nginx -V, I can tell >> you that value, but there's a lot of information there. >> >> My site configuration file is: >> >> server { >> include restrictions/localOnly; >> listen 80; >> server_tokens off; >> server_name myapp.mysite.com ; >> location / { >> include uwsgi_params; >> uwsgi_pass 127.0.0.1:9876 ; >> } >> >> location /static { >> alias /var/www/myapp/app/static; >> } >> >> >> } >> >> >> >> On Wed, May 11, 2016 at 3:24 AM, Francis Daly > > wrote: >> >> On Tue, May 10, 2016 at 05:09:21PM -0400, Alex Hall wrote: >> >> Hi there, >> >> > We may be finding the problem... I'm not sure what you mean by >> 'upstream >> > service'. >> >> Somehow, you tell nginx to talk to "the next server". It can be with >> "proxy_pass" or "fastcgi_pass" or, most likely here, "uwsgi_pass". >> >> "the next server" is the upstream, in this context. >> >> And the directive is whichever *_pass you use here. 
>> >> Cheers, >> >> f >> -- >> Francis Daly francis at daoine.org >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> >> -- >> Alex Hall >> Automatic Distributors, IT department >> ahall at autodist.com >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> > -- > Andrew Hutchings (LinuxJedi) > Technical Product Manager, NGINX Inc. > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Wed May 11 20:03:07 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 16:03:07 -0400 Subject: Anyone running OSTicket with Nginx? Message-ID: Hi all, I'm using Nginx (obviously), but I want to try OSTicket. The only supported servers for it are, for whatever reason, Apache and IIS. I hate IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx seems much simpler than Apache to me). Does anyone have OSTicket working under Nginx by any chance? I've followed this recipe: https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ but I can't get it to work. It hits a wall during installation, saying that it can't create configuration settings (#7). If anyone has this up and running successfully, I'd love to know how you did it. Hopefully the OSTicket team will eventually support Nginx natively, but I'm not holding my breath. -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From medvedev.yp at gmail.com Wed May 11 20:09:55 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Wed, 11 May 2016 23:09:55 +0300 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: Try my config for Osticket 1.7, nginx+php-fpm I create that's config just for testing server { listen 80; server_name test.com; access_log /var/log/nginx/tickets.access.log; error_log /var/log/nginx/tickets.error.log info; index index.php; root /var/www/ticket; client_max_body_size 5M; keepalive_timeout 0; fastcgi_read_timeout 120; fastcgi_send_timeout 60; index index.php index.html; autoindex off; gzip on; gzip_types text/plain text/css application/x-javascript text/javascript application/javascript application/json application/xml text/x-component application/rss+xml text/xml; sendfile on; set $path_info ""; location ~ /include { deny all; return 403; } if ($request_uri ~ "^/api(/[^\?]+)") { set $path_info $1; } location ~ ^/api/(?:tickets|tasks).*$ { try_files $uri $uri/ /api/http.php?$query_string; } if ($request_uri ~ "^/scp/.*\.php(/[^\?]+)") { set $path_info $1; } location ~ ^/scp/ajax.php/.*$ { try_files $uri $uri/ /scp/ajax.php?$query_string; } location / { try_files $uri $uri/ index.php; } location ~ \.php$ { fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; fastcgi_pass unix:/var/run/php-fpm.sock; fastcgi_index index.php; include fastcgi_params; fastcgi_param PATH_INFO $path_info; fastcgi_intercept_errors on; } } 2016-05-11 23:03 GMT+03:00 Alex Hall : > Hi all, > I'm using Nginx (obviously), but I want to try OSTicket. The only > supported servers for it are, for whatever reason, Apache and IIS. I hate > IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx > seems much simpler than Apache to me). Does anyone have OSTicket working > under Nginx by any chance? > > I've followed this recipe: > https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ > but I can't get it to work. 
It hits a wall during installation, saying > that it can't create configuration settings (#7). If anyone has this up and > running successfully, I'd love to know how you did it. Hopefully the > OSTicket team will eventually support Nginx natively, but I'm not holding > my breath. > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Wed May 11 20:26:17 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 16:26:17 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: Thanks for the quick response. Unfortunately, I'm not having any luck, unless I mistyped one of the rules. I also can't find where errors go. Anyone know where, or if, errors in fastcgi/php5-fpm are logged? /var/log/php5-fpm.log is empty. 
On Wed, May 11, 2016 at 4:09 PM, Yuriy Medvedev wrote: > Try my config for Osticket 1.7, nginx+php-fpm > I create that's config just for testing > server { > listen 80; > server_name test.com; > access_log /var/log/nginx/tickets.access.log; > error_log /var/log/nginx/tickets.error.log info; > index index.php; > root /var/www/ticket; > client_max_body_size 5M; > keepalive_timeout 0; > fastcgi_read_timeout 120; > fastcgi_send_timeout 60; > index index.php index.html; > autoindex off; > > gzip on; > gzip_types text/plain text/css application/x-javascript text/javascript > application/javascript application/json application/xml text/x-component > application/rss+xml text/xml; > sendfile on; > set $path_info ""; > > location ~ /include { > deny all; > return 403; > } > > if ($request_uri ~ "^/api(/[^\?]+)") { > set $path_info $1; > } > > location ~ ^/api/(?:tickets|tasks).*$ { > try_files $uri $uri/ /api/http.php?$query_string; > } > > if ($request_uri ~ "^/scp/.*\.php(/[^\?]+)") { > set $path_info $1; > } > > location ~ ^/scp/ajax.php/.*$ { > try_files $uri $uri/ /scp/ajax.php?$query_string; > } > > location / { > try_files $uri $uri/ index.php; > } > > > location ~ \.php$ { > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > fastcgi_pass unix:/var/run/php-fpm.sock; > fastcgi_index index.php; > include fastcgi_params; > fastcgi_param PATH_INFO $path_info; > fastcgi_intercept_errors on; > } > } > > 2016-05-11 23:03 GMT+03:00 Alex Hall : > >> Hi all, >> I'm using Nginx (obviously), but I want to try OSTicket. The only >> supported servers for it are, for whatever reason, Apache and IIS. I hate >> IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx >> seems much simpler than Apache to me). Does anyone have OSTicket working >> under Nginx by any chance? >> >> I've followed this recipe: >> https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ >> but I can't get it to work. 
It hits a wall during installation, saying >> that it can't create configuration settings (#7). If anyone has this up and >> running successfully, I'd love to know how you did it. Hopefully the >> OSTicket team will eventually support Nginx natively, but I'm not holding >> my breath. >> >> -- >> Alex Hall >> Automatic Distributors, IT department >> ahall at autodist.com >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Wed May 11 20:50:39 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 11 May 2016 16:50:39 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: Hello, > On May 11, 2016, at 4:26 PM, Alex Hall wrote: > > Thanks for the quick response. Unfortunately, I'm not having any luck, unless I mistyped one of the rules. I also can't find where errors go. Anyone know where, or if, errors in fastcgi/php5-fpm are logged? /var/log/php5-fpm.log is empty. 
> >> On Wed, May 11, 2016 at 4:09 PM, Yuriy Medvedev wrote: >> Try my config for Osticket 1.7, nginx+php-fpm >> I create that's config just for testing >> server { >> listen 80; >> server_name test.com; >> access_log /var/log/nginx/tickets.access.log; >> error_log /var/log/nginx/tickets.error.log info; >> index index.php; >> root /var/www/ticket; >> client_max_body_size 5M; >> keepalive_timeout 0; >> fastcgi_read_timeout 120; >> fastcgi_send_timeout 60; >> index index.php index.html; >> autoindex off; >> >> gzip on; >> gzip_types text/plain text/css application/x-javascript text/javascript application/javascript application/json application/xml text/x-component application/rss+xml text/xml; >> sendfile on; >> set $path_info ""; >> >> location ~ /include { >> deny all; >> return 403; >> } >> >> if ($request_uri ~ "^/api(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/api/(?:tickets|tasks).*$ { >> try_files $uri $uri/ /api/http.php?$query_string; >> } >> >> if ($request_uri ~ "^/scp/.*\.php(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/scp/ajax.php/.*$ { >> try_files $uri $uri/ /scp/ajax.php?$query_string; >> } >> >> location / { >> try_files $uri $uri/ index.php; >> } >> >> >> location ~ \.php$ { >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_pass unix:/var/run/php-fpm.sock; >> fastcgi_index index.php; >> include fastcgi_params; >> fastcgi_param PATH_INFO $path_info; >> fastcgi_intercept_errors on; >> } >> } >> >> 2016-05-11 23:03 GMT+03:00 Alex Hall : >>> Hi all, >>> I'm using Nginx (obviously), but I want to try OSTicket. The only supported servers for it are, for whatever reason, Apache and IIS. I hate IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx seems much simpler than Apache to me). Does anyone have OSTicket working under Nginx by any chance? >>> >>> I've followed this recipe: >>> https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ >>> but I can't get it to work. 
It hits a wall during installation, saying that it can't create configuration settings (#7). If anyone has this up and running successfully, I'd love to know how you did it. Hopefully the OSTicket team will eventually support Nginx natively, but I'm not holding my breath. Is it possible that your script is trying to write a configuration file and lacks proper permission in that directory? Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Wed May 11 20:54:58 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 16:54:58 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: On Wed, May 11, 2016 at 4:50 PM, Jim Ohlstein wrote: > Hello, > > > Is it possible that your script is trying to write a configuration file and lacks proper > permission in that directory? Yes, very possible, and I'd even say likely. The thing is, I can't find out what the directory is. I've given permission to the entire folder: chown www-data /var/www/osticket chmod -R 777 /var/www/osticket But that doesn't seem to help. I'm new to Linux, though, so I may have missed something. I can't imagine where else it would be trying to write to. > On May 11, 2016, at 4:26 PM, Alex Hall wrote: > > Thanks for the quick response. Unfortunately, I'm not having any luck, > unless I mistyped one of the rules. I also can't find where errors go. > Anyone know where, or if, errors in fastcgi/php5-fpm are logged? > /var/log/php5-fpm.log is empty. 
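One check worth running here: chown without -R only changes the top-level directory, so files and subdirectories under /var/www/osticket may still be unwritable for www-data even after the commands above. A minimal sketch, with a temp directory standing in for the real path so it runs anywhere; the include/ subdirectory is an assumption about where the OSTicket installer writes its config:

```shell
# Sketch: verify a directory tree is writable by the web/php-fpm user.
# A temp dir stands in for /var/www/osticket; "include" is an assumed
# subdirectory where the installer is believed to create its config file.
dir=$(mktemp -d)
mkdir "$dir/include"
# On the real box, run the check as the php-fpm user instead:
#   sudo -u www-data test -w /var/www/osticket/include && echo writable
if [ -w "$dir/include" ]; then echo "writable"; else echo "not writable"; fi   # → writable
rm -r "$dir"
```

On the real server, the commented sudo line is the actual test to run, since what matters is whether the php-fpm worker user (not your login user) can write there.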
> > On Wed, May 11, 2016 at 4:09 PM, Yuriy Medvedev > wrote: > >> Try my config for Osticket 1.7, nginx+php-fpm >> I create that's config just for testing >> server { >> listen 80; >> server_name test.com; >> access_log /var/log/nginx/tickets.access.log; >> error_log /var/log/nginx/tickets.error.log info; >> index index.php; >> root /var/www/ticket; >> client_max_body_size 5M; >> keepalive_timeout 0; >> fastcgi_read_timeout 120; >> fastcgi_send_timeout 60; >> index index.php index.html; >> autoindex off; >> >> gzip on; >> gzip_types text/plain text/css application/x-javascript text/javascript >> application/javascript application/json application/xml text/x-component >> application/rss+xml text/xml; >> sendfile on; >> set $path_info ""; >> >> location ~ /include { >> deny all; >> return 403; >> } >> >> if ($request_uri ~ "^/api(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/api/(?:tickets|tasks).*$ { >> try_files $uri $uri/ /api/http.php?$query_string; >> } >> >> if ($request_uri ~ "^/scp/.*\.php(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/scp/ajax.php/.*$ { >> try_files $uri $uri/ /scp/ajax.php?$query_string; >> } >> >> location / { >> try_files $uri $uri/ index.php; >> } >> >> >> location ~ \.php$ { >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_pass unix:/var/run/php-fpm.sock; >> fastcgi_index index.php; >> include fastcgi_params; >> fastcgi_param PATH_INFO $path_info; >> fastcgi_intercept_errors on; >> } >> } >> >> 2016-05-11 23:03 GMT+03:00 Alex Hall : >> >>> Hi all, >>> I'm using Nginx (obviously), but I want to try OSTicket. The only >>> supported servers for it are, for whatever reason, Apache and IIS. I hate >>> IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx >>> seems much simpler than Apache to me). Does anyone have OSTicket working >>> under Nginx by any chance? 
>>> >>> I've followed this recipe: >>> https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ >>> but I can't get it to work. It hits a wall during installation, saying >>> that it can't create configuration settings (#7). If anyone has this up and >>> running successfully, I'd love to know how you did it. Hopefully the >>> OSTicket team will eventually support Nginx natively, but I'm not holding >>> my breath. >>> >> > Jim > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Wed May 11 20:55:54 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Wed, 11 May 2016 23:55:54 +0300 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: You can configure it in the pool configuration: php_admin_value[error_log] = /var/log/tes2.ex.com-fpm-php-error.log 2016-05-11 23:26 GMT+03:00 Alex Hall : > Thanks for the quick response. Unfortunately, I'm not having any luck, > unless I mistyped one of the rules. I also can't find where errors go. > Anyone know where, or if, errors in fastcgi/php5-fpm are logged? > /var/log/php5-fpm.log is empty.
> > On Wed, May 11, 2016 at 4:09 PM, Yuriy Medvedev > wrote: > >> Try my config for Osticket 1.7, nginx+php-fpm >> I create that's config just for testing >> server { >> listen 80; >> server_name test.com; >> access_log /var/log/nginx/tickets.access.log; >> error_log /var/log/nginx/tickets.error.log info; >> index index.php; >> root /var/www/ticket; >> client_max_body_size 5M; >> keepalive_timeout 0; >> fastcgi_read_timeout 120; >> fastcgi_send_timeout 60; >> index index.php index.html; >> autoindex off; >> >> gzip on; >> gzip_types text/plain text/css application/x-javascript text/javascript >> application/javascript application/json application/xml text/x-component >> application/rss+xml text/xml; >> sendfile on; >> set $path_info ""; >> >> location ~ /include { >> deny all; >> return 403; >> } >> >> if ($request_uri ~ "^/api(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/api/(?:tickets|tasks).*$ { >> try_files $uri $uri/ /api/http.php?$query_string; >> } >> >> if ($request_uri ~ "^/scp/.*\.php(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/scp/ajax.php/.*$ { >> try_files $uri $uri/ /scp/ajax.php?$query_string; >> } >> >> location / { >> try_files $uri $uri/ index.php; >> } >> >> >> location ~ \.php$ { >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_pass unix:/var/run/php-fpm.sock; >> fastcgi_index index.php; >> include fastcgi_params; >> fastcgi_param PATH_INFO $path_info; >> fastcgi_intercept_errors on; >> } >> } >> >> 2016-05-11 23:03 GMT+03:00 Alex Hall : >> >>> Hi all, >>> I'm using Nginx (obviously), but I want to try OSTicket. The only >>> supported servers for it are, for whatever reason, Apache and IIS. I hate >>> IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx >>> seems much simpler than Apache to me). Does anyone have OSTicket working >>> under Nginx by any chance? 
>>> >>> I've followed this recipe: >>> https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ >>> but I can't get it to work. It hits a wall during installation, saying >>> that it can't create configuration settings (#7). If anyone has this up and >>> running successfully, I'd love to know how you did it. Hopefully the >>> OSTicket team will eventually support Nginx natively, but I'm not holding >>> my breath. >>> >>> -- >>> Alex Hall >>> Automatic Distributors, IT department >>> ahall at autodist.com >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Wed May 11 21:02:03 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Thu, 12 May 2016 00:02:03 +0300 Subject: Anyone running OSTicket with Nginx? 
In-Reply-To: References: Message-ID: example of php-fpm pool configuration user = www-data group = www-data pm = dynamic pm.max_children = 50 pm.start_servers = 20 pm.min_spare_servers = 5 pm.max_spare_servers = 35 pm.status_path = /fpm-status ping.path = /fpm-ping ping.response = pong chdir = /var/www/test.ex.com catch_workers_output = yes request_terminate_timeout = 180s php_admin_value[error_log] = /var/log/tes2.ex.com-fpm-php-error.log php_admin_value[max_execution_time] = 180 php_admin_flag[log_errors] = on php_admin_value[memory_limit] = 320m php_admin_value[error_reporting] = E_ALL php_admin_flag[display_errors] = on php_admin_flag[display_startup_errors] = on 2016-05-11 23:26 GMT+03:00 Alex Hall : > Thanks for the quick response. Unfortunately, I'm not having any luck, > unless I mistyped one of the rules. I also can't find where errors go. > Anyone know where, or if, errors in fastcgi/php5-fpm are logged? > /var/log/php5-fpm.log is empty. > > On Wed, May 11, 2016 at 4:09 PM, Yuriy Medvedev > wrote: > >> Try my config for Osticket 1.7, nginx+php-fpm >> I create that's config just for testing >> server { >> listen 80; >> server_name test.com; >> access_log /var/log/nginx/tickets.access.log; >> error_log /var/log/nginx/tickets.error.log info; >> index index.php; >> root /var/www/ticket; >> client_max_body_size 5M; >> keepalive_timeout 0; >> fastcgi_read_timeout 120; >> fastcgi_send_timeout 60; >> index index.php index.html; >> autoindex off; >> >> gzip on; >> gzip_types text/plain text/css application/x-javascript text/javascript >> application/javascript application/json application/xml text/x-component >> application/rss+xml text/xml; >> sendfile on; >> set $path_info ""; >> >> location ~ /include { >> deny all; >> return 403; >> } >> >> if ($request_uri ~ "^/api(/[^\?]+)") { >> set $path_info $1; >> } >> >> location ~ ^/api/(?:tickets|tasks).*$ { >> try_files $uri $uri/ /api/http.php?$query_string; >> } >> >> if ($request_uri ~ "^/scp/.*\.php(/[^\?]+)") { 
>> set $path_info $1; >> } >> >> location ~ ^/scp/ajax.php/.*$ { >> try_files $uri $uri/ /scp/ajax.php?$query_string; >> } >> >> location / { >> try_files $uri $uri/ index.php; >> } >> >> >> location ~ \.php$ { >> fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; >> fastcgi_pass unix:/var/run/php-fpm.sock; >> fastcgi_index index.php; >> include fastcgi_params; >> fastcgi_param PATH_INFO $path_info; >> fastcgi_intercept_errors on; >> } >> } >> >> 2016-05-11 23:03 GMT+03:00 Alex Hall : >> >>> Hi all, >>> I'm using Nginx (obviously), but I want to try OSTicket. The only >>> supported servers for it are, for whatever reason, Apache and IIS. I hate >>> IIS, and I don't know how I'd run Apache and Nginx together (plus Nginx >>> seems much simpler than Apache to me). Does anyone have OSTicket working >>> under Nginx by any chance? >>> >>> I've followed this recipe: >>> https://www.nginx.com/resources/wiki/start/topics/recipes/osticket/ >>> but I can't get it to work. It hits a wall during installation, saying >>> that it can't create configuration settings (#7). If anyone has this up and >>> running successfully, I'd love to know how you did it. Hopefully the >>> OSTicket team will eventually support Nginx natively, but I'm not holding >>> my breath. >>> >>> -- >>> Alex Hall >>> Automatic Distributors, IT department >>> ahall at autodist.com >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jim at ohlste.in Wed May 11 21:04:24 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 11 May 2016 17:04:24 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: Hello, > On May 11, 2016, at 4:54 PM, Alex Hall wrote: > > > >> On Wed, May 11, 2016 at 4:50 PM, Jim Ohlstein wrote: >> Hello, > > Is it possible that your script is trying to write a configuration file and lacks proper > permission in that directory? > > Yes, very possible, and I'd even say likely. The thing is, I can't find out what the directory is. I've given permission to the entire folder: > chown www-data /var/www/osticket > chmod -R 777 /var/www/osticket > > But that doesn't seem to help. I'm new to Linux, though, so I may have missed something. I can't imagine where else it would be trying to write to. Is that the php-fpm user? If so, it's probably not the problem. Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 11 21:12:06 2016 From: nginx-forum at forum.nginx.org (tom.b) Date: Wed, 11 May 2016 17:12:06 -0400 Subject: limit_req_zone Message-ID: Greetings fellow nginx users, Is the limit_req_zone module included in the core version? It doesn't show up when I list the modules via nginx -V, although when I add the directive to my nginx configuration it appears to work fine and limits requests. I am using 1.8.1-1.el6.ngx installed from the official nginx repositories http://nginx.org/packages/centos/6/$basearch/ and was surprised that the directive works fine without needing a custom build. Is the limit_req_zone directive officially part of nginx core functionality? Cheers, Tom. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266845,266845#msg-266845 From ahall at autodist.com Wed May 11 21:19:34 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 17:19:34 -0400 Subject: Anyone running OSTicket with Nginx?
In-Reply-To: References: Message-ID: On Wed, May 11, 2016 at 5:04 PM, Jim Ohlstein wrote: > Hello, > > On May 11, 2016, at 4:54 PM, Alex Hall wrote: > > > > On Wed, May 11, 2016 at 4:50 PM, Jim Ohlstein wrote: > >> Hello, >> >> > Is it possible that your script is trying to write a configuration file > and lacks proper > permission in that directory? > > Yes, very possible, and I'd even say likely. The thing is, I can't find > out what the directory is. I've given permission to the entire folder: > chown www-data /var/www/osticket > chmod -R 777 /var/www/osticket > > But that doesn't seem to help. I'm new to Linux, though, so I may have > missed something. I can't imagine where else it would be trying to write to. > > > Is that the php-fpm user? If so, it's probably not the problem. > I'm not sure. I don't find 'user' anywhere in php-fpm.conf, so I'm not sure where that gets set. www-data is the user for Nginx and UWSGI, though, not that UWSGI has anything to do here. > > Jim > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From rpaprocki at fearnothingproductions.net Wed May 11 21:27:24 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Wed, 11 May 2016 14:27:24 -0700 Subject: limit_req_zone In-Reply-To: References: Message-ID: This module is built by default, and does not need to be explicitly enabled. Thus you will not see it as part of the configure options. On Wed, May 11, 2016 at 2:12 PM, tom.b wrote: > Greetings fellow nginx users, > > Is the limit_req_zone module included in the core version ? 
I't doesn't > show > up when I list the modules via nginx -V although when I add the directive > into my nginx configuration it appears works fine and limit requests > > I am using 1.8.1-1.el6.ngx installed from official nginx repositories > http://nginx.org/packages/centos/6/$basearch/ and was surprised that the > directive works fine with out needing a custom build. > > Is the limit_req_zone directive officially part of nginx core functionality > ? > > Cheers, > > Tom. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,266845,266845#msg-266845 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Wed May 11 21:31:49 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 11 May 2016 17:31:49 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: Hello, > On May 11, 2016, at 5:19 PM, Alex Hall wrote: > > > >> On Wed, May 11, 2016 at 5:04 PM, Jim Ohlstein wrote: >> Hello, >> >> On May 11, 2016, at 4:54 PM, Alex Hall wrote: >> >>> >>> >>>> On Wed, May 11, 2016 at 4:50 PM, Jim Ohlstein wrote: >>>> Hello, >>> > Is it possible that your script is trying to write a configuration file and lacks proper > permission in that directory? >>> >>> Yes, very possible, and I'd even say likely. The thing is, I can't find out what the directory is. I've given permission to the entire folder: >>> chown www-data /var/www/osticket >>> chmod -R 777 /var/www/osticket >>> >>> But that doesn't seem to help. I'm new to Linux, though, so I may have missed something. I can't imagine where else it would be trying to write to. >> >> Is that the php-fpm user? If so, it's probably not the problem. > > I'm not sure. I don't find 'user' anywhere in php-fpm.conf, so I'm not sure where that gets set. 
www-data is the user for Nginx and UWSGI, though, not that UWSGI has anything to do here. What's the output of # ps aux | grep php Jim -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Wed May 11 21:34:52 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Thu, 12 May 2016 00:34:52 +0300 Subject: limit_req_zone In-Reply-To: References: Message-ID: You can see in source/auto/options by default HTTP_LIMIT_CONN=YES HTTP_LIMIT_REQ=YES 2016-05-12 0:12 GMT+03:00 tom.b : > Greetings fellow nginx users, > > Is the limit_req_zone module included in the core version ? I't doesn't > show > up when I list the modules via nginx -V although when I add the directive > into my nginx configuration it appears works fine and limit requests > > I am using 1.8.1-1.el6.ngx installed from official nginx repositories > http://nginx.org/packages/centos/6/$basearch/ and was surprised that the > directive works fine with out needing a custom build. > > Is the limit_req_zone directive officially part of nginx core functionality > ? > > Cheers, > > Tom. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,266845,266845#msg-266845 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Wed May 11 21:36:53 2016 From: ahall at autodist.com (Alex Hall) Date: Wed, 11 May 2016 17:36:53 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: The output of sudo ps aux | grep php admin 16599 0.0 0.0 12728 2164 pts/0 S+ 21:34 0:00 grep php root 18852 0.0 0.6 295568 24484 ? Ss May09 0:08 php-fpm: master process (/etc/php5/fpm/php-fpm.conf) www-data 18857 0.0 0.7 300736 29408 ? S May09 0:00 php-fpm: pool www www-data 18858 0.0 0.7 300732 29380 ? 
S May09 0:00 php-fpm: pool www I have no idea what any of that means. :) As mentioned, www-data is the Nginx user, and I'm logged in as admin over SSH. On Wed, May 11, 2016 at 5:31 PM, Jim Ohlstein wrote: > Hello, > > On May 11, 2016, at 5:19 PM, Alex Hall wrote: > > > > On Wed, May 11, 2016 at 5:04 PM, Jim Ohlstein wrote: > >> Hello, >> >> On May 11, 2016, at 4:54 PM, Alex Hall wrote: >> >> >> >> On Wed, May 11, 2016 at 4:50 PM, Jim Ohlstein wrote: >> >>> Hello, >>> >>> > Is it possible that your script is trying to write a configuration >> file and lacks proper > permission in that directory? >> >> Yes, very possible, and I'd even say likely. The thing is, I can't find >> out what the directory is. I've given permission to the entire folder: >> chown www-data /var/www/osticket >> chmod -R 777 /var/www/osticket >> >> But that doesn't seem to help. I'm new to Linux, though, so I may have >> missed something. I can't imagine where else it would be trying to write to. >> >> >> Is that the php-fpm user? If so, it's probably not the problem. >> > > I'm not sure. I don't find 'user' anywhere in php-fpm.conf, so I'm not > sure where that gets set. www-data is the user for Nginx and UWSGI, though, > not that UWSGI has anything to do here. > > > What's the output of > # ps aux | grep php > > Jim > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Wed May 11 21:39:45 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Thu, 12 May 2016 00:39:45 +0300 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: Configuration of pool in /etc/php5/fpm/pool.d/ in Ubuntu/Debian. 
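A minimal sketch of how to confirm which user those pool files set. A temp file stands in for a real pool file here so the commands run anywhere; on the actual box you would grep the real directory instead:

```shell
# Sketch: php-fpm pool files declare the worker user/group; grep finds them.
# A temp file stands in for /etc/php5/fpm/pool.d/www.conf (Debian/Ubuntu layout).
pool=$(mktemp)
printf 'user = www-data\ngroup = www-data\n' > "$pool"
grep '^user' "$pool"   # → user = www-data
# On the real box: grep -r '^user' /etc/php5/fpm/pool.d/
rm "$pool"
```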
2016-05-12 0:36 GMT+03:00 Alex Hall : > The output of > sudo ps aux | grep php > > admin 16599 0.0 0.0 12728 2164 pts/0 S+ 21:34 0:00 grep php > root 18852 0.0 0.6 295568 24484 ? Ss May09 0:08 php-fpm: > master process (/etc/php5/fpm/php-fpm.conf) > www-data 18857 0.0 0.7 300736 29408 ? S May09 0:00 php-fpm: > pool www > www-data 18858 0.0 0.7 300732 29380 ? S May09 0:00 php-fpm: > pool www > > I have no idea what any of that means. :) As mentioned, www-data is the > Nginx user, and I'm logged in as admin over SSH. > > > On Wed, May 11, 2016 at 5:31 PM, Jim Ohlstein wrote: > >> Hello, >> >> On May 11, 2016, at 5:19 PM, Alex Hall wrote: >> >> >> >> On Wed, May 11, 2016 at 5:04 PM, Jim Ohlstein wrote: >> >>> Hello, >>> >>> On May 11, 2016, at 4:54 PM, Alex Hall wrote: >>> >>> >>> >>> On Wed, May 11, 2016 at 4:50 PM, Jim Ohlstein wrote: >>> >>>> Hello, >>>> >>>> > Is it possible that your script is trying to write a configuration >>> file and lacks proper > permission in that directory? >>> >>> Yes, very possible, and I'd even say likely. The thing is, I can't find >>> out what the directory is. I've given permission to the entire folder: >>> chown www-data /var/www/osticket >>> chmod -R 777 /var/www/osticket >>> >>> But that doesn't seem to help. I'm new to Linux, though, so I may have >>> missed something. I can't imagine where else it would be trying to write to. >>> >>> >>> Is that the php-fpm user? If so, it's probably not the problem. >>> >> >> I'm not sure. I don't find 'user' anywhere in php-fpm.conf, so I'm not >> sure where that gets set. www-data is the user for Nginx and UWSGI, though, >> not that UWSGI has anything to do here. 
>> >> >> What's the output of >> # ps aux | grep php >> >> Jim >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From jim at ohlste.in Wed May 11 21:50:08 2016 From: jim at ohlste.in (Jim Ohlstein) Date: Wed, 11 May 2016 17:50:08 -0400 Subject: Anyone running OSTicket with Nginx? In-Reply-To: References: Message-ID: <023A0101-95E4-45E5-8134-2CEFED52F0A4@ohlste.in> Hello, > On May 11, 2016, at 5:36 PM, Alex Hall wrote: > > The output of > sudo ps aux | grep php > > admin 16599 0.0 0.0 12728 2164 pts/0 S+ 21:34 0:00 grep php > root 18852 0.0 0.6 295568 24484 ? Ss May09 0:08 php-fpm: master process (/etc/php5/fpm/php-fpm.conf) > www-data 18857 0.0 0.7 300736 29408 ? S May09 0:00 php-fpm: pool www > www-data 18858 0.0 0.7 300732 29380 ? S May09 0:00 php-fpm: pool www > > I have no idea what any of that means. :) As mentioned, www-data is the Nginx user, and I'm logged in as admin over SSH. It means, among other things, that www-data is the php-fpm user. Both php-fpm and nginx should be able to write to any writable directory owned by that user. Jim From nginx-forum at forum.nginx.org Wed May 11 22:06:48 2016 From: nginx-forum at forum.nginx.org (tom.b) Date: Wed, 11 May 2016 18:06:48 -0400 Subject: limit_req_zone In-Reply-To: References: Message-ID: <93058d048648cf36418f4039e1764154.NginxMailingListEnglish@forum.nginx.org> Thank you, Robert. Cheers, Tom.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266845,266853#msg-266853 From nginx-forum at forum.nginx.org Wed May 11 22:07:17 2016 From: nginx-forum at forum.nginx.org (tom.b) Date: Wed, 11 May 2016 18:07:17 -0400 Subject: limit_req_zone In-Reply-To: References: Message-ID: <4e477a2ead8032cf1d49a09fb2554ef5.NginxMailingListEnglish@forum.nginx.org> Thank you, Yuriy. Cheers, Tom. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266845,266854#msg-266854 From francis at daoine.org Wed May 11 22:28:10 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 11 May 2016 23:28:10 +0100 Subject: NGINX: Set/use requests with fully qualified domain name (= FQDN). In-Reply-To: <25528a8fcd7a156c10cc924119a25c0c.NginxMailingListEnglish@forum.nginx.org> References: <25528a8fcd7a156c10cc924119a25c0c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160511222810.GQ9435@daoine.org> On Wed, May 11, 2016 at 07:24:09AM -0400, Sportmodus wrote: Hi there, > Now I want to proxy_pass an external server and I have to use our proxy for > this to get an external connection. Stock nginx does not speak proxy-http to a proxy server. > Can I modify this to > > "GET http://external.com:80/uri HTTP/1.1 > HOST: external.com" > > ? Only if you arrange for the code to do it to be written. If you choose to write it fully, it might be welcomed into stock nginx. > Problem is: Our proxy denies requests (error 400 bad request) without FQDN > (= fully qualified domain name). Your proxy is correct to do so. It just expects a protocol that nginx does not speak. > Any help or workaround is welcome. Use something that can be a proxy client, and use it instead of (or as well as) nginx. Or, since nginx is a reverse proxy, and you only reverse proxy for things you control, change your policy so that nginx can talk to your external web server without going through the proxy. I don't think that there is a "good" answer.
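The request-line difference being described can be sketched without any network at all; both printf lines below are illustrations of the wire format, not something nginx can currently be configured to emit:

```shell
# Proxy form: a forward-proxy client puts the absolute URI in the request line.
printf 'GET http://external.com:80/uri HTTP/1.1\r\nHost: external.com\r\n\r\n'
# Origin form: what nginx's proxy_pass actually sends; a strict forward proxy
# rejects this with 400 because there is no FQDN in the request line.
printf 'GET /uri HTTP/1.1\r\nHost: external.com\r\n\r\n'
```

A client that does speak the proxy form, for comparison, is curl with -x, e.g. `curl -x http://proxyhost:3128 http://external.com/uri` (proxy host and port assumed).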
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu May 12 09:36:33 2016 From: nginx-forum at forum.nginx.org (RT.Nat) Date: Thu, 12 May 2016 05:36:33 -0400 Subject: DNS Caching Issue For community version Message-ID: <7676d5ac029d766e6bf418bf74ad3f3f.NginxMailingListEnglish@forum.nginx.org> Troubled up in DNS caching of the IP address for a given DNS name. Adding the solution given below does not solve the problem of DNS caching in NGINX. resolver 8.8.8.8; set $upstream_endpoint https://example.net:8080; location / { proxy_pass $upstream_endpoint; } Are there any solutions? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266857#msg-266857 From unixro at gmail.com Thu May 12 09:51:36 2016 From: unixro at gmail.com (Mihai Vintila) Date: Thu, 12 May 2016 12:51:36 +0300 Subject: DNS Caching Issue For community version In-Reply-To: <7676d5ac029d766e6bf418bf74ad3f3f.NginxMailingListEnglish@forum.nginx.org> References: <7676d5ac029d766e6bf418bf74ad3f3f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <62a62ca7-39d6-ecb7-87a3-fd63030b3a88@gmail.com> From Manual: By default, nginx caches answers using the TTL value of a response. An optional valid parameter allows overriding it: resolver 127.0.0.1 [::1]:5353 valid=30s; Before version 1.1.9, tuning of caching time was not possible, and nginx always cached answers for the duration of 5 minutes. Best regards, Vintila Mihai Alexandru On 5/12/2016 12:36 PM, RT.Nat wrote: > Troubled up in DNS caching of the IP address for a given DNS name. > > Adding the solution given below does not solve the problem of DNS caching in > NGINX. > > resolver 8.8.8.8; > set $upstream_endpoint https://example.net:8080; > location / { > proxy_pass $upstream_endpoint; > } > > Are there any solutions?
> > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266857#msg-266857 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu May 12 10:01:41 2016 From: nginx-forum at forum.nginx.org (RT.Nat) Date: Thu, 12 May 2016 06:01:41 -0400 Subject: DNS Caching Issue For community version In-Reply-To: <62a62ca7-39d6-ecb7-87a3-fd63030b3a88@gmail.com> References: <62a62ca7-39d6-ecb7-87a3-fd63030b3a88@gmail.com> Message-ID: Even adding the valid parameter the issue was not solved. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266859#msg-266859 From luky-37 at hotmail.com Thu May 12 10:04:24 2016 From: luky-37 at hotmail.com (Lukas Tribus) Date: Thu, 12 May 2016 12:04:24 +0200 Subject: DNS Caching Issue For community version In-Reply-To: References: <62a62ca7-39d6-ecb7-87a3-fd63030b3a88@gmail.com>, Message-ID: > Even adding the valid parameter the issue was not solved. And what is the issue actually? Just saying "DNS caching issue" and "problem" isn't really helpful. From nginx-forum at forum.nginx.org Thu May 12 10:11:36 2016 From: nginx-forum at forum.nginx.org (RT.Nat) Date: Thu, 12 May 2016 06:11:36 -0400 Subject: DNS Caching Issue For community version In-Reply-To: References: Message-ID: <893765ff38700b55744c2821f04511fb.NginxMailingListEnglish@forum.nginx.org> I wanted check whether the resolver solves the DNS in a dynamic manner when the ip addresses changes. So I add the given code after several findings and yet the resolving the ip address is not happening. server{ ........... 
resolver 8.8.8.8 valid=30s; resolver_timeout 10s; set $upstream "example.net"; location / { rewrite ^/(.*) /$1 break; proxy_pass https://$upstream:3000; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266862#msg-266862 From unixro at gmail.com Thu May 12 10:49:46 2016 From: unixro at gmail.com (Mihai Vintila) Date: Thu, 12 May 2016 13:49:46 +0300 Subject: DNS Caching Issue For community version In-Reply-To: <893765ff38700b55744c2821f04511fb.NginxMailingListEnglish@forum.nginx.org> References: <893765ff38700b55744c2821f04511fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <2ff6e4a6-9374-3b1f-d9b6-f42021f54977@gmail.com> Have you also checked that the DNS returns the correct value? As the valid option means that nginx will ask the DNS server again, but if the DNS replies with same old ip.. Also you might check using directly the hostname as it might be possible that there is a bug when using variables. Best regards, Vintila Mihai Alexandru On 5/12/2016 1:11 PM, RT.Nat wrote: > I wanted check whether the resolver solves the DNS in a dynamic manner when > the ip addresses changes. > > So I add the given code after several findings and yet the resolving the ip > address is not happening. > > server{ > ........... > > resolver 8.8.8.8 valid=30s; > resolver_timeout 10s; > set $upstream "example.net"; > > location / { > rewrite ^/(.*) /$1 break; > proxy_pass https://$upstream:3000; > > > } > > } > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266862#msg-266862 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
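[Editor's note] For reference, the mechanism this thread is circling around can be sketched as below; the resolver address, TTL, and hostname simply mirror the examples already posted. The key point (which Maxim Dounin also makes later in the thread) is that a variable in proxy_pass is what forces nginx to consult the resolver at request time rather than resolving the name once when the configuration is loaded:

```nginx
# Sketch only -- values mirror the examples in this thread.
resolver 8.8.8.8 valid=30s;   # override the record's TTL: re-query every 30s
resolver_timeout 10s;

location / {
    # A literal hostname in proxy_pass is resolved once, at configuration
    # load time. Using a variable forces a runtime lookup via "resolver",
    # so a changed DNS record is picked up within the "valid" window.
    set $backend "example.net";
    proxy_pass https://$backend:8080;
}
```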
URL: From mdounin at mdounin.ru Thu May 12 11:36:36 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 12 May 2016 14:36:36 +0300 Subject: DNS Caching Issue For community version In-Reply-To: <893765ff38700b55744c2821f04511fb.NginxMailingListEnglish@forum.nginx.org> References: <893765ff38700b55744c2821f04511fb.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160512113636.GR36620@mdounin.ru> Hello! On Thu, May 12, 2016 at 06:11:36AM -0400, RT.Nat wrote: > I wanted check whether the resolver solves the DNS in a dynamic manner when > the ip addresses changes. > > So I add the given code after several findings and yet the resolving the ip > address is not happening. > > server{ > ........... > > resolver 8.8.8.8 valid=30s; > resolver_timeout 10s; > set $upstream "example.net"; > > location / { > rewrite ^/(.*) /$1 break; > proxy_pass https://$upstream:3000; > > > } > > } Note if you have upstream "example.net" defined elsewhere, nginx will use it, and dynamic DNS resolution will not be used. Note well that upstream may be defined either explicitly, using upstream example.net { ... } or implicitly, using a proxy_pass to the given name: proxy_pass http://example.net; The latter sometimes confuses people trying to use proxy_pass with variables to trigger dynamic DNS resolution of upstream names. -- Maxim Dounin http://nginx.org/ From ahall at autodist.com Thu May 12 21:34:09 2016 From: ahall at autodist.com (Alex Hall) Date: Thu, 12 May 2016 17:34:09 -0400 Subject: Serving website with Apache, with Nginx as interface? Message-ID: Hello all, Here's what I'm trying to do. I have two sites, sd1.mysite.com and sd2.mysite.com. The fun part is that sd1 is a Flask app, served by Nginx. However, sd2 is OSTicket, which must be served by Apache, it seems. Of course, Apache and Nginx can't listen to port 80 at the same time, and as this is a subdomain on a local, Windows DNS, I can't make sd2.mysite.com point to myip:8080 or anything like that. 
Thus, my best option appears to be this: Nginx listens to all incoming traffic on 80. If the request is for anything to do with sd1, it handles it just like it does now. However, if the request is for sd2, Nginx somehow hands off the request to Apache, then returns what Apache gives it back to the user. I've heard that people use Apache and Nginx together, but I haven't found anyone who uses them to serve two subdomains, with Nginx as the "gateway" and handler of one subdomain, and Apache as the handler for the other subdomain. Is there any way to do this? Am I even making sense? Thanks for any ideas anyone has. -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sven at elite12.de Thu May 12 21:45:32 2016 From: sven at elite12.de (Sven Kirschbaum) Date: Thu, 12 May 2016 23:45:32 +0200 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: References: Message-ID: Hello, let Apache listen on a different port and create two server blocks in nginx, one for each site. Then configure nginx to proxy the requests to sd2 to apache. Just google "nginx reverse proxy" and you will find much information. Yours sincerely Sven Kirschbaum 2016-05-12 23:34 GMT+02:00 Alex Hall : > Hello all, > Here's what I'm trying to do. I have two sites, sd1.mysite.com and > sd2.mysite.com. The fun part is that sd1 is a Flask app, served by Nginx. > However, sd2 is OSTicket, which must be served by Apache, it seems. Of > course, Apache and Nginx can't listen to port 80 at the same time, and as > this is a subdomain on a local, Windows DNS, I can't make sd2.mysite.com > point to myip:8080 or anything like that. > > Thus, my best option appears to be this: Nginx listens to all incoming > traffic on 80. If the request is for anything to do with sd1, it handles it > just like it does now. 
However, if the request is for sd2, Nginx somehow > hands off the request to Apache, then returns what Apache gives it back to > the user. > > I've heard that people use Apache and Nginx together, but I haven't found > anyone who uses them to serve two subdomains, with Nginx as the "gateway" > and handler of one subdomain, and Apache as the handler for the other > subdomain. Is there any way to do this? Am I even making sense? Thanks for > any ideas anyone has. > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Thu May 12 21:56:33 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Fri, 13 May 2016 00:56:33 +0300 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: References: Message-ID: Hi, you can use vhost in Apache and configure proxy_pass in nginx configuration For apache2 somthing like that ServerName foo.bar DocumentRoot /home/sites/ Order deny,allow Allow from all ErrorLog /home/sites/logs/apache_error.log CustomLog /home/sites/logs/apache_access.log combined etc....... For nginx server { listen 80; server_name foo.bar; access_log /home/sites/logs/nginx_access.log; error_log /home/sites/logs/nginx_error.log; location / { proxy_pass http://backend1; etc..... } } 2016-05-13 0:34 GMT+03:00 Alex Hall : > Hello all, > Here's what I'm trying to do. I have two sites, sd1.mysite.com and > sd2.mysite.com. The fun part is that sd1 is a Flask app, served by Nginx. > However, sd2 is OSTicket, which must be served by Apache, it seems. Of > course, Apache and Nginx can't listen to port 80 at the same time, and as > this is a subdomain on a local, Windows DNS, I can't make sd2.mysite.com > point to myip:8080 or anything like that. 
> > Thus, my best option appears to be this: Nginx listens to all incoming > traffic on 80. If the request is for anything to do with sd1, it handles it > just like it does now. However, if the request is for sd2, Nginx somehow > hands off the request to Apache, then returns what Apache gives it back to > the user. > > I've heard that people use Apache and Nginx together, but I haven't found > anyone who uses them to serve two subdomains, with Nginx as the "gateway" > and handler of one subdomain, and Apache as the handler for the other > subdomain. Is there any way to do this? Am I even making sense? Thanks for > any ideas anyone has. > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Thu May 12 22:59:16 2016 From: ahall at autodist.com (Alex Hall) Date: Thu, 12 May 2016 18:59:16 -0400 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: References: Message-ID: Thanks! I followed you, until the proxy_pass. What is backend1, and where is it defined? I know it's something you made up, but how does it know about Apache, or Apache about it? > On May 12, 2016, at 17:56, Yuriy Medvedev wrote: > > Hi, you can use vhost in Apache and configure proxy_pass in nginx configuration > For apache2 somthing like that > > ServerName foo.bar > > DocumentRoot /home/sites/ > > Order deny,allow > Allow from all > > > ErrorLog /home/sites/logs/apache_error.log > CustomLog /home/sites/logs/apache_access.log combined > > etc....... > > For nginx > server { > listen 80; > server_name foo.bar; > > access_log /home/sites/logs/nginx_access.log; > error_log /home/sites/logs/nginx_error.log; > > location / { > proxy_pass http://backend1 ; > etc..... 
> } > } > > 2016-05-13 0:34 GMT+03:00 Alex Hall >: > Hello all, > Here's what I'm trying to do. I have two sites, sd1.mysite.com and sd2.mysite.com . The fun part is that sd1 is a Flask app, served by Nginx. However, sd2 is OSTicket, which must be served by Apache, it seems. Of course, Apache and Nginx can't listen to port 80 at the same time, and as this is a subdomain on a local, Windows DNS, I can't make sd2.mysite.com point to myip:8080 or anything like that. > > Thus, my best option appears to be this: Nginx listens to all incoming traffic on 80. If the request is for anything to do with sd1, it handles it just like it does now. However, if the request is for sd2, Nginx somehow hands off the request to Apache, then returns what Apache gives it back to the user. > > I've heard that people use Apache and Nginx together, but I haven't found anyone who uses them to serve two subdomains, with Nginx as the "gateway" and handler of one subdomain, and Apache as the handler for the other subdomain. Is there any way to do this? Am I even making sense? Thanks for any ideas anyone has. > > -- > Alex Hall > Automatic Distributors, IT department > ahall at autodist.com > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From medvedev.yp at gmail.com Fri May 13 05:20:36 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Fri, 13 May 2016 08:20:36 +0300 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: References: Message-ID: Sorry, backend1 its upstream in nginx configuration upstream backend1 { #ip of Apache back-end server 192.168.0.1:8080; } 2016-05-13 1:59 GMT+03:00 Alex Hall : > Thanks! I followed you, until the proxy_pass. 
What is backend1, and where > is it defined? I know it's something you made up, but how does it know > about Apache, or Apache about it? > > On May 12, 2016, at 17:56, Yuriy Medvedev wrote: > > Hi, you can use vhost in Apache and configure proxy_pass in nginx > configuration > For apache2 somthing like that > > ServerName foo.bar > > DocumentRoot /home/sites/ > > Order deny,allow > Allow from all > > > ErrorLog /home/sites/logs/apache_error.log > CustomLog /home/sites/logs/apache_access.log combined > > etc....... > > For nginx > server { > listen 80; > server_name foo.bar; > > access_log /home/sites/logs/nginx_access.log; > error_log /home/sites/logs/nginx_error.log; > > location / { > proxy_pass http://backend1; > etc..... > } > } > > 2016-05-13 0:34 GMT+03:00 Alex Hall : > >> Hello all, >> Here's what I'm trying to do. I have two sites, sd1.mysite.com and >> sd2.mysite.com. The fun part is that sd1 is a Flask app, served by >> Nginx. However, sd2 is OSTicket, which must be served by Apache, it seems. >> Of course, Apache and Nginx can't listen to port 80 at the same time, and >> as this is a subdomain on a local, Windows DNS, I can't make >> sd2.mysite.com point to myip:8080 or anything like that. >> >> Thus, my best option appears to be this: Nginx listens to all incoming >> traffic on 80. If the request is for anything to do with sd1, it handles it >> just like it does now. However, if the request is for sd2, Nginx somehow >> hands off the request to Apache, then returns what Apache gives it back to >> the user. >> >> I've heard that people use Apache and Nginx together, but I haven't found >> anyone who uses them to serve two subdomains, with Nginx as the "gateway" >> and handler of one subdomain, and Apache as the handler for the other >> subdomain. Is there any way to do this? Am I even making sense? Thanks for >> any ideas anyone has. 
>> >> -- >> Alex Hall >> Automatic Distributors, IT department >> ahall at autodist.com >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri May 13 06:13:05 2016 From: nginx-forum at forum.nginx.org (RT.Nat) Date: Fri, 13 May 2016 02:13:05 -0400 Subject: DNS Caching Issue For community version In-Reply-To: <2ff6e4a6-9374-3b1f-d9b6-f42021f54977@gmail.com> References: <2ff6e4a6-9374-3b1f-d9b6-f42021f54977@gmail.com> Message-ID: Not clear regarding, " Also you might check using directly the hostname as it might be possible that there is a bug when using variables." I tried adding the variable for resolving the DNS, but the IP address still does not change. Is there any other way, or is there a bug in my script? resolver 8.8.8.8 valid=30s; resolver_timeout 10s; set $checkup "example.net"; location / { rewrite ^/(.*) /$1 break; proxy_pass https://$checkup:8080; } Even adding the following lines instead of the above proxy_pass doesn't work. proxy_pass https://example.net:8080$request_uri; proxy_pass https://example.net:8080; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266880#msg-266880 From i at trushkin.ru Fri May 13 08:28:27 2016 From: i at trushkin.ru (Lali Avlokhova) Date: Fri, 13 May 2016 11:28:27 +0300 Subject: Fw: new message Message-ID: <00000fc3e327$97cc104e$906b3e50$@trushkin.ru> Hello! You have a new message, please read Lali Avlokhova -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Fri May 13 08:30:03 2016 From: nginx-forum at forum.nginx.org (Sportmodus) Date: Fri, 13 May 2016 04:30:03 -0400 Subject: NGINX: Set/use requests with fully qualified domain name (= FQDN). In-Reply-To: <20160511222810.GQ9435@daoine.org> References: <20160511222810.GQ9435@daoine.org> Message-ID: <4f134ac6a755466ab4cf1dc187f93e02.NginxMailingListEnglish@forum.nginx.org> Thanks. I used Apache (behind NGINX) with ProxyRemote * http://proxy:8080 in conjunction with ProxyPass. Works! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266802,266883#msg-266883 From nginx-forum at forum.nginx.org Fri May 13 12:47:12 2016 From: nginx-forum at forum.nginx.org (duanhonghui) Date: Fri, 13 May 2016 08:47:12 -0400 Subject: nginx doesn't seem to log all accesses (some uwsgi accesses are missing) In-Reply-To: References: Message-ID: Hi, did you get an answer? I have the same problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,221737,266884#msg-266884 From ahall at autodist.com Fri May 13 15:31:58 2016 From: ahall at autodist.com (Alex Hall) Date: Fri, 13 May 2016 11:31:58 -0400 Subject: Using proxy_redirect correctly Message-ID: Hi all, My proxy seems to be working, at least so far. I browse to sd2.mysite.com and find the OSTicket installer. Since I've removed the OST site from Nginx, I know Apache is kicking in like it should be. The only problem now is that OST thinks its location is 127.0.0.1:8080, which is the internal redirection but not what I want it to use. I know I have to use proxy_redirect, but I don't quite understand how this directive works. I just tried: proxy_redirect 127.0.0.1:8080 sd2.mysite.com; but that didn't work. I find the documentation I've read thus far to be rather confusing. All I want to do is replace 127.0.0.1:8080 in any URL on this site with sd2.mysite.com. Do I need some kind of regular expression? To move this proxy somewhere else (currently it's in location / context)?
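[Editor's note] As background on the directive being asked about: proxy_redirect rewrites only the Location and Refresh response headers sent by the upstream; it does not touch URLs inside page bodies. A sketch using the names from this thread (the trailing slashes matter, since the replacement works as a prefix match):

```nginx
location / {
    proxy_pass http://127.0.0.1:8080;

    # Rewrites e.g. "Location: http://127.0.0.1:8080/setup/" back to
    # "Location: http://sd2.mysite.com/setup/" in responses.
    proxy_redirect http://127.0.0.1:8080/ http://sd2.mysite.com/;

    # Often the simpler fix: forward the original Host header so the
    # backend builds correct absolute URLs in the first place.
    proxy_set_header Host $host;
}
```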
Should my proxy_pass also be moved to a different context? -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Fri May 13 17:24:57 2016 From: ahall at autodist.com (Alex Hall) Date: Fri, 13 May 2016 13:24:57 -0400 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: References: Message-ID: As a quick update, setting proxy_set_header Host $host; seems to have gotten the URL working, but OSTicket is still giving me the same error I was getting when I was serving it with Nginx directly. Worse, the Apache2 access log shows none of my recent attempts to access the site, though it does show older ones. The Nginx access log shows all the accesses, though. It's as though the proxy weren't working properly at all. I have it set up in a location: upstream apache2Redirect { server 127.0.0.1:8080; } location / { proxy_set_header Host $host; proxy_pass http://apache2Redirect; } My understanding is that the / will match everything, from /index.html to /images/small/235.jpg. Is that not the case? Do I need to do something to my location block, by chance? On Fri, May 13, 2016 at 1:20 AM, Yuriy Medvedev wrote: > Sorry, backend1 its upstream in nginx configuration > upstream backend1 { > #ip of Apache back-end > server 192.168.0.1:8080; > } > > 2016-05-13 1:59 GMT+03:00 Alex Hall : > >> Thanks! I followed you, until the proxy_pass. What is backend1, and where >> is it defined? I know it's something you made up, but how does it know >> about Apache, or Apache about it? 
>> >> On May 12, 2016, at 17:56, Yuriy Medvedev wrote: >> >> Hi, you can use vhost in Apache and configure proxy_pass in nginx >> configuration >> For apache2 somthing like that >> >> ServerName foo.bar >> >> DocumentRoot /home/sites/ >> >> Order deny,allow >> Allow from all >> >> >> ErrorLog /home/sites/logs/apache_error.log >> CustomLog /home/sites/logs/apache_access.log combined >> >> etc....... >> >> For nginx >> server { >> listen 80; >> server_name foo.bar; >> >> access_log /home/sites/logs/nginx_access.log; >> error_log /home/sites/logs/nginx_error.log; >> >> location / { >> proxy_pass http://backend1; >> etc..... >> } >> } >> >> 2016-05-13 0:34 GMT+03:00 Alex Hall : >> >>> Hello all, >>> Here's what I'm trying to do. I have two sites, sd1.mysite.com and >>> sd2.mysite.com. The fun part is that sd1 is a Flask app, served by >>> Nginx. However, sd2 is OSTicket, which must be served by Apache, it seems. >>> Of course, Apache and Nginx can't listen to port 80 at the same time, and >>> as this is a subdomain on a local, Windows DNS, I can't make >>> sd2.mysite.com point to myip:8080 or anything like that. >>> >>> Thus, my best option appears to be this: Nginx listens to all incoming >>> traffic on 80. If the request is for anything to do with sd1, it handles it >>> just like it does now. However, if the request is for sd2, Nginx somehow >>> hands off the request to Apache, then returns what Apache gives it back to >>> the user. >>> >>> I've heard that people use Apache and Nginx together, but I haven't >>> found anyone who uses them to serve two subdomains, with Nginx as the >>> "gateway" and handler of one subdomain, and Apache as the handler for the >>> other subdomain. Is there any way to do this? Am I even making sense? >>> Thanks for any ideas anyone has. 
>>> >>> -- >>> Alex Hall >>> Automatic Distributors, IT department >>> ahall at autodist.com >>> >>> _______________________________________________ >>> nginx mailing list >>> nginx at nginx.org >>> http://mailman.nginx.org/mailman/listinfo/nginx >>> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> >> >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From lists at ruby-forum.com Sat May 14 00:04:39 2016 From: lists at ruby-forum.com (Zin Man) Date: Sat, 14 May 2016 02:04:39 +0200 Subject: nginx reverse proxy with rtmp Message-ID: <75eda2db1a378d7df27c3a497b9b869f@ruby-forum.com> I am trying to use nginx reverse proxy to connect to a nginx rtmp server 2 servers a and b a = nginx with rtmp ( videos loacated here ) b = nginx reverse proxy the reverse proxy is working good with http to stream my videos from server "a" but i am not able to use it after installing rtmp on server "a" this is my old server b config server { listen 80; server_name server_a_name; location / { proxy_pass http://ip of server a/; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; access_log off; log_not_found off; = } } so after installing nginx rtmp on server a i have tried this config on server b rtmp { server { listen 1935; application vod { pull rtmp://ip of server a:1935; } } } also tried pull rtmp://ip of server a:1935/vod/; and this is the config on server a rtmp { server { listen 1935; notify_method get; on_play 
http://127.0.0.1/vod_handler; application vod { play /home/files/public_html/cgi-bin/uploads/; } } } thanks -- Posted via http://www.ruby-forum.com/. From medvedev.yp at gmail.com Sat May 14 08:38:30 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Sat, 14 May 2016 11:38:30 +0300 Subject: nginx reverse proxy with rtmp In-Reply-To: <75eda2db1a378d7df27c3a497b9b869f@ruby-forum.com> References: <75eda2db1a378d7df27c3a497b9b869f@ruby-forum.com> Message-ID: Rtmp it's tcp and you need use nginx as tcp proxy like haproxy 14 ??? 2016 ?. 3:04 ???????????? "Zin Man" ???????: > I am trying to use nginx reverse proxy to connect to a nginx rtmp server > > 2 servers a and b > > a = nginx with rtmp ( videos loacated here ) > > b = nginx reverse proxy > > the reverse proxy is working good with http to stream my videos from > server "a" > > but i am not able to use it after installing rtmp on server "a" > > > this is my old server b config > > server { > listen 80; > server_name server_a_name; > location / { > proxy_pass http://ip of server a/; > > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > access_log off; > log_not_found off; > = } > } > > so after installing nginx rtmp on server a i have tried this config on > server b > > rtmp { > server { > listen 1935; > application vod { > > pull rtmp://ip of server a:1935; > } > } > } > also tried pull rtmp://ip of server a:1935/vod/; > > and this is the config on server a > > rtmp > { > server > { > listen 1935; > notify_method get; > on_play http://127.0.0.1/vod_handler; > application vod > { > play /home/files/public_html/cgi-bin/uploads/; > } > } > } > thanks > > -- > Posted via http://www.ruby-forum.com/. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
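[Editor's note] The "nginx as tcp proxy" suggestion above can be sketched with the stream module (available since nginx 1.9.0 and only if built with --with-stream); the address below is a placeholder for server A, not a value from the thread:

```nginx
# Sketch: server B passes raw RTMP/TCP straight through to server A.
# "stream" is a top-level block, alongside "http".
stream {
    server {
        listen 1935;
        proxy_pass 192.0.2.10:1935;   # placeholder for server A's IP
    }
}
```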
URL: From francis at daoine.org Sat May 14 09:19:40 2016 From: francis at daoine.org (Francis Daly) Date: Sat, 14 May 2016 10:19:40 +0100 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: References: Message-ID: <20160514091940.GA3477@daoine.org> On Fri, May 13, 2016 at 01:24:57PM -0400, Alex Hall wrote: Hi there, > It's as though the proxy weren't working properly at all. > I have it set up in a location: > > upstream apache2Redirect { > server 127.0.0.1:8080; > } > > location / { > proxy_set_header Host $host; > proxy_pass http://apache2Redirect; > } > > My understanding is that the / will match everything, from /index.html to > /images/small/235.jpg. Is that not the case? Do I need to do something to > my location block, by chance? If the "location" you show above is the entire content of your server{} block, then all requests that get to nginx should be handled in it. If you have more config that you are not showing, then possibly that extra config is interfering with what you want to do. The best chance of someone being able to help is if you can include very specific details about what you do, what you see, and what you expect to see instead. If you use the "curl" command-line tool instead of a normal browser, you can make one request and see the full response. If you know what response you expect, you can compare it to the response that you actually get. curl -v http://nginx-server/OSTicket/ (or whatever url you have set things up at). Without knowing what you do want to see, I'm pretty sure that you do not want to see "127.0.0.1" or "8080" anywhere in the response. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sat May 14 09:55:02 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Sat, 14 May 2016 05:55:02 -0400 Subject: Serving website with Apache, with Nginx as interface?
In-Reply-To: References: Message-ID: Alex Hall Wrote: ------------------------------------------------------- > upstream apache2Redirect { > server 127.0.0.1:8080; > } > > location / { > proxy_set_header Host $host; > proxy_pass http://apache2Redirect; > } Browser(get localhost:8080) -> osticket (return responses with localhost:8080) Browser(get localhost:80) -> nginx proxy_pass localhost:8080 -> osticket (return responses with localhost:8080) Browser attempts 8080 which is not the nginx proxy. I.e. sometimes you need to tell the backend (osticket) what its frontend address/port is. This is a typical issue with tomcat applications, where you have to tell it that it is running on port 443 even though it is listening on port 8443, because nginx is sitting in between, handling port 443 and proxying to 8443. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266874,266895#msg-266895 From ahall at autodist.com Sat May 14 11:32:38 2016 From: ahall at autodist.com (Alex Hall) Date: Sat, 14 May 2016 07:32:38 -0400 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: <20160514091940.GA3477@daoine.org> References: <20160514091940.GA3477@daoine.org> Message-ID: <72CA41F5-1075-4472-B906-4F8007484DED@autodist.com> > On May 14, 2016, at 05:19, Francis Daly wrote: > > On Fri, May 13, 2016 at 01:24:57PM -0400, Alex Hall wrote: > > Hi there, > >> It's as though the proxy weren't working properly at all. >> I have it set up in a location: >> >> upstream apache2Redirect { >> server 127.0.0.1:8080; >> } >> >> location / { >> proxy_set_header Host $host; >> proxy_pass http://apache2Redirect; >> } >> >> My understanding is that the / will match everything, from /index.html to >> /images/small/235.jpg. Is that not the case? Do I need to do something to >> my location block, by chance? > > If the "location" you show above is the entire content of your server{} > block, then all requests that get to nginx should be handled in it.
> > If you have more config that you are not showing, then possibly that > extra config is interfering with what you want to do. > Sorry I should have said. Yes, that's all there is to my config file. I wanted every request to go to Apache, including any subdirectories. > > The best chance of someone being able to help, is if you can include > very specific details about what you do, what you see, and what you > expect to see instead. The problem is that the error I'm seeing is in OSTicket. All I can say is that the OST forums aren't any help, that I don't see the error on Apache under Windows, and that I do see it under this configuration. It's the exact same error I saw when serving OST with Nginx directly, which is why I think the proxy isn't working correctly. Plus, I don't see the access to the OST pages in the Apache access log after 11:14, despite trying it all day yesterday. Nginx registers them, but not Apache. Yet, if I stop Apache, I get a 502 when trying to pull up OST. > > If you use the "curl" command-line tool instead of a normal browser, you > can make one request and see the full response. If you know what response > you expect, you can compare it to the response that you actually get. > > > curl -v http://ngninx-server/OSTicket/ > > (or whatever url you have set things up at). > > Without knowing what you do want to see, I'm pretty sure that you do > not want to see "127.0.0.1" or "8080" anywhere in the response. Curl is a good idea. I'll try that Monday when I'm back in the office (this is an intranet site, so I can't test it from home, though I can ssh into the server). 
>
> Good luck with it,
>
> f
> --
> Francis Daly francis at daoine.org
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

From lists at ruby-forum.com Sat May 14 15:22:53 2016
From: lists at ruby-forum.com (Zin Man)
Date: Sat, 14 May 2016 17:22:53 +0200
Subject: nginx reverse proxy with rtmp
In-Reply-To: References: <75eda2db1a378d7df27c3a497b9b869f@ruby-forum.com>
Message-ID: 

Yuriy Medvedev wrote in post #1183456:
> Rtmp it's tcp and you need use nginx as tcp proxy like haproxy
> On 14 May 2016 at 3:04, "Zin Man" wrote:

can you please give me an example config

-- 
Posted via http://www.ruby-forum.com/.

From nginx-forum at forum.nginx.org Sun May 15 19:47:37 2016
From: nginx-forum at forum.nginx.org (tom.b)
Date: Sun, 15 May 2016 15:47:37 -0400
Subject: limit_req_zone values
Message-ID: <2df8555b4f3b169eed3d887297d0ae3b.NginxMailingListEnglish@forum.nginx.org>

Greetings,

I have limit_req_zone working great in a testing environment with the
following values:

limit_req_zone $server_name zone=perserver:10m rate=5r/s;
limit_req zone=perserver burst=5;

The site has up to 750 simultaneous users at peak times. What values are
suitable for such traffic levels, and what values are people using with
similar traffic levels?

Cheers,

Tom.
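Tom's limit_req_zone above is keyed on $server_name, which gives one shared counter for the entire server block. A common alternative for sites with many simultaneous users is to key the zone on the client address instead, so each visitor is limited independently. A minimal sketch (the zone name, rate, and burst values here are illustrative, not recommendations from this thread):

```nginx
http {
    # One token bucket per client IP address; a 10m shared-memory zone
    # holds on the order of 160,000 addresses.
    limit_req_zone $binary_remote_addr zone=perip:10m rate=10r/s;

    server {
        location / {
            # Permit short bursts; requests beyond the burst are rejected
            # (with status 503 by default).
            limit_req zone=perip burst=20 nodelay;
        }
    }
}
```

With a per-IP key, 750 simultaneous users do not share one 5r/s budget, so the suitable rate depends on how many requests a single user generates rather than on total site traffic.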
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266908,266908#msg-266908

From daniel.biazus at azion.com Sun May 15 21:01:55 2016
From: daniel.biazus at azion.com (Daniel Biazus)
Date: Sun, 15 May 2016 18:01:55 -0300
Subject: DNS Caching Issue For community version
In-Reply-To: References: <2ff6e4a6-9374-3b1f-d9b6-f42021f54977@gmail.com>
Message-ID: 

Maybe you should try this module:
https://github.com/GUI/nginx-upstream-dynamic-servers

Regards,

Biazus

On Fri, May 13, 2016 at 3:13 AM, RT.Nat wrote:

> Not clear regarding, " Also you might check using directly the hostname as
> it might be possible that there is a bug when using variables."
>
> I tried adding the variable for resolving the dns but still the ip address
> does not changes. Is there any other way? or is there any bug in my script.
>
> resolver 8.8.8.8 valid=30s;
> resolver_timeout 10s;
> set $checkup "example.net";
>
> location / {
> rewrite ^/(.*) /$1 break;
> proxy_pass https://$checkup:8080;
> }
>
> even adding the following codes instead of the above proxy pass doesnt
> work.
>
> proxy_pass https://example.net:8080$request_uri;
> proxy_pass https://example.net:8080;
>
> Posted at Nginx Forum:
> https://forum.nginx.org/read.php?2,266857,266880#msg-266880
>
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
>

-- 
Daniel Biazus, R&D
AZION | Deliver. Accelerate. Protect.
Office: +55 51 3012 3005 | Mobile: +55 51 8227 9032

Any information in this e-mail and attachments may be confidential and
privileged, protected by legal confidentiality. The use of this document
requires authorization by the issuer, subject to penalties.
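For reference, the module Daniel links above extends the upstream `server` directive with a `resolve` parameter so the hostname is re-resolved when its DNS TTL expires. A hedged sketch of how such a config might look (the hostname and port are placeholders, and the `resolve` syntax comes from that third-party module — stock nginx of that era would reject it):

```nginx
http {
    # The module uses the standard resolver directive for its lookups.
    resolver 8.8.8.8;

    upstream backend {
        # 'resolve' is added by nginx-upstream-dynamic-servers;
        # it re-resolves example.net whenever the DNS record's TTL expires.
        server example.net:8080 resolve;
    }

    server {
        location / {
            proxy_pass https://backend;
        }
    }
}
```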
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Mon May 16 04:27:58 2016
From: nginx-forum at forum.nginx.org (RT.Nat)
Date: Mon, 16 May 2016 00:27:58 -0400
Subject: DNS Caching Issue For community version
In-Reply-To: References: Message-ID: <54d99f45ca05508699287280d4f91d20.NginxMailingListEnglish@forum.nginx.org>

I cannot install the above module.

/Downloads/nginx-upstream-dynamic-servers-master$ make install
make: *** No rule to make target `install'. Stop.

May I know if there is any problem on my machine (any required prerequisites)?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266857,266913#msg-266913

From nginx-forum at forum.nginx.org Mon May 16 04:50:47 2016
From: nginx-forum at forum.nginx.org (domsday2016)
Date: Mon, 16 May 2016 00:50:47 -0400
Subject: ngx_http_mp4_module. Help me !!!!!!!!!!
Message-ID: <82a70ec2792df89e86346f36a22cf418.NginxMailingListEnglish@forum.nginx.org>

I compiled nginx for streaming videos on my web site with these options:

nginx version: nginx/1.9.15
built by gcc 4.4.7 20120313 (Red Hat 4.4.7-16) (GCC)
built with OpenSSL 1.0.1e-fips 11 Feb 2013
TLS SNI support enabled
configure arguments: --user=root --group=root --prefix=/usr/local/nginx
--sbin-path=/usr/local/nginx/nginx --conf-path=/usr/local/nginx/nginx.conf
--error-log-path=/usr/local/nginx/logs/error.log
--http-log-path=/usr/local/nginx/logs/access.log
--http-client-body-temp-path=/usr/local/nginx/tmp/client_body
--http-proxy-temp-path=/usr/local/nginx/tmp/proxy
--http-fastcgi-temp-path=/usr/local/nginx/tmp/fastcgi
--pid-path=/usr/local/nginx/nginx.pid --lock-path=/var/lock/subsys/nginx
--with-http_ssl_module --with-http_realip_module
--with-http_addition_module --with-http_sub_module --with-http_dav_module
--with-http_flv_module --with-http_mp4_module
--with-http_gzip_static_module --with-http_stub_status_module
--with-http_perl_module --with-mail --with-mail_ssl_module --with-pcre
--with-pcre=/root/pcre-8.38 --with-zlib=/root/zlib-1.2.8
--add-module=/root/Nginx-limit-traffic-rate-module-master/
--add-module=/home/ngx_http_hls_module-master

and this is my configuration for pseudo-streaming in nginx.conf:

location /video/ {
autoindex on;
autoindex_exact_size off;
limit_conn addr 5;
limit_rate_after 1m;
limit_rate 20k;
mp4;
mp4_buffer_size 100k;
mp4_max_buffer_size 5M;
}

In nginx.conf I also added these lines:

http {
include mime.types;
include /usr/local/nginx/site-enable/*.conf;
limit_conn_zone $binary_remote_addr zone=addr:32m;

I want to limit the video download speed and buffer, but when I play the
video, the request

http://www.example.com/video/1111.mp4?start=100

does not work; only the plain mp4 directive works. Please help me fix
this; I have spent a whole week researching it.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266914,266914#msg-266914

From nginx-forum at forum.nginx.org Mon May 16 10:06:21 2016
From: nginx-forum at forum.nginx.org (mcofko)
Date: Mon, 16 May 2016 06:06:21 -0400
Subject: Gzip issue with Safari
Message-ID: <8edc9805550e45486dbafa1b18b940be.NginxMailingListEnglish@forum.nginx.org>

Hi all,

we have successfully enabled gzipping on the server side, and it works
perfectly with Chrome and FF (it automatically compresses the files that
we want). We're using compressed assets of type .dds and .pvr; the MIME
type for those files is application/octet-stream. But the problem appears
in Safari, which doesn't accept the .gz file extension, so we miss out on
a huge improvement in file size/loading time.
These are our current settings for enabling gzip:

gzip on;
gzip_proxied any;
gzip_types image/png application/javascript application/octet-stream
audio/ogg text/xml image/jpeg;
gzip_vary on;
gzip_comp_level 6;
gzip_min_length 1100;
gzip_static on;

I noticed that the Apache server enables something like this:
AddEncoding gzip .jgz
AddType text/javascript .jgz

And this little change makes Safari acknowledge the gzip type and load
the files. So the question is: what are the possibilities on Nginx? Can
we somehow add support for additional extensions? I've also read about
the gzip_static setting, which looks for pre-compressed files, but only
for files with the .gz extension.

Do you know any solutions?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266915,266915#msg-266915

From nginx-forum at forum.nginx.org Mon May 16 10:23:19 2016
From: nginx-forum at forum.nginx.org (itpp2012)
Date: Mon, 16 May 2016 06:23:19 -0400
Subject: Gzip issue with Safari
In-Reply-To: <8edc9805550e45486dbafa1b18b940be.NginxMailingListEnglish@forum.nginx.org>
References: <8edc9805550e45486dbafa1b18b940be.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <41a53505ca811a0920589c7782be8422.NginxMailingListEnglish@forum.nginx.org>

Have you had a look yet at /conf/mime.types?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266915,266916#msg-266916

From wandenberg at gmail.com Mon May 16 10:27:03 2016
From: wandenberg at gmail.com (Wandenberg Peixoto)
Date: Mon, 16 May 2016 07:27:03 -0300
Subject: DNS Caching Issue For community version
In-Reply-To: <54d99f45ca05508699287280d4f91d20.NginxMailingListEnglish@forum.nginx.org>
References: <54d99f45ca05508699287280d4f91d20.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

You have to configure your nginx to use it like any other module.
For instance, cd /Downloads/nginx ./configure --add-module=/Downloads/nginx-upstream-dynamic-servers-master make make install On May 16, 2016 01:28, "RT.Nat" wrote: > I cannot install the above module. > > /Downloads/nginx-upstream-dynamic-servers-master$ make install > make: *** No rule to make target `install'. Stop. > > May i know is there any problem in my machine( any required prerequistes). > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,266857,266913#msg-266913 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 16 10:28:41 2016 From: nginx-forum at forum.nginx.org (mcofko) Date: Mon, 16 May 2016 06:28:41 -0400 Subject: Gzip issue with Safari In-Reply-To: <41a53505ca811a0920589c7782be8422.NginxMailingListEnglish@forum.nginx.org> References: <8edc9805550e45486dbafa1b18b940be.NginxMailingListEnglish@forum.nginx.org> <41a53505ca811a0920589c7782be8422.NginxMailingListEnglish@forum.nginx.org> Message-ID: Could you be a little more specific? I have the right configuration on nginx regarding pvr and dds types, it's: application/octet-stream...how could I modify the gzip extension to something else than .gz? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266915,266918#msg-266918 From ahall at autodist.com Mon May 16 12:50:44 2016 From: ahall at autodist.com (Alex Hall) Date: Mon, 16 May 2016 08:50:44 -0400 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: <72CA41F5-1075-4472-B906-4F8007484DED@autodist.com> References: <20160514091940.GA3477@daoine.org> <72CA41F5-1075-4472-B906-4F8007484DED@autodist.com> Message-ID: Hi all, Well, it seems to be working now, and I'm thoroughly embarrassed about it. The Nginx/Apache setup is fine, and has been, it seems. 
The OST error was rather cryptic, but once I finally found where in the OST code it was being generated, I discovered that it was likely a database failure. I re-granted privileges to my OST user, and suddenly the installation completed. Thus far, everything is running normally. I guess the lesson is to always check your DB user privileges, and hope OST puts in better error messages. Thanks for the help, everyone; this taught me a lot, about Nginx configuration, upstream contexts, and so on. Even if the final answer was my own mistake in privilege settings, I couldn't have done the rest of the setup without the help of the list. I'll likely be back with more questions, but for now, everything seems stable and okay. On Sat, May 14, 2016 at 7:32 AM, Alex Hall wrote: > > > On May 14, 2016, at 05:19, Francis Daly wrote: > > > > On Fri, May 13, 2016 at 01:24:57PM -0400, Alex Hall wrote: > > > > Hi there, > > > >> It's as though the proxy weren't working properly at all. > >> I have it set up in a location: > >> > >> upstream apache2Redirect { > >> server 127.0.0.1:8080; > >> } > >> > >> location / { > >> proxy_set_header Host $host; > >> proxy_pass http://apache2Redirect; > >> } > >> > >> My understanding is that the / will match everything, from /index.html > to > >> /images/small/235.jpg. Is that not the case? Do I need to do something > to > >> my location block, by chance? > > > > If the "location" you show above is the entire content of your server{} > > block, then all requests that get to nginx should be handled in it. > > > > If you have more config that you are not showing, then possibly that > > extra config is interfering with what you want to do. > > > Sorry I should have said. Yes, that's all there is to my config file. I > wanted every request to go to Apache, including any subdirectories. 
> > > > The best chance of someone being able to help, is if you can include > > very specific details about what you do, what you see, and what you > > expect to see instead. > > The problem is that the error I'm seeing is in OSTicket. All I can say is > that the OST forums aren't any help, that I don't see the error on Apache > under Windows, and that I do see it under this configuration. It's the > exact same error I saw when serving OST with Nginx directly, which is why I > think the proxy isn't working correctly. Plus, I don't see the access to > the OST pages in the Apache access log after 11:14, despite trying it all > day yesterday. Nginx registers them, but not Apache. Yet, if I stop Apache, > I get a 502 when trying to pull up OST. > > > > If you use the "curl" command-line tool instead of a normal browser, you > > can make one request and see the full response. If you know what response > > you expect, you can compare it to the response that you actually get. > > > > > > curl -v http://ngninx-server/OSTicket/ > > > > (or whatever url you have set things up at). > > > > Without knowing what you do want to see, I'm pretty sure that you do > > not want to see "127.0.0.1" or "8080" anywhere in the response. > > Curl is a good idea. I'll try that Monday when I'm back in the office > (this is an intranet site, so I can't test it from home, though I can ssh > into the server). > > > > Good luck with it, > > > > f > > -- > > Francis Daly francis at daoine.org > > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From ahall at autodist.com Mon May 16 21:00:06 2016 From: ahall at autodist.com (Alex Hall) Date: Mon, 16 May 2016 17:00:06 -0400 Subject: Fully hiding localhost:8080 with Nginx as reverse proxy? 
Message-ID: Hello list, As mentioned earlier, Nginx and Apache are currently playing nicely together, with Apache handling all my OSTicket needs. However, I'm now trying to send mail from OSTicket, and while OST claims success, I'm not seeing any messages coming to my account. I was doing some searching and found that this can happen if the email appears to come from localhost, or some other invalid domain. Given my setup, I suspect that this is exactly the problem. What can I do to have responses from Apache appear to come from the domain's IP, rather than 127.0.0.1:8080? Thanks! -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Mon May 16 23:05:08 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 17 May 2016 00:05:08 +0100 Subject: DNS Caching Issue For community version In-Reply-To: References: <2ff6e4a6-9374-3b1f-d9b6-f42021f54977@gmail.com> Message-ID: <20160516230508.GC3477@daoine.org> On Fri, May 13, 2016 at 02:13:05AM -0400, RT.Nat wrote: Hi there, why do you think that this does not work? As in: what request do you make that give you an unexpected response? And can you also show the "dig" command that shows that the nominated dns server has changed the resolution address that it provides? > I tried adding the variable for resolving the dns but still the ip address > does not changes. Is there any other way? or is there any bug in my script. 
When I use: == http { server { listen 8080; resolver 8.8.8.8 valid=20s; set $up "www.example.net"; location / { proxy_pass http://$up; } } } == I can do: date; while :; do curl http://127.0.0.1:8080/; sleep 3; done and I can also do tcpdump -nn port 53 and the first command shows the remote web server content every few seconds, while the second command show that a new dns request is made more than 20 seconds after the previous one: === 21:31:00.572450 IP 192.168.224.128.52862 > 8.8.8.8.53: 12516+ A? www.example.net. (33) 21:31:00.631578 IP 8.8.8.8.53 > 192.168.224.128.52862: 12516 1/0/0 A 93.184.216.34 (49) 21:31:23.829429 IP 192.168.224.128.52862 > 8.8.8.8.53: 5096+ A? www.example.net. (33) 21:31:23.873474 IP 8.8.8.8.53 > 192.168.224.128.52862: 5096 1/0/0 A 93.184.216.34 (49) 21:31:47.135984 IP 192.168.224.128.52862 > 8.8.8.8.53: 37996+ A? www.example.net. (33) 21:31:47.165754 IP 8.8.8.8.53 > 192.168.224.128.52862: 37996 1/0/0 A 93.184.216.34 (49) === Now, I don't control the google name server, and I can't make www.example.net get an updated address at will; but the above does seem to show that the nginx resolver is making a fresh dns request when it is supposed to. Do you see something else in your test? f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon May 16 23:23:32 2016 From: nginx-forum at forum.nginx.org (fengli) Date: Mon, 16 May 2016 19:23:32 -0400 Subject: Is there a way to send back HTTP2 trailers? Message-ID: <135ce288f09b4678ca41127d9dbe6a54.NginxMailingListEnglish@forum.nginx.org> Is there a way to send back HTTP2 trailers? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266933,266933#msg-266933 From nginx-forum at forum.nginx.org Tue May 17 05:06:25 2016 From: nginx-forum at forum.nginx.org (davidjb) Date: Tue, 17 May 2016 01:06:25 -0400 Subject: Module: Configuring upstream params (eg fastcgi_param) per request Message-ID: <9a6fbba09081e44ecf72c31f21f9d9a4.NginxMailingListEnglish@forum.nginx.org> I'm working on an authentication module for nginx, namely the Shibboleth auth module (https://github.com/nginx-shib/nginx-http-shibboleth). This module is based off the core nginx auth_request module (https://github.com/nginx/nginx/blob/master/src/http/modules/ngx_http_auth_request_module.c). The original module allows a sub-request to determine whether to grant or deny access to a nginx location; the Shibboleth module works in the same way, but copies specific headers (in the format "Variable-[Field-name]") from the auth sub-response into the original parent request so that they are sent to the upstream application. As it stands, the Shibboleth module can copy these variables automatically as headers, by iterating across all `headers_out` in the auth sub-response, testing for the `Variable-` string prefix, and then copying the values into the parent request's headers_in (see https://github.com/nginx-shib/nginx-http-shibboleth/blob/development/ngx_http_shibboleth_module.c#L299). This works well but care needs to be exercised to avoid spoofing. What I'd like to do is allow the same automated copying of `Variable-[Field-name]` headers from the sub-response's `headers_out` into relevant environment parameters (eg fastcgi_param, uwsgi_param etc) for upstreams that support this. 
I can achieve the desired result with manual use of `shib_request_set` (identical to auth_request_set, which sets nginx variables from the auth request response), like so: location / { shib_request_set $shib_auth_type $upstream_http_variable_auth_type; fastcgi_param Auth-Type $shib_auth_type; fastcgi_pass localhost:8000; } which sets `$shib_auth_type` to the value of header `Variable-Auth-Type`, and then sets the FastCGI param `Auth-Type` to the given value. The drawback is that this requires manual configuration for all potential `Variable-` prefixed headers (dozens or more are possible at times), and also different directives for each type of upstream (fastcgi_param, uwsgi_param, scgi_param etc). So, is it possible to set an upstream's parameters dynamically from my module's request handler (eg in the ngx_http_auth_request_handler function) or another part of the module (eg a filter)? Looking at the fastcgi, scgi and uwsgi modules in nginx, they have different (but similar) implementations and upstreams such as proxy_pass don't support environment parameters. So if this is possible, I'd envisage that my module would need to be aware of how the different upstreams' params are configured. My manual config solution might already be best, but I wanted to ask the question all the same. Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266934,266934#msg-266934 From nginx-forum at forum.nginx.org Tue May 17 08:53:15 2016 From: nginx-forum at forum.nginx.org (gladston3) Date: Tue, 17 May 2016 04:53:15 -0400 Subject: reverse proxying exchange with rpc/mapi over http Message-ID: <04520b8dc5046cf63e2f4819864c7636.NginxMailingListEnglish@forum.nginx.org> Hello, is it possible to make a reverse proxy for exchange with all features working (activesync, rpc over http, mapi over http, etc...) with the free version of nginx. Or is this only possible with nginx plus? I do not need any of the (advanced) loadbalancing features offered by the plus version, though. 
The only thing I need is the proxy capability. Thank you very much in advance gladston3 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266936,266936#msg-266936 From nginx-forum at forum.nginx.org Tue May 17 11:37:14 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 17 May 2016 07:37:14 -0400 Subject: reverse proxying exchange with rpc/mapi over http In-Reply-To: <04520b8dc5046cf63e2f4819864c7636.NginxMailingListEnglish@forum.nginx.org> References: <04520b8dc5046cf63e2f4819864c7636.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5fa8afa06c945b41ea3311f3c7a0d9aa.NginxMailingListEnglish@forum.nginx.org> Sure, just make sure when its TCP you use stream {}, the rest can use http {} Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266936,266938#msg-266938 From nginx-forum at forum.nginx.org Tue May 17 12:39:00 2016 From: nginx-forum at forum.nginx.org (reaper) Date: Tue, 17 May 2016 08:39:00 -0400 Subject: proxy_cache is not working Message-ID: <3c05759754612955f76b9467734f011e.NginxMailingListEnglish@forum.nginx.org> nginx is not caching anything. Every request that's supposed to be cached has "http cacheable: 0" in debug log. Test request is a static page with headers like those < HTTP/1.1 200 OK < Date: Tue, 17 May 2016 12:23:36 GMT < Server: Apache/2.2.15 (CentOS) < X-Powered-By: PHP/5.5.32 < Content-Length: 5 < Connection: close < Content-Type: text/html; charset=UTF-8 Can you please suggest anything? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266939,266939#msg-266939 From vbart at nginx.com Tue May 17 12:44:36 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Tue, 17 May 2016 15:44:36 +0300 Subject: Is there a way to send back HTTP2 trailers? 
In-Reply-To: <135ce288f09b4678ca41127d9dbe6a54.NginxMailingListEnglish@forum.nginx.org> References: <135ce288f09b4678ca41127d9dbe6a54.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1926439.7GAGGd7CWy@vbart-workstation> On Monday 16 May 2016 19:23:32 fengli wrote: > Is there a way to send back HTTP2 trailers? > [..] No. The trailer part isn't supported. wbr, Valentin V. Bartenev From mdounin at mdounin.ru Tue May 17 12:49:27 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 17 May 2016 15:49:27 +0300 Subject: proxy_cache is not working In-Reply-To: <3c05759754612955f76b9467734f011e.NginxMailingListEnglish@forum.nginx.org> References: <3c05759754612955f76b9467734f011e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160517124927.GC36620@mdounin.ru> Hello! On Tue, May 17, 2016 at 08:39:00AM -0400, reaper wrote: > nginx is not caching anything. Every request that's supposed to be cached > has "http cacheable: 0" in debug log. > > Test request is a static page with headers like those > > < HTTP/1.1 200 OK > < Date: Tue, 17 May 2016 12:23:36 GMT > < Server: Apache/2.2.15 (CentOS) > < X-Powered-By: PHP/5.5.32 > < Content-Length: 5 > < Connection: close > < Content-Type: text/html; charset=UTF-8 > > Can you please suggest anything? You need to configure proxy_cache_valid for nginx to cache such a response, see http://nginx.org/r/proxy_cache_valid. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue May 17 13:11:00 2016 From: nginx-forum at forum.nginx.org (reaper) Date: Tue, 17 May 2016 09:11:00 -0400 Subject: proxy_cache is not working In-Reply-To: <20160517124927.GC36620@mdounin.ru> References: <20160517124927.GC36620@mdounin.ru> Message-ID: <79c6a1f539e353568a06211f825e05fb.NginxMailingListEnglish@forum.nginx.org> It was already set but apparently wasn't being used. location /static/ { try_files $uri @apache-cache; proxy_cache_valid 5m; } location @apache-cache { ... 
proxy_ignore_headers Set-Cookie Expires Cache-Control;
proxy_hide_header Set-Cookie;
internal;
}

Moved this directive to the second location and now it's all fine. Thanks!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266939,266942#msg-266942

From nginx-forum at forum.nginx.org Tue May 17 14:01:15 2016
From: nginx-forum at forum.nginx.org (kmg)
Date: Tue, 17 May 2016 10:01:15 -0400
Subject: Access and Error Log file for Nginx stream server configuration
Message-ID: 

It seems the access_log and error_log options are not available in the
stream server section. If they are available, can you please share some
example config with me? Thanks.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266943,266943#msg-266943

From nginx-forum at forum.nginx.org Tue May 17 14:05:53 2016
From: nginx-forum at forum.nginx.org (neoelit)
Date: Tue, 17 May 2016 10:05:53 -0400
Subject: Weak ETags and on-the-fly gzipping
In-Reply-To: References: Message-ID: <0e189e0d0d6a7e3af52c4a7d1e6af5cc.NginxMailingListEnglish@forum.nginx.org>

Is there any way I can override this from the nginx config file?
I need the weak ETag to be untouched by nginx :(

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,240120,266944#msg-266944

From mdounin at mdounin.ru Tue May 17 15:00:19 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 17 May 2016 18:00:19 +0300
Subject: Weak ETags and on-the-fly gzipping
In-Reply-To: <0e189e0d0d6a7e3af52c4a7d1e6af5cc.NginxMailingListEnglish@forum.nginx.org>
References: <0e189e0d0d6a7e3af52c4a7d1e6af5cc.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160517150019.GE36620@mdounin.ru>

Hello!

On Tue, May 17, 2016 at 10:05:53AM -0400, neoelit wrote:

> Is there anyway i can override this from nginx config file?
> I need weak etag to be untouched by nginx :( Weak ETags are supported and used since nginx 1.7.3 as released on 08 Jul 2014, quote from http://nginx.org/en/CHANGES: *) Feature: weak entity tags are now preserved on response modifications, and strong ones are changed to weak. You are writing to an ancient thread from 2013. If you are still using an older version, it should be a good idea to upgrade. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Tue May 17 15:45:49 2016 From: nginx-forum at forum.nginx.org (itpp2012) Date: Tue, 17 May 2016 11:45:49 -0400 Subject: Access and Error Log file for Nginx stream server configuration In-Reply-To: References: Message-ID: <29ebf0a199ef7b8281582b74b7f2bd36.NginxMailingListEnglish@forum.nginx.org> stream { error_log logs/stream_error.log; or error_log logs/stream_error.log debug; ........ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266943,266947#msg-266947 From francis at daoine.org Tue May 17 17:05:01 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 17 May 2016 18:05:01 +0100 Subject: limit_req_zone values In-Reply-To: <2df8555b4f3b169eed3d887297d0ae3b.NginxMailingListEnglish@forum.nginx.org> References: <2df8555b4f3b169eed3d887297d0ae3b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160517170501.GD3477@daoine.org> On Sun, May 15, 2016 at 03:47:37PM -0400, tom.b wrote: Hi there, > limit_req_zone $server_name zone=perserver:10m rate=5r/s; > limit_req zone=perserver burst=5; limit_req is to limit the number of requests that your nginx will handle at a time. Why do you want to limit the number of requests? If it is "to stop an upstream service from failing", then set it to just below what the upstream can handle. Note that your particular limit_req_zone is effectively one counter per server{} block -- so it is probably an overall limit on this site. > The site has up to 750 simultaneous users at peak times. 
How many requests, or requests per second, corresponds to one user, on this site? f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 17 17:24:27 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 17 May 2016 18:24:27 +0100 Subject: Gzip issue with Safari In-Reply-To: <8edc9805550e45486dbafa1b18b940be.NginxMailingListEnglish@forum.nginx.org> References: <8edc9805550e45486dbafa1b18b940be.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160517172427.GE3477@daoine.org> On Mon, May 16, 2016 at 06:06:21AM -0400, mcofko wrote: Hi there, I confess I am not at all sure what it is you are asking about. If it is clear to others who can help you, then there is no need for you to do anything; but if it is generally unclear, perhaps you could describe again what you do, what you see, and what you want to see instead? > we have successfully enable gziping on the server side and it works > perfectly with Chrome and FF (it auto compress the files that we want). This seems to say that in FF you request /one.dds, and your nginx returns a gzip-compressed version on the file one.dds. (And nginx compresses it afresh on each request.) > We're using compressed assets of type: .dds and .pvr. *This* seems to say that in FF you request /one.dds, and your nginx returns the contents of the pre-compressed file one.dds.gz. I suspect that I am misunderstanding something. > But the problem appears in Safari which > doesn't accept .gz file type extension, And *this* bit, I'm not sure what it is intended to mean. In Safari you request /one.dds, and nginx returns something other than the content of the file one.dds? Or it returns the content, but Safari does not like it for some reason? > gzip on; > gzip_proxied any; > gzip_types image/png application/javascript application/octet-stream > audio/ogg text/xml image/jpeg; > gzip_vary on; > gzip_comp_level 6; > gzip_min_length 1100; That all says "sometimes, compress the response before sending it to the client". 
> gzip_static on;

That says "if possible, send a pre-compressed file instead of the
uncompressed file".

> I noticed that Apache server enables something like this:
> AddEncoding gzip .jgz
> AddType text/javascript .jgz

I think that that sets some response headers, depending on what the
request was.

> And this little change enables that Safari acknowledge gzip type and it also
> uploads it.
> So the question is what are the possibilities on Nginx? Can we somehow add
> support for additional extensions?

What request do you make in Safari? What response (http headers) do you
get to that request? What response do you want to get instead?

If you know that an Apache returns the response that you want, then
showing that output might answer the third question.

Alternatively, if Apache or nginx change their response based on the
User-Agent, you can try using "curl" to make the requests, using varying
User-Agent headers.

> Do you know any solutions?

Not yet; I don't know what the problem is.

Good luck with it,

f -- Francis Daly francis at daoine.org

From vikrant.thakur at gmail.com Tue May 17 17:33:24 2016
From: vikrant.thakur at gmail.com (vikrant singh) Date: Tue, 17 May 2016
10:33:24 -0700 Subject: Websocket Validation Message-ID:

Hello, I use nginx as a proxy, and establish a websocket between client
and backend. I validate the user's cookies before establishing the WS, and
when the WS is in use I validate cookies on the backend periodically.
With this set up in place, when a user's cookie expires, an established
WS will remain in use until the next validation check on the backend
happens. How can I fix this? Thanks, Vikrant
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From francis at daoine.org Tue May 17 17:37:14 2016
From: francis at daoine.org (Francis Daly) Date: Tue, 17 May 2016 18:37:14 +0100
Subject: Serving website with Apache, with Nginx as interface? 
In-Reply-To: References: <20160514091940.GA3477@daoine.org> <72CA41F5-1075-4472-B906-4F8007484DED@autodist.com> Message-ID: <20160517173714.GF3477@daoine.org> On Mon, May 16, 2016 at 08:50:44AM -0400, Alex Hall wrote: Hi there, > Well, it seems to be working now, and I'm thoroughly embarrassed about it. > The Nginx/Apache setup is fine, and has been, it seems. Thanks for reporting that result -- it'll help the future reader of the mailing list (http://m.xkcd.com/979/). > I re-granted > privileges to my OST user, and suddenly the installation completed. You have a working system, so it is sensible to leave well-enough alone. But if you feel like further investigations -- does it now work for you with nginx/fastcgi, using an nginx config like the one previously posted? Cheers, f -- Francis Daly francis at daoine.org From ahall at autodist.com Tue May 17 17:48:52 2016 From: ahall at autodist.com (Alex Hall) Date: Tue, 17 May 2016 13:48:52 -0400 Subject: Serving website with Apache, with Nginx as interface? In-Reply-To: <20160517173714.GF3477@daoine.org> References: <20160514091940.GA3477@daoine.org> <72CA41F5-1075-4472-B906-4F8007484DED@autodist.com> <20160517173714.GF3477@daoine.org> Message-ID: On Tue, May 17, 2016 at 1:37 PM, Francis Daly wrote: > On Mon, May 16, 2016 at 08:50:44AM -0400, Alex Hall wrote: > > Hi there, > > > Well, it seems to be working now, and I'm thoroughly embarrassed about > it. > > The Nginx/Apache setup is fine, and has been, it seems. > > Thanks for reporting that result -- it'll help the future reader of the > mailing list (http://m.xkcd.com/979/). > > > I re-granted > > privileges to my OST user, and suddenly the installation completed. > > You have a working system, so it is sensible to leave well-enough alone. > > But if you feel like further investigations -- does it now work for you > with nginx/fastcgi, using an nginx config like the one previously posted? > I haven't tried it. As you said, it's working now. 
If it were a matter of two supported servers--Nginx and Apache--I'd try it. But whenever I read about it, someone has some obscure problem that doesn't crop up until later, and I don't want to risk my work's switching to a system that starts behaving unexpectedly seemingly out of nowhere. That said, I feel like setting up a FastCGI setup would be easier now that I've gone through UWSGI for one subdomain and Apache for another, both behind Nginx. > > Cheers, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Alex Hall Automatic Distributors, IT department ahall at autodist.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue May 17 18:55:39 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 17 May 2016 19:55:39 +0100 Subject: Websocket Validation In-Reply-To: References: Message-ID: <20160517185539.GG3477@daoine.org> On Tue, May 17, 2016 at 10:33:24AM -0700, vikrant singh wrote: Hi there, > I use nginx as a proxy, and establish a webscoket between client and > backend.I validate user's cookies before establish WS and when WS is in > use I validate cookies on backend periodically. > > With this set up in place when a user's cookie expires , an established > WS will remain in use until next validation check on backend happens. The nginx model is more or less: establish the websocket connection and treat it as an opaque tunnel. > How I can fix this? Once the connection is established, you do your own control. Possibly doing the backend validation check more frequently will help? Or whatever it is that decides that the cookie has expired, could let the backend know to close the connection now (or could invite the backend to do a validation check now)? 
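For the handshake half of this, nginx itself can refuse to establish a websocket whose cookie is already invalid, by checking it with a subrequest before proxying. This is only a sketch -- the location names and the backend /validate endpoint are assumptions, and it requires nginx built with the auth_request module; it does nothing about cookies that expire after the tunnel is up, which still needs the backend-side handling described above:

```nginx
# Sketch only: /ws/, /cookie-check and backend's /validate are
# hypothetical names, not part of the original setup.
location /ws/ {
    auth_request /cookie-check;          # a 401/403 here blocks the upgrade
    proxy_pass http://backend;
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
    proxy_read_timeout 300s;             # idle timeout bounds a stale session
}

location = /cookie-check {
    internal;
    proxy_pass http://backend/validate;  # expected to return 2xx or 401
    proxy_pass_request_body off;
    proxy_set_header Content-Length "";
}
```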
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 17 23:02:31 2016 From: nginx-forum at forum.nginx.org (fengli) Date: Tue, 17 May 2016 19:02:31 -0400 Subject: Is there a way to send back HTTP2 trailers? In-Reply-To: <1926439.7GAGGd7CWy@vbart-workstation> References: <1926439.7GAGGd7CWy@vbart-workstation> Message-ID: Is there any plan to support it? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266933,266968#msg-266968 From nginx-forum at forum.nginx.org Wed May 18 04:41:30 2016 From: nginx-forum at forum.nginx.org (neoelit) Date: Wed, 18 May 2016 00:41:30 -0400 Subject: Weak ETags and on-the-fly gzipping In-Reply-To: <20160517150019.GE36620@mdounin.ru> References: <20160517150019.GE36620@mdounin.ru> Message-ID: <563285afdd67be2fd0823f7a1eec35bf.NginxMailingListEnglish@forum.nginx.org> Thanks for the quick reply. :) Yeah so nginx shouldn't be the issue. I'm trying to figure why If-None-Match header not reaching rails on passenger-standalone. Since it has nginx version 1.8+ this should be the issue. https://groups.google.com/forum/#!topic/phusion-passenger/eZgw3TqrfSI Posted at Nginx Forum: https://forum.nginx.org/read.php?2,240120,266970#msg-266970 From nginx-forum at forum.nginx.org Wed May 18 04:43:32 2016 From: nginx-forum at forum.nginx.org (neoelit) Date: Wed, 18 May 2016 00:43:32 -0400 Subject: Weak ETags and on-the-fly gzipping In-Reply-To: <563285afdd67be2fd0823f7a1eec35bf.NginxMailingListEnglish@forum.nginx.org> References: <20160517150019.GE36620@mdounin.ru> <563285afdd67be2fd0823f7a1eec35bf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <960a383d74738f58928ba21237654fdc.NginxMailingListEnglish@forum.nginx.org> neoelit Wrote: ------------------------------------------------------- > Thanks for the quick reply. :) > Yeah so nginx shouldn't be the issue. I'm trying to figure why > If-None-Match header not reaching rails on passenger-standalone. Since > it has nginx version 1.8+ this should be the issue. 
I mean this shouldn't be the issue :P

> https://groups.google.com/forum/#!topic/phusion-passenger/eZgw3TqrfSI

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,240120,266971#msg-266971

From nginx-forum at forum.nginx.org Wed May 18 07:37:51 2016
From: nginx-forum at forum.nginx.org (mcofko) Date: Wed, 18 May 2016
03:37:51 -0400 Subject: Gzip issue with Safari In-Reply-To:
<20160517172427.GE3477@daoine.org> References:
<20160517172427.GE3477@daoine.org> Message-ID:
<2c838b0b9f31473bb8832da191d4dc0b.NginxMailingListEnglish@forum.nginx.org>

Let me try to explain it in a little more detail. My main problem: "But
the problem appears in Safari, which doesn't accept the .gz file type
extension". We have enabled gzipping on nginx and it works perfectly in
Chrome (we can see in the network tab of the developer tools that the
gzipped file one.dds downloads at around 2MB, while the original one.dds
filesize is around 4MB). I also checked the header of the file, and
Accept-Encoding has the values GZIP, deflate, sdch.

But the same file loaded in Safari on an iMac loads the full 4MB, which
suggests that Safari somehow doesn't accept, or ignores, files with the
.gz extension. I also checked the file header there, and Accept-Encoding
has: GZIP, deflate. 
The same problems were already reported on some other forums, but they
were able to solve the problem because they're using Apache, which allows
adding options such as:
AddEncoding gzip .jsz
AddType text/javascript .jsz

I hope these links will help you understand my problem:
http://blog.bigsmoke.us/2012/01/16/safari-ignores-content-type-for-gz-suffix
http://stackoverflow.com/questions/962721/pre-compressed-gzip-break-on-chrome-why

regards, m

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266915,266977#msg-266977

From nginx-forum at forum.nginx.org Wed May 18 08:50:24 2016
From: nginx-forum at forum.nginx.org (muzi) Date: Wed, 18 May 2016 04:50:24
-0400 Subject: nginx on high load (upstream prematurely closed connection
while reading response header from upstream) Message-ID:

Dear Guys,

I am facing a strange issue: during load tests, at a peak load of more
than 3k concurrent users, I get the errors below in the nginx logs
continuously.

2016/05/18 11:23:28 [error] 15510#0: *6853 upstream prematurely closed
connection while reading response header from upstream, client: x.x.x.x,
server:abc.com, request: "GET / HTTP/1.1", upstream:
"fastcgi://127.0.0.1:9000", host: "10.50.x.x"

And besides the frequent error above, the error below appears gradually:

2016/05/18 11:23:28 [error] 15510#0: *7431 recv() failed (104: Connection
reset by peer) while reading response header from upstream, client: x.x.x.x,
server: abc.com, request: "POST / HTTP/1.1", upstream:
"fastcgi://127.0.0.1:9000", host: "10.50.x.x", referrer: "http://10.50.x.x"

Without peak load, no errors appear; and even while the errors above keep
coming during load, the site works fine and I see 200 response codes in
the access logs. I am still wondering why these errors come only at peak
load or during load testing. Below is the required info. 
I already tune up the sysctl settings for high throughput nginx version: nginx/1.8.1 php-fpm /php version : 5.4.45 echo 0 > /proc/sys/net/ipv4/tcp_ecn echo 1 > /proc/sys/net/ipv4/tcp_tw_reuse echo 20 > /proc/sys/net/ipv4/tcp_fin_timeout echo 102400 > /proc/sys/net/ipv4/tcp_max_syn_backlog echo 102400 > /proc/sys/net/core/somaxconn echo 102400 > /proc/sys/net/core/netdev_max_backlog echo 5242888 > /proc/sys/net/netfilter/nf_conntrack_max echo 400000 > /proc/sys/net/ipv4/tcp_max_tw_buckets echo 655300 > /proc/sys/vm/max_map_count echo 11000 65000 > /proc/sys/net/ipv4/ip_local_port_range Kindly please help & advise. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266978,266978#msg-266978 From vbart at nginx.com Wed May 18 11:29:38 2016 From: vbart at nginx.com (Valentin V. Bartenev) Date: Wed, 18 May 2016 14:29:38 +0300 Subject: Is there a way to send back HTTP2 trailers? In-Reply-To: References: <1926439.7GAGGd7CWy@vbart-workstation> Message-ID: <1653355.Z9LgNcOH3m@vbart-workstation> On Tuesday 17 May 2016 19:02:31 fengli wrote: > Is there any plan to support it? > Currently there are no such plans. wbr, Valentin V. 
Bartenev From nginx-forum at forum.nginx.org Wed May 18 15:35:48 2016 From: nginx-forum at forum.nginx.org (w_boba) Date: Wed, 18 May 2016 11:35:48 -0400 Subject: include file with "if" statements Message-ID: I have several different "templates" for location and most of templates have common part like this: ---------------------- /etc/nginx/nginx.conf: http { include /etc/nginx/conf.d/*.conf; } ---------------------- /etc/nginx/conf.d/config12345.conf: server { listen 12345; location / { some configuration; } location /special { include template12345.txt; } ---------------------- /etc/nginx/template12345.txt: some configuration; if ($variable1 = "value1") { return 403; } if ($variable2 = "value2") { return 403; } if ($variable3 = "value3") { return 403; } some other configuration; ---------------------- When I try to separate this "if" part into separate file like this: ---------------------- /etc/nginx/conditions.txt if ( $variable1 = "value1" ) { return 403; } if ( $variable2 = "value2" ) { return 403; } if ( $variable3 = "value3" ) { return 403; } ---------------------- and include it in the template like "include conditions.txt" instead of repeating this part in every template, I get error message: "nginx: [emerg] "if" directive is not allowed here in /etc/nginx/conditions.txt:1" So my question is: is there a limit to "include" directive depth ? Why am I getting this error ? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266986,266986#msg-266986

From francis at daoine.org Wed May 18 16:33:49 2016
From: francis at daoine.org (Francis Daly) Date: Wed, 18 May 2016 17:33:49
+0100 Subject: nginx on high load (upstream prematurely closed connection
while reading response header from upstream) In-Reply-To: References:
Message-ID: <20160518163349.GH3477@daoine.org>

On Wed, May 18, 2016 at 04:50:24AM -0400, muzi wrote:

Hi there,

> I am facing strange issue, during load test and peak load of more than 3k
> concurrent users, get below errors in nginx logs continuously.
>
> 2016/05/18 11:23:28 [error] 15510#0: *6853 upstream prematurely closed
> connection while reading response header from upstream, client: x.x.x.x,
> server:abc.com, request: "GET / HTTP/1.1", upstream:
> "fastcgi://127.0.0.1:9000", host: "10.50.x.x"

"upstream" here is your fastcgi server.

The nginx message is that the fastcgi server broke the connection.

> And beside of above error frequently, below error come gradually
>
> 2016/05/18 11:23:28 [error] 15510#0: *7431 recv() failed (104: Connection
> reset by peer) while reading response header from upstream, client: x.x.x.x,
> server: abc.com, request: "POST / HTTP/1.1", upstream:
> "fastcgi://127.0.0.1:9000", host: "10.50.x.x", referrer: "http://10.50.x.x"

And that message is that the fastcgi server broke the connection.

It is probably worth investigating what the fastcgi server thinks is
happening -- perhaps it is not configured to handle the load it is
being given? 
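For reference, the nginx-side settings that interact with this sort of failure are the fastcgi timeouts; a sketch with illustrative values (they only change how long nginx waits -- if php-fpm closes the connection early, the fix belongs on the php-fpm side):

```nginx
# Illustrative values only; tune against what the backend can handle.
# These do not fix a backend that drops connections; check php-fpm's
# own pm.* settings and error log for that.
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
    fastcgi_connect_timeout 5s;
    fastcgi_send_timeout    60s;
    fastcgi_read_timeout    60s;   # how long nginx waits for a response
}
```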
f -- Francis Daly francis at daoine.org From francis at daoine.org Wed May 18 17:04:25 2016 From: francis at daoine.org (Francis Daly) Date: Wed, 18 May 2016 18:04:25 +0100 Subject: Gzip issue with Safari In-Reply-To: <2c838b0b9f31473bb8832da191d4dc0b.NginxMailingListEnglish@forum.nginx.org> References: <20160517172427.GE3477@daoine.org> <2c838b0b9f31473bb8832da191d4dc0b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160518170425.GI3477@daoine.org> On Wed, May 18, 2016 at 03:37:51AM -0400, mcofko wrote: Hi there, > enabled gziping on nginx and it's working on Chrome perfectly (we know this > looking at network tab in developer tools, where we can see the size of the > gzipped file one.dds that needed to be downloaded and it's around 2MB while > the original one.dds filesize is around 4MB). I think this means that you can see the http request that Chrome makes, and you can see the response that nginx gives, and that they are what you want. Can you use "curl" to make the same request, and see the same response? That usually makes it easier to see what exactly is going on, because there are fewer layers in the way. (I'm not sure whether your nginx is serving the content of the file one.dds.gz as-is, or is serving the content of the file one.dds and gzip'ing it on-the-fly. Maybe it makes a difference; maybe it doesn't.) > I also checkede the header of > the file, and Accept-Encoding has values GZIP, deflate, sdch. I think that that refers to the http request header; with luck you will be able to reproduce that in curl, where the full communication can be copy-paste'd. > But the same file loaded on iMac with Safari browser takes/loads 4MB, which > clearly states that the Safari somehow doesn't accept or ignores files with > .gz extension. I also checked file header and Accept-Encoding has: GZIP, > deflate. Can you see the differences between the request that Chrome made and the request that Safari made? That might indicate why nginx gave different responses. 
If you can re-create both requests in curl you'll be able to see the different responses from your nginx, and maybe that will give you a hint as to where the problem is. > The same problems were already reported on some other forums, but they were > able to solve the problem, because they're using Apache, which enables > adding options as: > AddEncoding gzip .jsz > AddType text/javascript .jsz I don't see those other forums you mention. I don't see that those options will make a difference unless you are actually requesting /one.jsz or /one.dds.jsz. But perhaps there are extra things happening in the apache config that makes things work. nginx does have an add_header directive to add general headers; and has a types directive to set one header based on filename extension. It may be that some combination of those could help you; but I suspect that the full answer will be different. > I hope this links will help you understand my problem: > http://blog.bigsmoke.us/2012/01/16/safari-ignores-content-type-for-gz-suffix > http://stackoverflow.com/questions/962721/pre-compressed-gzip-break-on-chrome-why They seem to say that you shouldn't ask Safari to request /one.js.gz if you want Safari to use the response as a script. That doesn't seem related to the rest of your email. I suspect I'm missing something. And I still don't know what request you make, what response you get, or what response you want. So I'll let someone else get involved. Cheers, f -- Francis Daly francis at daoine.org From mdounin at mdounin.ru Wed May 18 18:28:46 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 18 May 2016 21:28:46 +0300 Subject: include file with "if" statements In-Reply-To: References: Message-ID: <20160518182845.GJ36620@mdounin.ru> Hello! 
On Wed, May 18, 2016 at 11:35:48AM -0400, w_boba wrote:

> I have several different "templates" for location and most of templates have
> common part like this:
>
> ----------------------
> /etc/nginx/nginx.conf:
> http {
> include /etc/nginx/conf.d/*.conf;
> }
> ----------------------
> /etc/nginx/conf.d/config12345.conf:
> server {
> listen 12345;
> location / {
> some configuration;
> }
> location /special {
> include template12345.txt;
> }
> ----------------------
> /etc/nginx/template12345.txt:
> some configuration;
> if ($variable1 = "value1") { return 403; }
> if ($variable2 = "value2") { return 403; }
> if ($variable3 = "value3") { return 403; }
> some other configuration;
> ----------------------
>
> When I try to separate this "if" part into separate file like this:
> ----------------------
> /etc/nginx/conditions.txt
> if ( $variable1 = "value1" ) { return 403; }
> if ( $variable2 = "value2" ) { return 403; }
> if ( $variable3 = "value3" ) { return 403; }
> ----------------------
>
> and include it in the template like "include conditions.txt" instead of
> repeating this part in every template, I get error message:
> "nginx: [emerg] "if" directive is not allowed here in
> /etc/nginx/conditions.txt:1"
>
> So my question is: is there a limit to "include" directive depth ?

No.

> Why am I getting this error ?

Likely you've unintentionally included conditions.txt into a place
where "if" directives cannot be used. In particular, this can easily
happen if you have various wildcard includes in your configuration.

If unsure, please provide a full minimal configuration which triggers
the error.

-- Maxim Dounin http://nginx.org/

From francis at daoine.org Wed May 18 19:03:32 2016
From: francis at daoine.org (Francis Daly) Date: Wed, 18 May 2016 20:03:32 +0100
Subject: Fully hiding localhost:8080 with Nginx as reverse proxy? 
In-Reply-To: References: Message-ID: <20160518190332.GJ3477@daoine.org> On Mon, May 16, 2016 at 05:00:06PM -0400, Alex Hall wrote: Hi there, > trying to send mail from OSTicket, and while OST claims success, I'm not > seeing any messages coming to my account. I was doing some searching and > found that this can happen if the email appears to come from localhost, or > some other invalid domain. I'm pretty sure that a php application sending mail is unrelated to the nginx that is in front of it (whether by fastcgi_pass or proxy_pass). You'll probably have better luck asking your search engine of choice for things like "send email from osticket". Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed May 18 20:44:59 2016 From: nginx-forum at forum.nginx.org (w_boba) Date: Wed, 18 May 2016 16:44:59 -0400 Subject: include file with "if" statements In-Reply-To: <20160518182845.GJ36620@mdounin.ru> References: <20160518182845.GJ36620@mdounin.ru> Message-ID: <0b2919336ccff8e042efeedf88114df9.NginxMailingListEnglish@forum.nginx.org> Thank you for the direction where to look. You were right. It was too wide of a wildcard in http{} section outside of server/location. Fixed now. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266986,266991#msg-266991 From preety at mordani.com Wed May 18 23:55:22 2016 From: preety at mordani.com (Preety Mordani) Date: Wed, 18 May 2016 16:55:22 -0700 Subject: TLS for a generic TCP session? Message-ID: Hey All, I was looking over the feature differences between nginx and nginx-plus and its not clear to me if nginx by itself supports TLS termination for a generic TCP session. If it does not support TLS termination for TCP sessions, then are my only other options : 1. building the mainline source with the following options to configure? or 2. use nginx-plus download? Could someone kindly clarify? -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From preety at mordani.com Wed May 18 23:57:38 2016
From: preety at mordani.com (Preety Mordani) Date: Wed, 18 May 2016 16:57:38
-0700 Subject: TLS for a generic TCP session? In-Reply-To: References:
Message-ID:

Hey All,

I was looking over the feature differences between nginx and nginx-plus
and it's not clear to me if nginx by itself supports TLS termination for
a generic TCP session.

If it does not support TLS termination for TCP sessions, then are my
only other options:

1. building the mainline source with the following options to configure?

./configure --sbin-path=/usr/local/nginx/nginx
--conf-path=/usr/local/nginx/bginx.conf
--pid-path=/usr/local/nginx/nginx.pid --with-pcre=../pcre-8.38
--with-zlib=../zlib-1.2.8 --with-http_ssl_module *--with-stream*
--with-mail=dynamic
--with-openssl=/home/admin/iwan-30/openssl-1.0.2f/
*--with-stream_ssl_module*

or

2. use nginx-plus download?

Could someone kindly clarify?

Thanks Preety
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu May 19 08:31:14 2016
From: nginx-forum at forum.nginx.org (muzi) Date: Thu, 19 May 2016 04:31:14
-0400 Subject: nginx on high load (upstream prematurely closed connection
while reading response header from upstream) In-Reply-To:
<20160518163349.GH3477@daoine.org> References:
<20160518163349.GH3477@daoine.org> Message-ID:
<202dea0e85e10c178812d75f65c309cf.NginxMailingListEnglish@forum.nginx.org>

Dear Francis Daly, thank you for your response. The upstream is a php-fpm
server (127.0.0.1) and it is also set up to handle huge load; during the
above errors, I see the site is opening fine, but I don't know where to
catch it. PM is also set to on-demand and maximum children is set to 90000
to handle the load, and no logs appear in php-fpm. Can you please suggest
how to get to the bottom of the issue? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266978,266995#msg-266995

From maxim at nginx.com Thu May 19 08:38:53 2016
From: maxim at nginx.com (Maxim Konovalov) Date: Thu, 19 May 2016 11:38:53
+0300 Subject: TLS for a generic TCP session? In-Reply-To: References:
Message-ID: <1b136f94-8eae-9bae-bd1e-d01b8b9b35db@nginx.com>

Hi Preety,

On 5/19/16 2:57 AM, Preety Mordani wrote:
>
>
> Hey All,
>
> I was looking over the feature differences between nginx and
> nginx-plus and its not clear to me if nginx by itself supports TLS
> termination for a generic TCP session.
>
> If it does not support TLS termination for TCP sessions, then are my
> only other options :
>
> 1. building the mainline source with the following options to configure?
>
> ./configure --sbin-path=/usr/local/nginx/nginx
> --conf-path=/usr/local/nginx/bginx.conf
> --pid-path=/usr/local/nginx/nginx.pid --with-pcre=../pcre-8.38
> --with-zlib=../zlib-1.2.8 --with-http_ssl_module *--with-stream*
> --with-mail=dynamic
> --with-openssl=/home/admin/iwan-30/openssl-1.0.2f/
> *--with-stream_ssl_module*
>
> or
>
> 2. use nginx-plus download?
>
> Could someone kindly clarify?
>
There are no differences between -plus and -oss in this area -- both
do support TLS termination in the stream module.

Yes, you are right, you need the "--with-stream_ssl_module" configure flag.

You can find more information about our nginx-oss packages that we
build and ship here:

http://nginx.org/en/linux_packages.html#mainline

-- Maxim Konovalov

From nginx-forum at forum.nginx.org Thu May 19 11:44:01 2016
From: nginx-forum at forum.nginx.org (yogeshorai) Date: Thu, 19 May 2016
07:44:01 -0400 Subject: Re: Range request with Accept-Encoding:
"gzip,deflate" for *.pdf.* file-name pattern returns 416 In-Reply-To:
<20160510144105.GI36620@mdounin.ru> References:
<20160510144105.GI36620@mdounin.ru> Message-ID:

Thanks Maxim for pointing it out. 
We investigated further and issue seems to be at our back-end server. Thanks a lot for your help Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266713,266998#msg-266998 From nginx-forum at forum.nginx.org Thu May 19 15:11:47 2016 From: nginx-forum at forum.nginx.org (ohenley) Date: Thu, 19 May 2016 11:11:47 -0400 Subject: CPU load monitoring / dynamically limit number of connections to server Message-ID: <36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org> Hi to all, Is there a way to monitor the busyness of my dedicated server cpu cores and stop serving new connections passed a given cpu load threshold? Put another way, what is the standard approach/technique to dynamically limit the maximum number of connections my machine can cope with? Thank you, ohenley Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267003,267003#msg-267003 From preety at mordani.com Thu May 19 15:50:32 2016 From: preety at mordani.com (Preety Mordani) Date: Thu, 19 May 2016 08:50:32 -0700 Subject: TLS for a generic TCP session? In-Reply-To: <1b136f94-8eae-9bae-bd1e-d01b8b9b35db@nginx.com> References: <1b136f94-8eae-9bae-bd1e-d01b8b9b35db@nginx.com> Message-ID: Hi Maxim, Thanks for your email. I will take a look. Best, Preety On Thu, May 19, 2016 at 1:38 AM, Maxim Konovalov wrote: > Hi Preety, > > On 5/19/16 2:57 AM, Preety Mordani wrote: > > > > > > Hey All, > > > > I was looking over the feature differences between nginx and > > nginx-plus and its not clear to me if nginx by itself supports TLS > > termination for a generic TCP session. > > > > If it does not support TLS termination for TCP sessions, then are my > > only other options : > > > > 1. building the mainline source with the following options to configure? 
> > > > ./configure --sbin-path=/usr/local/nginx/nginx > > --conf-path=/usr/local/nginx/bginx.conf > > --pid-path=/usr/local/nginx/nginx.pid --with-pcre=../pcre-8.38 > > --with-zlib=../zlib-1.2.8 --with-http_ssl_module *--with-stream* > > --with-mail=dynamic > > --with-openssl=/home/admin/trial/openssl-1.0.2f/ > > *--with-stream_ssl_module* > > > > or > > > > 2. use nginx-plus download? > > > > Could someone kindly clarify? > > > There are no differences between -plus and -oss in this area -- both > do support TLS termination in the stream module. > > Yes, you are right, you need "--with-stream_ssl_module" configure flag. > > You can find more information about our nginx-oss packages that we > build and ship here: > > http://nginx.org/en/linux_packages.html#mainline > > -- > Maxim Konovalov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu May 19 19:28:35 2016 From: francis at daoine.org (Francis Daly) Date: Thu, 19 May 2016 20:28:35 +0100 Subject: nginx on high load (upstream prematurely closed connection while reading response header from upstream) In-Reply-To: <202dea0e85e10c178812d75f65c309cf.NginxMailingListEnglish@forum.nginx.org> References: <20160518163349.GH3477@daoine.org> <202dea0e85e10c178812d75f65c309cf.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160519192835.GL3477@daoine.org> On Thu, May 19, 2016 at 04:31:14AM -0400, muzi wrote: Hi there, > upstream is php-fpm server > (127.0.0.1) and its also setup to handle huge load, during above errors, i > see the site is opening fine. but don't know where to catch it. PM is also > set on demand and maximum childerns set to 90000 to handle the load, and no > logs appears in php-fpm. Can you please suggest how to got the issue ? nginx says that upstream is doing something odd. upstream is php-fpm. 
I suggest checking the php-fpm docs to see how to get it to log what
it is doing -- if it says nginx is doing something odd, then there is
a conflict to investigate. Otherwise, it should say something about
the connections that it is closing.

If the site is opening fine, maybe there is no problem to worry about.

f -- Francis Daly francis at daoine.org

From anoopalias01 at gmail.com Fri May 20 18:26:00 2016
From: anoopalias01 at gmail.com (Anoop Alias) Date: Fri, 20 May 2016
23:56:00 +0530 Subject: CPU load monitoring / dynamically limit number of
connections to server In-Reply-To:
<36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org>
References:
<36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html - not
system load based though

-- Anoop P Alias

From lists at lazygranch.com Fri May 20 19:46:13 2016
From: lists at lazygranch.com (lists at lazygranch.com) Date: Fri, 20 May
2016 12:46:13 -0700 Subject: CPU load monitoring / dynamically limit
number of connections to server In-Reply-To: References:
<36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20160520194613.5468245.99718.3261@lazygranch.com>

Bear in mind one IP can be many eyeballs. I use the module with a setting
of 10 per IP. I set the firewall to a higher limit to allow some non-web
services, but not infinite. This can fight back a very unsophisticated
DOS attack. A real DOS is distributed, so the IP limit won't be useful.

I had a document hit Twitter and their servers hammered my lowly VPS.
Besides an IP limit, I suggest a rewrite to eliminate hot linking, which
effectively is what Twitter can do. If they tweet a link to a webpage,
no problem. That would limit those twitter users to each individually
set their browser to a webpage, which slows the requests.

Out of paranoia, I blocked all of Twitter IP space. The same for Facebook. 
Again, the eyeballs can use their ISP via a link. I'm not comfortable with social media companies directly accessing my server since they have huge data bandwidth. That leaves large corporations and universities as the situation where one IP is really many eyeballs. A connection limit of 10 will occasionally be too low in these cases, but you have to set the limit somewhere. ----- Original Message ----- From: Anoop Alias Sent: Friday, May 20, 2016 11:26 AM To: Nginx Reply To: nginx at nginx.org Subject: Re: CPU load monitoring / dynamically limit number of connections to server http://nginx.org/en/docs/http/ngx_http_limit_conn_module.html - not system load based though -- Anoop P Alias _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri May 20 19:45:15 2016 From: nginx-forum at forum.nginx.org (bclod) Date: Fri, 20 May 2016 15:45:15 -0400 Subject: Errorlog shows GET timeout; Accesslog shows POST Message-ID: <29fa9139f7415d15b32ad4d20fcfe311.NginxMailingListEnglish@forum.nginx.org> Hello! Background We've implemented a lightweight APIGW on nginx 1.9.9 using openresty packages to customize the handling/proxying logic. We have dozens of clients that are able to leverage this implementation just fine. The Issue We have one client that is experiencing some weird issues when trying to access our APIGW. This is only for certain requests coming from their app. We strongly believe it's a malformed request from their side but we're trying to help them find the issue. The behavior we see on our side is quite strange as well. The error.log will show a GET request that timed out. The access.log shows a POST request for the same request. Below are the two entries; the response time always seems to be around 5 seconds, but the lowest timeout explicitly set in our configuration is 10 seconds. 
Error log entry 2016/05/18 11:34:51 [info] 13516#0: *304290 client prematurely closed connection, client: xx.xxx.xx.xxx, server: , request: "GET /hidden_uri HTTP/1.1", host: "blahblahblah.com" Access log entry |LOCATION:GATEWAY|CN:NA|SSLPROTOCOL:TLSv1|SSLCIPHER:ECDHE-RSA-AES256-SHA|SERVICE:NA|VERSION:1|CLIENT:NA|BACKEND:|HTTPMETHOD:POST|ACCEPT:application/json; v=4|OPERATION:NA|RESPONSETIME:5005|STATUS:400|SEVERITY:NA|STATUSCODE:NA|STATUSMESSAGE:NA|CLIENTIPADDRESS:xx.xxx.xx.xxx|CLIENTMESSAGEID:NA|MESSAGEID:|REQUESTBODYSIZE:0|RESPONSEBODYSIZE:0 Configuration for access log '$timestamp|LOCATION:$location_name|CN:$cn|SSLPROTOCOL:$ssl_protocol|SSLCIPHER:$ssl_cipher|SERVICE:$service_name|VERSION:$version|CLIENT:$upstream_api_key|BACKEND:$route_location|HTTPMETHOD:$upstream_method|ACCEPT:$accept|OPERATION:$operation|RESPONSETIME:$request_time_ms|STATUS:$status|SEVERITY:$severity|STATUSCODE:$status_code|STATUSMESSAGE:$status_message|CLIENTIPADDRESS:$clientip|CLIENTMESSAGEID:$clientmessageid|MESSAGEID:$messageid|REQUESTBODYSIZE:$content_length|RESPONSEBODYSIZE:$body_bytes_sent'; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267030,267030#msg-267030 From nginx-forum at forum.nginx.org Fri May 20 02:50:24 2016 From: nginx-forum at forum.nginx.org (ohenley) Date: Thu, 19 May 2016 22:50:24 -0400 Subject: CPU load monitoring / dynamically limit number of connections to server In-Reply-To: <36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org> References: <36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org> Message-ID: <94f1f9403b14d3280e11750fb5bd067e.NginxMailingListEnglish@forum.nginx.org> Found it: https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267003,267019#msg-267019 From nginx-forum at forum.nginx.org Fri May 20 05:16:56 2016 From: nginx-forum at forum.nginx.org (gatha) Date: Fri, 20 May 2016 01:16:56 -0400 Subject: nginx 
reverse proxy 502 bad gateway error In-Reply-To: <1a49240eaff9bfe3f4d44ac9596f3ce5.NginxMailingListEnglish@forum.nginx.org> References: <20130412163448.GQ62550@mdounin.ru> <1a49240eaff9bfe3f4d44ac9596f3ce5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9133629adbb241615e81bd7e5a709541.NginxMailingListEnglish@forum.nginx.org> Please help me too. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,238322,267020#msg-267020 From lists at lazygranch.com Sun May 22 01:38:42 2016 From: lists at lazygranch.com (lists at lazygranch.com) Date: Sat, 21 May 2016 18:38:42 -0700 Subject: CPU load monitoring / dynamically limit number of connections to server In-Reply-To: <94f1f9403b14d3280e11750fb5bd067e.NginxMailingListEnglish@forum.nginx.org> References: <36f3314b125706276d7a2c8616b2e764.NginxMailingListEnglish@forum.nginx.org> <94f1f9403b14d3280e11750fb5bd067e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160522013842.5468245.57643.3328@lazygranch.com> Does that really limit per the OP's request? Reading the document, it seems like a monitoring module, but not something that limits nginx. For example, ulimit is just a Linux command. Maybe you could set a cron job to sniff the load based on this monitoring and adjust system parameters accordingly. (Beware of positive feedback loops!) This was my method to set limits on nginx. First, download http://www.labs.hpe.com/research/linux/httperf/ I found my server could handle 3000 requests per minute. Note this is running httperf ON the server itself. This does not include the connection to the Internet. But now you know how much abuse the server can take. Then I ran httperf over the Internet from a number of different ISPs. The performance was more like 100 requests per second at best. Clearly the pipe I share is the limiting factor. I picked a somewhat arbitrary limit of 10 users at a time, which is actually real life most of the time. I'm not CNN or a porn site. So I set the limit request to 10 per second. 
Remember every photo, document, etc. takes one request. This is barely scientific, but it is better than nothing. ----- Original Message ----- From: ohenley Sent: Saturday, May 21, 2016 12:37 PM To: nginx at nginx.org Reply To: nginx at nginx.org Subject: Re: CPU load monitoring / dynamically limit number of connections to server Found it: https://www.scalyr.com/community/guides/how-to-monitor-nginx-the-essential-guide Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267003,267019#msg-267019 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Sun May 22 11:16:35 2016 From: nginx-forum at forum.nginx.org (redrobes) Date: Sun, 22 May 2016 07:16:35 -0400 Subject: Rewrite regex with percent signs Message-ID: <17171c4e16dfb41e9779ddc341ef932c.NginxMailingListEnglish@forum.nginx.org> Hello, I am helping an admin sort out some 404 issues by using some rewrites, which have generally been successful. However we have a couple of cases that are a bit mysterious and I hope you can help explain. This is from a vBulletin forum that used to use the vbseo extension to make the urls prettier, but that extension has been dropped now, so posts with those pretty urls don't point to the correct places. For example, we have a url of the following: /members/redrobes-albums-2d%20vs%203d%20?-picture12345-mt-pub01.jpg it needs to go to /attachment.php?attachmentid=12345 we have: location /members/ { rewrite ^/members/.+-albums-.+-picture(\d+)-.* /attachment.php?attachmentid=$1? redirect; } and this particular one is not working. It works with many others where the original url did not have the %20's in them. So there is something about those %20's that are causing these to fail. I can write a perl script and run that url through its regex and it does change them. So what does the nginx regex do differently from the perl regex with regard to % signs? Thanks. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267039,267039#msg-267039 From francis at daoine.org Sun May 22 13:05:16 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 22 May 2016 14:05:16 +0100 Subject: Errorlog shows GET timeout; Accesslog shows POST In-Reply-To: <29fa9139f7415d15b32ad4d20fcfe311.NginxMailingListEnglish@forum.nginx.org> References: <29fa9139f7415d15b32ad4d20fcfe311.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160522130516.GM3477@daoine.org> On Fri, May 20, 2016 at 03:45:15PM -0400, bclod wrote: Hi there, > We've implemented a lightweight APIGW on nginx 1.9.9 using openresty > packages to customize the handling/proxying logic. We have dozens clients > that are able to leverage this implementation just fine. I suspect that the details of the request/response, or at least the details of what the request-data flow-response is supposed to be, will be useful in finding and fixing the problem. > We have one client that is experiencing some weird issues when trying to > access our APIGW. This is only for certain requests coming from their app. > We strongly believe it's a malformed request from their side but we're > trying to help them find the issue. Do you know what your side expects the request to be? (your API design.) Can you see what request your side receives? (tcpdump/ssldump, or perhaps nginx-side extra logging.) If they differ, that points at where the problem is. > The behavior we see on our side is > quite strange as well. The error.log will show a GET request that timed > out. The error log entry shows that nginx thinks that the requesting client closed the connection before nginx was ready to close the connection. That might be "timed out", or might be "error". > The access.log shows a POST request for the same request. Below are > the two entries; the response time always seems to be around 5 seconds; but > lowest timeout explicitly set in our configuration is 10 seconds. 
The access log seems very customised to your site; and includes some variables that are not nginx-default. Perhaps they are openresty-default? (Google doesn't make it look like they obviously are.) You say "timeout explicitly set in our configuration". If it was the client that closed the connection because its idea of a timeout passed, then your configuration is irrelevant unless it sets the client-side timeout. > |LOCATION:GATEWAY|CN:NA|SSLPROTOCOL:TLSv1|SSLCIPHER:ECDHE-RSA-AES256-SHA|SERVICE:NA|VERSION:1|CLIENT:NA|BACKEND:|HTTPMETHOD:POST|ACCEPT:application/json; HTTPMETHOD there is $upstream_method, which is not (by default) what the client sent to nginx. I think that there's not enough information here to allow someone else discover the problem. If you can make a reproducible failure case, that will probably make it easier to discover. Good luck with it, f -- Francis Daly francis at daoine.org From francis at daoine.org Sun May 22 13:42:17 2016 From: francis at daoine.org (Francis Daly) Date: Sun, 22 May 2016 14:42:17 +0100 Subject: Rewrite regex with percent signs In-Reply-To: <17171c4e16dfb41e9779ddc341ef932c.NginxMailingListEnglish@forum.nginx.org> References: <17171c4e16dfb41e9779ddc341ef932c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160522134217.GN3477@daoine.org> On Sun, May 22, 2016 at 07:16:35AM -0400, redrobes wrote: Hi there, > For example, we have a url of the following: > /members/redrobes-albums-2d%20vs%203d%20?-picture12345-mt-pub01.jpg The %20 pieces in there are url-encoded spaces. In a "location" or a "rewrite", you would have to match a single space character each. However, there is also a ? in the url; that marks the start of the query string. A "location" or "rewrite" in nginx will *not* consider that part of the url. > it needs to go to > > /attachment.php?attachmentid=12345 It is not immediately clear to me which parts of the original url are important in deciding whether the request should be redirected or not. 
> we have: > > location /members/ { > rewrite ^/members/.+-albums-.+-picture(\d+)-.* > /attachment.php?attachmentid=$1? redirect; > } That suggests that just those three words matter. You might be able to put something together involving "$args" matching "-picture(\d+)-" if the request matches "^/members/.*-albums-", perhaps? Alternatively, perhaps the thing that created the url in the first place, incorrectly did not url-encode the ? to %3F. > and this particular one is not working. It works with many others where the > original url did not have the %20's in them. So there is something about > those %20's that are causing these to fail. I suspect that it is the ? rather than the %20, from the one example you have given. > I can write a perl script and run that url through its regex and it does > change them. > > So what does the nginx regex do different from perl regex with regard to % > signs. With regard to % signs, nginx regex uses the %-unencoded version. With regard to ?, some nginx parts do not consider anything after the ? when matching. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Sun May 22 17:18:19 2016 From: nginx-forum at forum.nginx.org (redrobes) Date: Sun, 22 May 2016 13:18:19 -0400 Subject: Rewrite regex with percent signs In-Reply-To: <20160522134217.GN3477@daoine.org> References: <20160522134217.GN3477@daoine.org> Message-ID: <956bface6ac34b9a8182a3663af1eff0.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Sun, May 22, 2016 at 07:16:35AM -0400, redrobes wrote: > > Hi there, > > > For example, we have a url of the following: > > /members/redrobes-albums-2d%20vs%203d%20?-picture12345-mt-pub01.jpg > > The %20 pieces in there are url-encoded spaces. In a "location" or a > "rewrite", you would have to match a single space character each. > > However, there is also a ? in the url; that marks the start of the > query > string. 
A "location" or "rewrite" in nginx will *not* consider that > part > of the url. Ah! Thanks. I didn't spot that one amongst all the other odd chars there. Yes, nginx does indeed treat the ? as a different character to perl, and the hex codes convert to "2d vs 3d ?". I.e. the ? was in the title of the post and is not the start of args. I think that's it and it makes sense now. I can understand what is going on. It could have been mighty hard to fix this case, since we are not going to know in advance whether the ? was part of the url or the start of args, but I think in our case we know we're going to dump all the args anyway and substitute our own in. So I think there may be the possibility of appending the args to the rewrite before we do the match. Not sure at this point. But thanks Francis - I think you have solved it. > > > it needs to go to > > > > /attachment.php?attachmentid=12345 > > It is not immediately clear to me which parts of the original url are > important in deciding whether the request should be redirected or not. > > > we have: > > > > location /members/ { > > rewrite ^/members/.+-albums-.+-picture(\d+)-.* > > /attachment.php?attachmentid=$1? redirect; > > } > > That suggests that just those three words matter. You might be able > to put something together involving "$args" matching "-picture(\d+)-" > if the request matches "^/members/.*-albums-", perhaps? > > Alternatively, perhaps the thing that created the url in the first > place, > incorrectly did not url-encode the ? to %3F. > > > and this particular one is not working. It works with many others > where the > > original url did not have the %20's in them. So there is something > about > > those %20's that are causing these to fail. > > I suspect that it is the ? rather than the %20, from the one example > you have given. > > > I can write a perl script and run that url through its regex and it > does > > change them. 
> > > > So what does the nginx regex do different from perl regex with > regard to % > > signs. > > With regard to % signs, nginx regex uses the %-unencoded version. With > regard to ?, some nginx parts do not consider anything after the ? > when > matching. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267039,267044#msg-267044 From pankajitbhu at gmail.com Mon May 23 07:01:00 2016 From: pankajitbhu at gmail.com (Pankaj Chaudhary) Date: Mon, 23 May 2016 12:31:00 +0530 Subject: reading config during run time Message-ID: Hi, In my module I want to read config values at run time. Is there a method available in nginx which I can use? Please suggest. Regards, Pankaj -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Mon May 23 08:58:02 2016 From: nginx-forum at forum.nginx.org (muzi) Date: Mon, 23 May 2016 04:58:02 -0400 Subject: nginx on high load (upstream prematurely closed connection while reading response header from upstream) In-Reply-To: References: Message-ID: <787c7040b250c3b14ee37dc2b4ff5f04.NginxMailingListEnglish@forum.nginx.org> Thank you once again Francis Daly. I am now ignoring these errors, as I do not see any errors in the nginx access logs and the site is working under high load, with 200 response codes in the access logs. 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,266978,267053#msg-267053 From nginx-forum at forum.nginx.org Mon May 23 12:45:50 2016 From: nginx-forum at forum.nginx.org (bclod) Date: Mon, 23 May 2016 08:45:50 -0400 Subject: Errorlog shows GET timeout; Accesslog shows POST In-Reply-To: <20160522130516.GM3477@daoine.org> References: <20160522130516.GM3477@daoine.org> Message-ID: Francis, You are totally right! I forgot that we were using some crap logic for the variables; we are switching this to $request_method (the actual built-in variable). 
Your notes confirm to me what I was thinking all along: that the client is likely closing the connection after 5 seconds. We are doing a debug session today with tcpdump to get more details. Thanks for setting me straight! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267030,267057#msg-267057 From nginx-forum at forum.nginx.org Mon May 23 16:47:19 2016 From: nginx-forum at forum.nginx.org (drook) Date: Mon, 23 May 2016 12:47:19 -0400 Subject: nginx and HLS Message-ID: <93e900ef42df249395c3e2730fe1a65d.NginxMailingListEnglish@forum.nginx.org> Hi. I see there's an ngx_http_hls_module for HLS, but it seems like it's for video content, and I need to serve static mp3/audio files with nginx, providing some adaptive bitrate adjustment. Is it possible with nginx? Thanks. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267073,267073#msg-267073 From medvedev.yp at gmail.com Mon May 23 17:42:14 2016 From: medvedev.yp at gmail.com (Yuriy Medvedev) Date: Mon, 23 May 2016 20:42:14 +0300 Subject: nginx and HLS In-Reply-To: <93e900ef42df249395c3e2730fe1a65d.NginxMailingListEnglish@forum.nginx.org> References: <93e900ef42df249395c3e2730fe1a65d.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hi. What protocol do you want to use? On 23 May 2016 at 19:47, "drook" wrote: > Hi. > > I see there's an ngx_http_hls_module for HLS, but it seems like it's for video > content, and I need to serve static mp3/audio files with nginx, providing > some adaptive bitrate adjustment. Is it possible with nginx? > > Thanks. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,267073,267073#msg-267073 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Mon May 23 20:21:12 2016 From: nginx-forum at forum.nginx.org (bclod) Date: Mon, 23 May 2016 16:21:12 -0400 Subject: GET request with transfer-encoding causing issue Message-ID: <3f244049ca0c9dc44b3f0bda56c1e27d.NginxMailingListEnglish@forum.nginx.org> We have a client sending a GET request who is also erroneously sending a "transfer-encoding: chunked" header. This is causing nginx to wait for the client to send data before the client finally times out. Is there any way to tell nginx to ignore this header? We have asked the client to get their act together, but in the meantime we were wondering if there was anything we could do. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267076,267076#msg-267076 From ahutchings at nginx.com Tue May 24 10:10:31 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Tue, 24 May 2016 11:10:31 +0100 Subject: GET request with transfer-encoding causing issue In-Reply-To: <3f244049ca0c9dc44b3f0bda56c1e27d.NginxMailingListEnglish@forum.nginx.org> References: <3f244049ca0c9dc44b3f0bda56c1e27d.NginxMailingListEnglish@forum.nginx.org> Message-ID: As if by magic: http://nginx.org/en/docs/http/ngx_http_core_module.html#chunked_transfer_encoding Kind Regards Andrew On 23/05/16 21:21, bclod wrote: > We have a client sending a GET request who is also erroneously sending a > "transfer-encoding: chunked" header. > > This is causing nginx to wait for the client to send data before the client > finally times out. Is there any way to tell nginx to ignore this header? We > have asked the client to get their act together, but in the meantime we were > wondering if there was anything we could do. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267076,267076#msg-267076 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. 
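[For anyone reaching this thread from a search, here is where the directive Andrew links to would sit. This is a minimal sketch with placeholder names and addresses, not configuration from the thread -- and note where the directive applies:]

```nginx
http {
    upstream backend {
        server 127.0.0.1:8080;  # placeholder backend address
    }

    server {
        listen 80;

        location / {
            # Controls whether nginx uses chunked encoding in the
            # *responses* it sends to clients. As the follow-ups in
            # this thread point out, it does not change how nginx
            # parses the headers of an incoming client request.
            chunked_transfer_encoding off;
            proxy_pass http://backend;
        }
    }
}
```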
From nginx-forum at forum.nginx.org Tue May 24 12:17:43 2016 From: nginx-forum at forum.nginx.org (bclod) Date: Tue, 24 May 2016 08:17:43 -0400 Subject: GET request with transfer-encoding causing issue In-Reply-To: References: Message-ID: Hey Andrew, Thanks for your reply. Actually we tried that already and still faced the same issue. It can easily be reproduced; send any GET request with header "transfer-encoding: chunked". We tried the below already. chunked_transfer_encoding off; proxy_buffering off; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267076,267084#msg-267084 From francis at daoine.org Tue May 24 12:21:06 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 24 May 2016 13:21:06 +0100 Subject: GET request with transfer-encoding causing issue In-Reply-To: References: <3f244049ca0c9dc44b3f0bda56c1e27d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160524122106.GO3477@daoine.org> On Tue, May 24, 2016 at 11:10:31AM +0100, Andrew Hutchings wrote: Hi there, > As if by magic: http://nginx.org/en/docs/http/ngx_http_core_module.html#chunked_transfer_encoding Will that work here? That directive seems to be about nginx sending a response to a client, and choosing not to send it "chunked" even though it normally would. This problem seems to be nginx reading a request from a client, where the request is malformed. 
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 24 12:23:53 2016 From: nginx-forum at forum.nginx.org (bclod) Date: Tue, 24 May 2016 08:23:53 -0400 Subject: GET request with transfer-encoding causing issue In-Reply-To: <20160524122106.GO3477@daoine.org> References: <20160524122106.GO3477@daoine.org> Message-ID: <8d2cff6e8c0ed19ee0f7a34d9110a486.NginxMailingListEnglish@forum.nginx.org> Francis Daly Wrote: ------------------------------------------------------- > On Tue, May 24, 2016 at 11:10:31AM +0100, Andrew Hutchings wrote: > > Hi there, > > > As if by magic: > http://nginx.org/en/docs/http/ngx_http_core_module.html#chunked_transf > er_encoding > > Will that work here? > > That directive seems to be about nginx sending a response to a client, > and choosing not to send it "chunked" even though it normally would. > > This problem seems to be nginx reading a request from a client, where > the request is malformed. > > f > -- > Francis Daly francis at daoine.org > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx See my other reply; it does not work (we had already tried it before asking). Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267076,267086#msg-267086 From ahutchings at nginx.com Tue May 24 12:30:25 2016 From: ahutchings at nginx.com (Andrew Hutchings) Date: Tue, 24 May 2016 13:30:25 +0100 Subject: GET request with transfer-encoding causing issue In-Reply-To: <20160524122106.GO3477@daoine.org> References: <3f244049ca0c9dc44b3f0bda56c1e27d.NginxMailingListEnglish@forum.nginx.org> <20160524122106.GO3477@daoine.org> Message-ID: <4de5395d-d957-ffcf-22a3-14cea7d2083d@nginx.com> On 24/05/16 13:21, Francis Daly wrote: > On Tue, May 24, 2016 at 11:10:31AM +0100, Andrew Hutchings wrote: > > Hi there, > >> As if by magic: http://nginx.org/en/docs/http/ngx_http_core_module.html#chunked_transfer_encoding > > Will that work here? 
> > That directive seems to be about nginx sending a response to a client, > and choosing not to send it "chunked" even though it normally would. > > This problem seems to be nginx reading a request from a client, where > the request is malformed. Ah, good point. No, it won't help in this case. I'm not sure anything will without hacking code onto NGINX. It does require clients to somewhat meet the HTTP protocol specifications, unfortunately. Kind Regards -- Andrew Hutchings (LinuxJedi) Technical Product Manager, NGINX Inc. From nginx-forum at forum.nginx.org Tue May 24 14:17:55 2016 From: nginx-forum at forum.nginx.org (hbajaj2) Date: Tue, 24 May 2016 10:17:55 -0400 Subject: Proxy Pass | Upstream with query_string Message-ID: <7ee4b50f442fa5bb5e38d8eb13070e27.NginxMailingListEnglish@forum.nginx.org> I need to use the proxy_pass directive for an upstream server which has a query string, which allows it to authenticate to the upstream server. How can I ensure that the query string is passed on with every request that goes via this reverse proxy? My simplified configuration is location /RetrieveProductWS { proxy_pass https://apiserver/secure/v1.0/retrieveProductConditions?client_id=111-aaaa-2bbb&client_secret=aaaaabbbbbbccccddd; } Please suggest a possible way to achieve this requirement. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267095,267095#msg-267095 From mdounin at mdounin.ru Tue May 24 16:26:47 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 May 2016 19:26:47 +0300 Subject: nginx-1.11.0 Message-ID: <20160524162646.GA36620@mdounin.ru> Changes with nginx 1.11.0 24 May 2016 *) Feature: the "transparent" parameter of the "proxy_bind", "fastcgi_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. *) Feature: the $request_id variable. *) Feature: the "map" directive supports combinations of multiple variables as resulting values. 
*) Feature: now nginx checks if EPOLLRDHUP events are supported by kernel, and optimizes connection handling accordingly if the "epoll" method is used. *) Feature: the "ssl_certificate" and "ssl_certificate_key" directives can be specified multiple times to load certificates of different types (for example, RSA and ECDSA). *) Feature: the "ssl_ecdh_curve" directive now allows specifying a list of curves when using OpenSSL 1.0.2 or newer; by default a list built into OpenSSL is used. *) Change: to use DHE ciphers it is now required to specify parameters using the "ssl_dhparam" directive. *) Feature: the $proxy_protocol_port variable. *) Feature: the $realip_remote_port variable in the ngx_http_realip_module. *) Feature: the ngx_http_realip_module is now able to set the client port in addition to the address. *) Change: the "421 Misdirected Request" response now used when rejecting requests to a virtual server different from one negotiated during an SSL handshake; this improves interoperability with some HTTP/2 clients when using client certificates. *) Change: HTTP/2 clients can now start sending request body immediately; the "http2_body_preread_size" directive controls size of the buffer used before nginx will start reading client request body. *) Bugfix: cached error responses were not updated when using the "proxy_cache_bypass" directive. 
-- Maxim Dounin http://nginx.org/ From francis at daoine.org Tue May 24 16:44:33 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 24 May 2016 17:44:33 +0100 Subject: Proxy Pass | Upstream with query_string In-Reply-To: <7ee4b50f442fa5bb5e38d8eb13070e27.NginxMailingListEnglish@forum.nginx.org> References: <7ee4b50f442fa5bb5e38d8eb13070e27.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160524164433.GP3477@daoine.org> On Tue, May 24, 2016 at 10:17:55AM -0400, hbajaj2 wrote: Hi there, > I need to use the proxy_pass directive for an upstream server which has a > query string, which allows it to authenticate to the upstream server. How > can I ensure that the query string is passed on with every request that goes via > this reverse proxy? proxy_pass (http://nginx.org/r/proxy_pass) allows you to build the host, port and uri to request, using variables. Probably you can use that to arrange what you want. What request does the client send to nginx? What request should nginx send to its proxy_pass upstream? When you know that, it may be more obvious what proxy_pass configuration to use. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue May 24 17:15:54 2016 From: nginx-forum at forum.nginx.org (hheiko) Date: Tue, 24 May 2016 13:15:54 -0400 Subject: upstream server temporarily disabled while reading response header from upstream Message-ID: We are running an nginx reverse proxy on Windows; the upstream consists of two Linux-based Apache/PHP backend servers. There is only one PHP application to be proxied. 
After starting nginx everything works fine, but then the backends become partly unresponsive and nginx is logging 2016/05/24 18:42:27 [warn] 5116#4660: *8863 upstream server temporarily disabled while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET / HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/", host: "wahl2.hannover-stadt.de" 2016/05/24 18:42:27 [error] 5116#4660: *8863 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET / HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/", host: "wahl2.hannover-stadt.de" 2016/05/24 18:42:36 [warn] 5116#4660: *8862 upstream server temporarily disabled while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET / HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/", host: "wahl2.hannover-stadt.de" 2016/05/24 18:42:36 [error] 5116#4660: *8862 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET / HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/", host: "wahl2.hannover-stadt.de" 2016/05/24 18:42:37 [warn] 3560#5948: *8870 upstream server temporarily disabled while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET /modules/setflashsession.php?set=0 HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/modules/setflashsession.php?set=0", host: "wahl2.hannover-stadt.de", referrer: "http://wahl2.hannover-stadt.de/" 2016/05/24 
18:42:37 [error] 3560#5948: *8870 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET /modules/setflashsession.php?set=0 HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/modules/setflashsession.php?set=0", host: "wahl2.hannover-stadt.de", referrer: "http://wahl2.hannover-stadt.de/" 2016/05/24 18:42:47 [warn] 3560#5948: *8870 upstream server temporarily disabled while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET /modules/setflashsession.php?set=0 HTTP/1.1", upstream: "http://192.168.57.88:80/wrs/modules/setflashsession.php?set=0", host: "wahl2.hannover-stadt.de", referrer: "http://wahl2.hannover-stadt.de/" 2016/05/24 18:42:47 [error] 3560#5948: *8870 upstream timed out (10060: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond) while reading response header from upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: "GET /modules/setflashsession.php?set=0 HTTP/1.1", upstream: "http://192.168.57.88:80/wrs/modules/setflashsession.php?set=0", host: "wahl2.hannover-stadt.de", referrer: "http://wahl2.hannover-stadt.de/" Googling for "upstream server temporarily disabled while reading response header from upstream" brings up a few Russian links but no clue as to the reason for this. There is no load on the system; both backend servers and nginx are idle.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267117,267117#msg-267117 From mdounin at mdounin.ru Tue May 24 17:39:00 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 24 May 2016 20:39:00 +0300 Subject: upstream server temporarily disabled while reading response header from upstream In-Reply-To: References: Message-ID: <20160524173900.GE36620@mdounin.ru> Hello! On Tue, May 24, 2016 at 01:15:54PM -0400, hheiko wrote: > We are runnging nginx reverse proxy on windows, the upstream consists of two > lunixes based Apache/PHP Backend Servers. There is only one PHP application > to be proxied. After starting nginx everything works fine, but then the > backends become partly unresponsive and nginx is logging > > 2016/05/24 18:42:27 [warn] 5116#4660: *8863 upstream server temporarily > disabled while reading response header from upstream, client: 77.23.138.88, > server: wahl2.hannover-stadt.de, request: "GET / HTTP/1.1", upstream: > "http://192.168.57.14:80/wrs/", host: "wahl2.hannover-stadt.de" > 2016/05/24 18:42:27 [error] 5116#4660: *8863 upstream timed out (10060: A > connection attempt failed because the connected party did not properly > respond after a period of time, or established connection failed because > connected host has failed to respond) while reading response header from > upstream, client: 77.23.138.88, server: wahl2.hannover-stadt.de, request: > "GET / HTTP/1.1", upstream: "http://192.168.57.14:80/wrs/", host: > "wahl2.hannover-stadt.de" [...] > Googeling for " upstream server temporarily disabled while reading response > header from upstream" brings up a few russian links but no clue whats the > reason for this. There is no load on the system, both backend servers and > nginx are idle. The "upstream server temporarily disabled" warning means exactly this: the server was disabled due to failures, and will not be considered for balancing for some time. 
This warning is logged in addition to normal logging of errors ("upstream timed out" in your case), and means that the number of errors observed by nginx crossed the max_fails threshold. More details on max_fails / fail_timeout can be found here: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#max_fails -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue May 24 18:00:02 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 24 May 2016 14:00:02 -0400 Subject: No subject Message-ID: Hello Nginx users, Now available: Nginx 1.11.0 for Windows https://kevinworthington.com/nginxwin1110 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin-based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, May 24, 2016 at 12:26 PM, Maxim Dounin wrote: > Changes with nginx 1.11.0 24 May > 2016 > > *) Feature: the "transparent" parameter of the "proxy_bind", > "fastcgi_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" > directives. > > *) Feature: the $request_id variable. > > *) Feature: the "map" directive supports combinations of multiple > variables as resulting values. > > *) Feature: now nginx checks if EPOLLRDHUP events are supported by > kernel, and optimizes connection handling accordingly if the "epoll" > method is used. > > *) Feature: the "ssl_certificate" and "ssl_certificate_key" directives > can be specified multiple times to load certificates of different > types (for example, RSA and ECDSA).
> > *) Feature: the "ssl_ecdh_curve" directive now allows specifying a list > of curves when using OpenSSL 1.0.2 or newer; by default a list built > into OpenSSL is used. > > *) Change: to use DHE ciphers it is now required to specify parameters > using the "ssl_dhparam" directive. > > *) Feature: the $proxy_protocol_port variable. > > *) Feature: the $realip_remote_port variable in the > ngx_http_realip_module. > > *) Feature: the ngx_http_realip_module is now able to set the client > port in addition to the address. > > *) Change: the "421 Misdirected Request" response now used when > rejecting requests to a virtual server different from one negotiated > during an SSL handshake; this improves interoperability with some > HTTP/2 clients when using client certificates. > > *) Change: HTTP/2 clients can now start sending request body > immediately; the "http2_body_preread_size" directive controls size > of > the buffer used before nginx will start reading client request body. > > *) Bugfix: cached error responses were not updated when using the > "proxy_cache_bypass" directive. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 24 21:19:16 2016 From: nginx-forum at forum.nginx.org (George) Date: Tue, 24 May 2016 17:19:16 -0400 Subject: nginx-1.11.0 In-Reply-To: <20160524162646.GA36620@mdounin.ru> References: <20160524162646.GA36620@mdounin.ru> Message-ID: Nice RSA + ECDSA certs support! What's the recommended way to set up HTTP Public Key Pinning with regard to dual SSL certificates?
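With two leaf certificates served for the same host, an HPKP header would have to pin the public keys of both leaves, plus at least one backup pin as RFC 7469 requires. A minimal sketch — the pin values below are placeholders, not real hashes, and would be computed from each certificate's SPKI:

```nginx
# Hypothetical pins: one for the RSA leaf, one for the ECDSA leaf,
# and one backup key held offline. Real values are base64 SHA-256
# digests of each key's SubjectPublicKeyInfo.
add_header Public-Key-Pins
    'pin-sha256="RSA_LEAF_PIN_BASE64="; pin-sha256="ECDSA_LEAF_PIN_BASE64="; pin-sha256="BACKUP_PIN_BASE64="; max-age=2592000'
    always;
```

Pinning the shared intermediate instead of both leaves is another option that keeps the header shorter, at the cost of trusting any certificate that intermediate issues.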
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267113,267124#msg-267124 From lexey.genus at gmail.com Wed May 25 09:01:24 2016 From: lexey.genus at gmail.com (Alexey Genus) Date: Wed, 25 May 2016 12:01:24 +0300 Subject: nginx-1.11.0 In-Reply-To: References: <20160524162646.GA36620@mdounin.ru> Message-ID: > *) Feature: the "map" directive supports combinations of multiple variables as resulting values. Does this mean this ticket can be resolved? https://trac.nginx.org/nginx/ticket/663 On Wed, May 25, 2016 at 12:19 AM, George wrote: > nice RSA + ECDSA certs support ! > > what's the recommended way to setup HTTP Public Key Pinning with regards to > dual SSL certificates ? > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,267113,267124#msg-267124 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 25 12:15:49 2016 From: nginx-forum at forum.nginx.org (vedranf) Date: Wed, 25 May 2016 08:15:49 -0400 Subject: Possible cached file corruption with aio_write enabled Message-ID: <014a00c69f9f493e1d0929eb0f801128.NginxMailingListEnglish@forum.nginx.org> Hello, I've recently upgraded one of the nginx servers within a caching (proxy_cache module) cluster from 1.8.1 to 1.10, and soon after I noticed an unusually high number of various errors only on that server, which I eventually pinpointed to a mismatch between the actual cached file size on disk and the size reported in file metadata (either content-length or something else). Apparently, cached files on 1.10 are cut short and miss up to a few hundred kilobytes at their ends (for files over 100 MB in total size). For the few files I checked, the larger the file was, the more content it missed at the end. Eventually I found the culprit to be aio_write, which I enabled at the same time I upgraded nginx.
Disabling it and removing all already cached files resolved the problem. Relevant directives: thread_pool default threads=4 max_queue=65536; sendfile on; aio threads=default; aio_write on; # output_buffers are used if sendfile is not used # output_buffers 8 512k; read_ahead 1; proxy_cache_path /cache ... use_temp_path=on; proxy_buffering on; proxy_buffers 32 64k; proxy_busy_buffers_size 128k; proxy_temp_file_write_size 128k; OS is Linux 4.1. Thanks, Vedran Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267142,267142#msg-267142 From xeioex at nginx.com Wed May 25 12:39:52 2016 From: xeioex at nginx.com (Dmitry Volyntsev) Date: Wed, 25 May 2016 15:39:52 +0300 Subject: nginx-1.11.0 In-Reply-To: References: <20160524162646.GA36620@mdounin.ru> Message-ID: <57459D18.6090505@nginx.com> On 25.05.2016 12:01, Alexey Genus wrote: > > *) Feature: the "map" directive supports combinations of multiple > variables as resulting values. > Does this mean this ticket can be resolved? > https://trac.nginx.org/nginx/ticket/663 Yes, it is resolved. From nginx-forum at forum.nginx.org Wed May 25 14:06:56 2016 From: nginx-forum at forum.nginx.org (ohenley) Date: Wed, 25 May 2016 10:06:56 -0400 Subject: CPU load monitoring / dynamically limit number of connections to server In-Reply-To: <20160522013842.5468245.57643.3328@lazygranch.com> References: <20160522013842.5468245.57643.3328@lazygranch.com> Message-ID: Thx gariac. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267003,267145#msg-267145 From michelle at nginx.com Wed May 25 14:41:06 2016 From: michelle at nginx.com (Michelle Brinich) Date: Wed, 25 May 2016 07:41:06 -0700 Subject: 2016 NGINX User Survey: Help Us Shape the Future Message-ID: Hello! It's that time of year again for the annual NGINX User Survey. We want to hear about your experiences with NGINX today, and your feedback on how NGINX and our products can continue to evolve. Please take a moment to complete the 2016 NGINX User Survey[1].
It will remain open through June 8, 2016. [1] http://survey.newkind.com/r/AShdWE9g Michelle (On behalf of the entire NGINX team) -- Michelle Brinich http://nginx.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed May 25 20:02:46 2016 From: nginx-forum at forum.nginx.org (jrhodes) Date: Wed, 25 May 2016 16:02:46 -0400 Subject: Sending basic auth to "backend" servers Message-ID: Hey everyone, I'm trying to achieve something a little unique (have read a LOT of documentation and posts). I want to use nginx as a LB to a handful of squid servers I have, to distribute http requests on a round-robin basis. Each squid web server in the "backend" accepts a unique username and password. Would anyone be able to point me in the right direction config-wise on how to define a unique user/pass for each server in the load-balanced pool? Many thanks JR Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267156,267156#msg-267156 From nginx-forum at forum.nginx.org Thu May 26 11:04:47 2016 From: nginx-forum at forum.nginx.org (Dimka) Date: Thu, 26 May 2016 07:04:47 -0400 Subject: Nginx + lua-nginx, get ssl_session_id In-Reply-To: References: Message-ID: <9f68b76e4bcd9308206d3a21bc2e42df.NginxMailingListEnglish@forum.nginx.org> I'm facing the same problem: an empty $ssl_session_id variable. With ssl_session_tickets off, $ssl_session_id always contains an ID. How can I identify a client connection if no SSL session ID is available? I need a "session" identifier to pass to the backend, which authorizes requests by this ID.
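Since ticket-based resumption leaves $ssl_session_id empty, one workaround — the one this thread eventually settles on — is to forward the client address/port pair to the backend and let it key its cache on that. A minimal sketch, using a made-up header name:

```nginx
location / {
    # X-Conn-Id is a hypothetical header name; it identifies the TCP
    # connection rather than the TLS session, so it changes whenever
    # the client reconnects.
    proxy_set_header X-Conn-Id "$remote_addr:$remote_port";
    proxy_pass http://backend;
}
```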
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255157,267164#msg-267164 From aapo.talvensaari at gmail.com Thu May 26 12:53:24 2016 From: aapo.talvensaari at gmail.com (Aapo Talvensaari) Date: Thu, 26 May 2016 15:53:24 +0300 Subject: Nginx + lua-nginx, get ssl_session_id In-Reply-To: <9f68b76e4bcd9308206d3a21bc2e42df.NginxMailingListEnglish@forum.nginx.org> References: <9f68b76e4bcd9308206d3a21bc2e42df.NginxMailingListEnglish@forum.nginx.org> Message-ID: On 26 May 2016 at 14:04, Dimka wrote: > Face the same problem, empty $ssl_session_id variable. > > If ssl_session_ticket off, $ssl_session_id always contain ID. > > How can I identify client connection if no ssl session id available? > I would also like to know the answer to this. It is kinda either: 1. the server and client store a session id, or 2. the client sends tickets that contain info on how to continue without the server storing anything. So with tickets you don't have a session id. I'm not sure whether some kind of identifier could still be computed from the tickets and used across multiple round trips, but I think not (?). Yes, it sucks a little bit. With session_id we had a nice way to identify the client built into the protocol; with tickets it seems we don't. Regards Aapo -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu May 26 13:25:21 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 26 May 2016 16:25:21 +0300 Subject: Sending basic auth to "backend" servers In-Reply-To: References: Message-ID: <20160526132520.GH36620@mdounin.ru> Hello! On Wed, May 25, 2016 at 04:02:46PM -0400, jrhodes wrote: > Hey everyone > > I'm trying to achieve something a little unique (have read a LOT of > documentation and posts) > > I want to use ngnix as a LB to a handful of squid servers I have to > distribute http requests on a round robin basis. > > Each squid web server in the "backend" accepts a unique username and > password.
> > Would anyone be able to point me in the right direction config wise on how > to define a unique user/pass for each server in the load balanced pool? The normal logic of load balancing in nginx assumes the same request can be sent to multiple backend servers, and therefore it won't work for you - as you need to construct a unique request to each backend, with its own Authorization header. To do what you want you may try distributing requests "by hand", e.g., using the split_clients module: split_clients $remote_addr $backend { 50% backend1.example.com; * backend2.example.com; } map $backend $backend_auth { backend1.example.com QWxhZGRpbjpvcGVuIHNlc2FtZQ==; backend2.example.com QWxhZGRpbjpvcGVuIHNlc2FtZQ==; } server { ... location / { proxy_pass http://$backend; proxy_set_header Authorization "Basic $backend_auth"; } } More information can be found in the documentation here: http://nginx.org/en/docs/http/ngx_http_split_clients_module.html http://nginx.org/en/docs/http/ngx_http_map_module.html http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_pass -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Thu May 26 13:46:28 2016 From: nginx-forum at forum.nginx.org (Dimka) Date: Thu, 26 May 2016 09:46:28 -0400 Subject: Nginx + lua-nginx, get ssl_session_id In-Reply-To: References: Message-ID: <92b26bc7c9611ff1fee3f43456da0543.NginxMailingListEnglish@forum.nginx.org> Yes, impossible. In my case, I can use the remote IP/port; I think it's enough to identify. I need it to prevent the extra DB queries which are done with the first request. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,255157,267174#msg-267174 From nginx-forum at forum.nginx.org Thu May 26 19:07:09 2016 From: nginx-forum at forum.nginx.org (jrhodes) Date: Thu, 26 May 2016 15:07:09 -0400 Subject: Sending basic auth to "backend" servers In-Reply-To: <20160526132520.GH36620@mdounin.ru> References: <20160526132520.GH36620@mdounin.ru> Message-ID: Maxim, thanks so much. I'll read those parts!
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267156,267184#msg-267184 From nginx-forum at forum.nginx.org Fri May 27 07:15:25 2016 From: nginx-forum at forum.nginx.org (onecrazymonkey) Date: Fri, 27 May 2016 03:15:25 -0400 Subject: Normal memory usage for SSL terminating, reverse proxy nginx? Message-ID: It has been a difficult topic to research. The nginx instance is doing nothing more than what the subject stated. It reverse proxies to a backend, load balanced set of web app instances and terminates SSL for a large number of unique domains each with their own SSL cert. Here's a `ps aux` of nginx running after a clean start and zero (out of rotation) traffic. root 20 0 676052 598224 1848 S 0.0 16.5 0:00.06 nginx nginx 20 0 675552 597204 1228 S 0.0 16.5 0:00.44 nginx nginx 20 0 675552 596612 636 S 0.0 16.5 0:00.36 nginx Looking at that process list, nginx is using about 676mb of RAM for ~400 vhosts each with their own unique SSL cert for a unique domain. Here's an example of a vhost server config. They're all generated based on the same base template: server { listen 443 ssl proxy_protocol; server_name www.; access_log /var/log/nginx/access_vhost_443.log accesslog; error_log /var/log/nginx/error_vhost_443.log warn; real_ip_header proxy_protocol; ssl on; ssl_certificate /etc/nginx/ssl//.crt; ssl_certificate_key /etc/nginx/ssl//.key; ssl_stapling on; ssl_stapling_verify on; resolver internal-dns.vpc valid=60s; set $internal "upstream-load-balancer.vpc"; location / { if ($denied) { return 444; } proxy_pass http://$internal; } } Now, this wouldn't be all that bad. 1.69mb of memory per vhost isn't horrible, high, but not unsustainable. However, if I do `nginx -s reload` or restart via systemd service... root 20 0 1370188 1.176g 3240 S 0.0 33.4 0:14.98 nginx nginx 20 0 1370192 1.175g 2584 S 0.3 33.4 2:39.95 nginx nginx 20 0 1370192 1.175g 2584 S 1.7 33.4 2:28.42 nginx It doubles the memory consumption! It never goes up or down drastically again. 
It's as if it duplicates and never frees or releases unless you do a restart. This was tested on a handful of AWS EC2 instance types using vanilla Centos7 and both nginx 1.6.3 (stable in centos repos) and nginx 1.10.0 (nginx.org repo). In summary, my questions are thus: - Is it normal for nginx to use ~1.7mb per SSL vhost? - Is there a way to reduce that memory usage? - Am I the only one that experiences the doubling of nginx memory usage after a nginx reload? - Is that a bug? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267189,267189#msg-267189 From mdounin at mdounin.ru Fri May 27 13:58:18 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 27 May 2016 16:58:18 +0300 Subject: Normal memory usage for SSL terminating, reverse proxy nginx? In-Reply-To: References: Message-ID: <20160527135818.GP36620@mdounin.ru> Hello! On Fri, May 27, 2016 at 03:15:25AM -0400, onecrazymonkey wrote: > It has been a difficult topic to research. The nginx instance is doing > nothing more than what the subject stated. It reverse proxies to a backend, > load balanced set of web app instances and terminates SSL for a large number > of unique domains each with their own SSL cert. Here's a `ps aux` of nginx > running after a clean start and zero (out of rotation) traffic. > > root 20 0 676052 598224 1848 S 0.0 16.5 0:00.06 nginx > > nginx 20 0 675552 597204 1228 S 0.0 16.5 0:00.44 nginx > > nginx 20 0 675552 596612 636 S 0.0 16.5 0:00.36 nginx > > Looking at that process list, nginx is using about 676mb of RAM for ~400 > vhosts each with their own unique SSL cert for a unique domain. Here's an > example of a vhost server config. 
They're all generated based on the same > base template: > > server { > listen 443 ssl proxy_protocol; > server_name www.; > access_log /var/log/nginx/access_vhost_443.log accesslog; > error_log /var/log/nginx/error_vhost_443.log warn; > real_ip_header proxy_protocol; > > ssl on; > ssl_certificate /etc/nginx/ssl//.crt; > ssl_certificate_key /etc/nginx/ssl//.key; > > ssl_stapling on; > ssl_stapling_verify on; > > resolver internal-dns.vpc valid=60s; Note: by using a separate resolver in each server{} you add more flexibility, but waste memory and other resources. Consider using a single resolver defined at the http{} level instead. > set $internal "upstream-load-balancer.vpc"; > location / { > if ($denied) { > return 444; > } > proxy_pass http://$internal; > } > } > > Now, this wouldn't be all that bad. 1.69mb of memory per vhost isn't > horrible, high, but not unsustainable. However, if I do `nginx -s reload` or > restart via systemd service... > > root 20 0 1370188 1.176g 3240 S 0.0 33.4 0:14.98 nginx > > nginx 20 0 1370192 1.175g 2584 S 0.3 33.4 2:39.95 nginx > > nginx 20 0 1370192 1.175g 2584 S 1.7 33.4 2:28.42 nginx > > It doubles the memory consumption! It never goes up or down drastically > again. It's as if it duplicates and never frees or releases unless you do a > restart. nginx allocates a new configuration before releasing the old one, and this doubles memory consumption of the master process for a while. The old configuration is then freed, but in many cases this memory isn't really returned to the kernel by the allocator, and hence is seen in various stats.
Consider moving common settings to the http{} level; it should somewhat reduce memory consumption. Just for reference: a simple test with 10k SSL vhosts like: server { listen 8443; ssl on; server_name 1.foo; ssl_certificate test.crt; ssl_certificate_key test.key; } server { listen 8443; ssl on; server_name 2.foo; ssl_certificate test.crt; ssl_certificate_key test.key; } ... server { listen 8443; ssl on; server_name 10000.foo; ssl_certificate test.crt; ssl_certificate_key test.key; } takes about 200mb of memory (on a 32-bit host). That is, about 20k of memory per vhost. With more options it will take more memory, but 1.7mb is a bit too much - you are doing something wrong. Just a guess: if you use ssl_trusted_certificate at http{} level, make sure you are loading only needed certificates, not a full OS-provided bundle. > - Is there a way to reduce that memory usage? Yes, see various hints above. In summary: configure things at http{} level where possible, avoid memory-hungry things you don't need. > - Am I the only one that experiences the doubling of nginx memory usage > after a nginx reload? > - Is that a bug? No and no, see above. -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Fri May 27 14:14:27 2016 From: nginx-forum at forum.nginx.org (jrhodes) Date: Fri, 27 May 2016 10:14:27 -0400 Subject: Sending basic auth to "backend" servers In-Reply-To: <20160526132520.GH36620@mdounin.ru> References: <20160526132520.GH36620@mdounin.ru> Message-ID: <569b28768f6663f04f2b5813fbab472d.NginxMailingListEnglish@forum.nginx.org> Just one follow-up question after some testing. Is there a way to split on a per-request basis? So say 10 backend servers, each with unique authorisation headers set up, then have each single incoming request RR to each: Incoming http request 1 -> backend1 Incoming http request 2 -> backend2 Incoming http request 3 -> backend3 And so on... Thanks!
JR Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267156,267197#msg-267197 From nginx-forum at forum.nginx.org Sat May 28 19:24:00 2016 From: nginx-forum at forum.nginx.org (rzmpl) Date: Sat, 28 May 2016 15:24:00 -0400 Subject: HTTP not working / downloads 57 byte small file Message-ID: <866a9ab31ae0ffa8cddbee245bb1a92a.NginxMailingListEnglish@forum.nginx.org> Hello, as mentioned in the title, HTTP isn't working at all anymore. HTTPS works fine. I already tried the default vhost config but no luck there. I have no idea what is going on, and I didn't even find anything with Google either. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267210,267210#msg-267210 From nginx-forum at forum.nginx.org Sat May 28 19:32:59 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sat, 28 May 2016 15:32:59 -0400 Subject: Return default placeholder image when image file on server not found Message-ID: <5cf36d234486dd327a721b767542540c.NginxMailingListEnglish@forum.nginx.org> What is the right way to convert this from .htaccess to nginx.conf? ##Return default placeholder image when image file on server not found RewriteCond %{DOCUMENT_ROOT}%{REQUEST_URI} !-f RewriteRule \.(gif|jpe?g|png) /image404.php [NC,L] Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267211,267211#msg-267211 From larry.martell at gmail.com Sat May 28 19:48:30 2016 From: larry.martell at gmail.com (Larry Martell) Date: Sat, 28 May 2016 15:48:30 -0400 Subject: checking headers Message-ID: Is there any way with nginx to check a request's headers and send back a 401 if the headers are not proper?
From nginx-forum at forum.nginx.org Sun May 29 07:40:22 2016 From: nginx-forum at forum.nginx.org (JoakimR) Date: Sun, 29 May 2016 03:40:22 -0400 Subject: Return default placeholder image when image file on server not found In-Reply-To: <5cf36d234486dd327a721b767542540c.NginxMailingListEnglish@forum.nginx.org> References: <5cf36d234486dd327a721b767542540c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6139923d0dd0d10ace9452343a7353c7.NginxMailingListEnglish@forum.nginx.org> Found the answer on IRC #Freenode #nginx by catbeard location ~ \.(png|jp?g|gif)$ { error_page 404 /404.png; } http://serverfault.com/a/481612 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267211,267213#msg-267213 From reallfqq-nginx at yahoo.fr Sun May 29 18:52:54 2016 From: reallfqq-nginx at yahoo.fr (B.R.) Date: Sun, 29 May 2016 20:52:54 +0200 Subject: HTTP not working / downloads 57 byte small file In-Reply-To: <866a9ab31ae0ffa8cddbee245bb1a92a.NginxMailingListEnglish@forum.nginx.org> References: <866a9ab31ae0ffa8cddbee245bb1a92a.NginxMailingListEnglish@forum.nginx.org> Message-ID: There is little to nothing anyone can do with such a message. Details? Version? Configuration? Leanest example possible to reproduce the problem? Anything allowing people to help you... and encouraging them to get the will to do so. http://www.catb.org/esr/faqs/smart-questions.html --- *B. R.* On Sat, May 28, 2016 at 9:24 PM, rzmpl wrote: > Hello, like mentioned in the title HTTP isn't working at all anymore. HTTPS > works fine. I already tried default vhost config but no luck there. I have > no idea what is going on and I didn't even find anything with google > either. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,267210,267210#msg-267210 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Mon May 30 05:52:40 2016 From: nginx-forum at forum.nginx.org (yogeshorai) Date: Mon, 30 May 2016 01:52:40 -0400 Subject: proxy_cache_lock stat under log Message-ID: <3deb288604ea1e19c3634751ba720797.NginxMailingListEnglish@forum.nginx.org> Hi, We are using the proxy_cache_lock feature of nginx and have set the proxy_cache_lock_age and proxy_cache_lock_timeout attributes. To monitor and get overall stats, we are looking at the nginx log generated, and for any option to add some proxy-cache-lock-based numbers to log entries: some info about a request being held or waiting under the proxy lock criteria, and the time spent doing so. Any input will be really appreciated. Regards, Yogesh Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267217,267217#msg-267217 From nginx-forum at forum.nginx.org Mon May 30 11:06:16 2016 From: nginx-forum at forum.nginx.org (smaer780) Date: Mon, 30 May 2016 07:06:16 -0400 Subject: Feature Request: write error logs when detecting duplicate http headers In-Reply-To: References: Message-ID: <28065df32f73e8481ba05acde34c5020.NginxMailingListEnglish@forum.nginx.org> Thanks; recently I ran into this problem, and seeing this post helped me with it. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,204689,267222#msg-267222 From mdounin at mdounin.ru Mon May 30 14:07:30 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 30 May 2016 17:07:30 +0300 Subject: HTTP not working / downloads 57 byte small file In-Reply-To: <866a9ab31ae0ffa8cddbee245bb1a92a.NginxMailingListEnglish@forum.nginx.org> References: <866a9ab31ae0ffa8cddbee245bb1a92a.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20160530140730.GA36620@mdounin.ru> Hello! On Sat, May 28, 2016 at 03:24:00PM -0400, rzmpl wrote: > Hello, like mentioned in the title HTTP isn't working at all anymore. HTTPS > works fine. I already tried default vhost config but no luck there.
I have > no idea what is going on and I didn't even find anything with google either. Most likely you've did something like listen 80 http2; in your configuration, which results in HTTP/2 being used on port 80, assuming prior knowledge. Don't do this. See https://trac.nginx.org/nginx/ticket/983 for an example of another person who did the same mistake. May be we need some better indication on what goes wrong here. I.e., something similar to 497 code we use internally for https ("a regular request has been sent to the HTTPS port"). -- Maxim Dounin http://nginx.org/ From nginx-forum at forum.nginx.org Mon May 30 15:10:58 2016 From: nginx-forum at forum.nginx.org (gacek) Date: Mon, 30 May 2016 11:10:58 -0400 Subject: alias to server_ip/drupal Message-ID: <6ad8f0046f5f52f1213aa17ae332f761.NginxMailingListEnglish@forum.nginx.org> Hello, i am rather new to the nginx and i have a trobule with alias from my default page (http://server_IP/) to the Drupal (http://server_IP/drupal). I got the error message: 404 Not Found. I have no idea what is wrong and i didn't find anything with google either. Thanks for your help. These are my configs: nginx.conf user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; events { worker_connections 1024; } http { log_format main '$remote_addr - $remote_user [$time_local] "$request" ' '$status $body_bytes_sent "$http_referer" ' '"$http_user_agent" "$http_x_forwarded_for"'; access_log /var/log/nginx/access.log main; sendfile on; tcp_nopush on; tcp_nodelay on; keepalive_timeout 65; types_hash_max_size 2048; include /etc/nginx/mime.types; default_type application/octet-stream; # Load modular configuration files from the /etc/nginx/conf.d directory. # See http://nginx.org/en/docs/ngx_core_module.html#include # for more information. 
include /etc/nginx/conf.d/*.conf; } --------------------------------------------------------- default.conf server { server_name 46.101.231.239; root /usr/share/nginx/html; index index.php index.html index.htm; location / { alias /drupal/; try_files $uri $uri/ =404; } error_page 404 /404.html; error_page 500 502 503 504 /50x.html; location = /50x.html { root /usr/share/nginx/html; } location ~ \.php$ { try_files $uri =404; fastcgi_pass unix:/var/run/php-fpm/php-fpm.sock; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; include fastcgi_params; } } --------------------------------------------------------- Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267232,267232#msg-267232 From rpaprocki at fearnothingproductions.net Mon May 30 18:19:29 2016 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Mon, 30 May 2016 11:19:29 -0700 Subject: checking headers In-Reply-To: References: Message-ID: On Sat, May 28, 2016 at 12:48 PM, Larry Martell wrote: > Is there any way with nginx to check a request's headers and send back > a 401 if the headers are not proper? > Yes, you can do this via the 'map' and 'if' directives. A trivial example: http { # if the "X-Foo" request header contains the phrase 'data', set $bar to 1; otherwise, set it to 0 map $http_x_foo $bar { default 0; "~data" 1; } server { location /t { if ($bar) { return 401; } } } See also http://nginx.org/en/docs/http/ngx_http_map_module.html and http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue May 31 09:11:05 2016 From: nginx-forum at forum.nginx.org (alajl) Date: Tue, 31 May 2016 05:11:05 -0400 Subject: forward data from orginal IP to a new IP Message-ID: <851da72450071be0bf25e0df942c7631.NginxMailingListEnglish@forum.nginx.org> I have this configuration file, but it is long-winded.
Is there a single expression in nginx that can handle this?

server {
    listen 192.168.1.2:10000;
    proxy_pass 192.168.0.3:10000;
}

server {
    listen 192.168.1.2:10001;
    proxy_pass 192.168.0.3:10001;
}

server {
    listen 192.168.1.2:10002;
    proxy_pass 192.168.0.3:10002;
}

server {
    listen 192.168.1.2:10003;
    proxy_pass 192.168.0.3:10003;
}

server {
    listen 192.168.1.2:10004;
    proxy_pass 192.168.0.3:10004;
}

server {
    listen 192.168.1.2:10005;
    proxy_pass 192.168.0.3:10005;
}

server {
    listen 192.168.1.2:10006;
    proxy_pass 192.168.0.3:10006;
}

server {
    listen 192.168.1.2:10007;
    proxy_pass 192.168.0.3:10007;
}

server {
    listen 192.168.1.2:10008;
    proxy_pass 192.168.0.3:10008;
}

server {
    listen 192.168.1.2:10009;
    proxy_pass 192.168.0.3:10009;
}

server {
    listen 192.168.1.2:10010;
    proxy_pass 192.168.0.4:10010;
}

server {
    listen 192.168.1.2:10011;
    proxy_pass 192.168.0.4:10011;
}

server {
    listen 192.168.1.2:10012;
    proxy_pass 192.168.0.4:10012;
}

server {
    listen 192.168.1.2:10013;
    proxy_pass 192.168.0.4:10013;
}

server {
    listen 192.168.1.2:10014;
    proxy_pass 192.168.0.4:10014;
}

server {
    listen 192.168.1.2:10015;
    proxy_pass 192.168.0.4:10015;
}

server {
    listen 192.168.1.2:10016;
    proxy_pass 192.168.0.4:10016;
}

server {
    listen 192.168.1.2:10017;
    proxy_pass 192.168.0.4:10017;
}

server {
    listen 192.168.1.2:10018;
    proxy_pass 192.168.0.4:10018;
}

server {
    listen 192.168.1.2:10019;
    proxy_pass 192.168.0.4:10019;
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267238,267238#msg-267238

From nginx-forum at forum.nginx.org  Tue May 31 11:12:20 2016
From: nginx-forum at forum.nginx.org (mastercan)
Date: Tue, 31 May 2016 07:12:20 -0400
Subject: Multi certificate support returns Letsencrypt Intermediate Certificate twice
Message-ID: 

Hello folks,

I have the following setup:
Nginx 1.11.0
Libressl 2.3.4

1 Letsencrypt RSA 2048 certificate
1 Letsencrypt ECDSA p256 certificate

The certificate files are both chained. Both have the Letsencrypt RSA 2048
X3 intermediate certificate at the end of the file.
The problem is: Nginx returns this intermediate certificate twice when
connecting via https, regardless of whether you connect with an RSA client
or an ECDSA client.

Is this a bug? Or a configuration issue?

Thank you in advance!

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267240,267240#msg-267240

From larry.martell at gmail.com  Tue May 31 11:41:06 2016
From: larry.martell at gmail.com (Larry Martell)
Date: Tue, 31 May 2016 07:41:06 -0400
Subject: checking headers
In-Reply-To: 
References: 
Message-ID: 

On Mon, May 30, 2016 at 2:19 PM, Robert Paprocki wrote:
>
> On Sat, May 28, 2016 at 12:48 PM, Larry Martell wrote:
>>
>> Is there any way with nginx to check a request's headers and send back
>> a 401 if the headers are not proper?
>
> Yes, you can do with this via the 'map' and 'if' directives. A trivial
> example:
>
> http {
>     # if the "X-Foo" request header contains the phrase 'data', set $bar
>     # to 1; otherwise, set it to 0
>     map $http_x_foo $bar {
>         default 0;
>         "~data" 1;
>     }
>
>     server {
>         location /t {
>             if ($bar) {
>                 return 401;
>             }
>         }
>     }
>
> See also http://nginx.org/en/docs/http/ngx_http_map_module.html and
> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if

I added this to the http section:

map $http_x_capdata_auth $not_auth {
    default 1;
    "authorized" 0;
}

and this to the location section:

if ($not_auth) {
    return 401;
}

and it's always returning a 401, even if there is a header:

X-Capdata-Auth: authorized

Am I doing something wrong here? How can I debug this?
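[One quick way to attack the "how can I debug this?" part is to make nginx echo back what it actually received. The following is only a sketch, not from the thread: port 8001 and the /debug-auth location are made up here, and it assumes the $not_auth map above is already defined in the http block.]

```nginx
# Hypothetical debug server: returns the raw X-Capdata-Auth value nginx
# received and the result of the $not_auth map, so you can see exactly
# what arrived and how it was classified.
server {
    listen 8001;   # made-up port, not part of the real setup

    location /debug-auth {
        # $http_x_capdata_auth is empty if the client sent no such header
        return 200 "x_capdata_auth='$http_x_capdata_auth' not_auth=$not_auth\n";
    }
}
```

[Then something like `curl -H 'X-Capdata-Auth: authorized' http://127.0.0.1:8001/debug-auth` would show whether the header survives the client's redirect handling.]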
From larry.martell at gmail.com Tue May 31 11:55:38 2016 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 31 May 2016 07:55:38 -0400 Subject: checking headers In-Reply-To: References: Message-ID: On Tue, May 31, 2016 at 7:41 AM, Larry Martell wrote: > On Mon, May 30, 2016 at 2:19 PM, Robert Paprocki > wrote: >> >> >> On Sat, May 28, 2016 at 12:48 PM, Larry Martell >> wrote: >>> >>> Is there any way with nginx to check a request's headers and send back >>> a 401 if the headers are not proper? >> >> >> >> Yes, you can do with this via the 'map' and 'if' directives. A trivial >> example: >> >> http { >> # if the "X-Foo" request header contains the phrase 'data', set $bar >> to 1; otherwise, set it to 0 >> map $http_x_foo $bar { >> default 0; >> "~data" 1; >> } >> >> server { >> location /t { >> if ($bar) { >> return 401; >> } >> } >> } >> >> See also http://nginx.org/en/docs/http/ngx_http_map_module.html and >> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if > > I added this to the http section: > > map $http_x_capdata_auth $not_auth { > default 1; > "authorized" 0; > } > > and this to the location section: > > if ($not_auth) { > return 401; > } > > and it's always returning a 401, even if there is a header: > > X-Capdata-Auth: authorized > > And I doing something wrong here? How can I debug this? Looking with tcpdump I do not see that header field set. The request is coming from a django app which is doing a redirect and I set the header before the redirect. Guess I have to debug from that side. From mdounin at mdounin.ru Tue May 31 13:18:16 2016 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 31 May 2016 16:18:16 +0300 Subject: Multi certificate support returns Letsencrypt Intermediate Certificate twice In-Reply-To: References: Message-ID: <20160531131816.GO36620@mdounin.ru> Hello! 
On Tue, May 31, 2016 at 07:12:20AM -0400, mastercan wrote: > Hello folks, > > I have the following setup: > Nginx 1.11.0 > Libressl 2.3.4 > > 1 Letsencrypt RSA 2048 certificate > 1 Letsencrypt ECDSA p256 certificate > > The certificate files are both chained. Both have the Letsencrypt RSA 2048 > X3 intermediate certificate at the end of the file. > > The problem is: > Nginx returns this intermediate certificate twice when connecting via https. > Regardless whether you connect via RSA client or ECDSA client. > > Is this a bug? Or a configuration issue? Only OpenSSL 1.0.2 and higher support separate chains for different certificates. With older versions (including LibreSSL) there is only one chain for all certificates, and all chained certificates will be added to it. That is, if chains are the same you have to leave only one of them. -- Maxim Dounin http://nginx.org/ From larry.martell at gmail.com Tue May 31 13:23:36 2016 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 31 May 2016 09:23:36 -0400 Subject: checking headers In-Reply-To: References: Message-ID: On Tue, May 31, 2016 at 7:55 AM, Larry Martell wrote: > On Tue, May 31, 2016 at 7:41 AM, Larry Martell wrote: >> On Mon, May 30, 2016 at 2:19 PM, Robert Paprocki >> wrote: >>> >>> >>> On Sat, May 28, 2016 at 12:48 PM, Larry Martell >>> wrote: >>>> >>>> Is there any way with nginx to check a request's headers and send back >>>> a 401 if the headers are not proper? >>> >>> >>> >>> Yes, you can do with this via the 'map' and 'if' directives. 
A trivial >>> example: >>> >>> http { >>> # if the "X-Foo" request header contains the phrase 'data', set $bar >>> to 1; otherwise, set it to 0 >>> map $http_x_foo $bar { >>> default 0; >>> "~data" 1; >>> } >>> >>> server { >>> location /t { >>> if ($bar) { >>> return 401; >>> } >>> } >>> } >>> >>> See also http://nginx.org/en/docs/http/ngx_http_map_module.html and >>> http://nginx.org/en/docs/http/ngx_http_rewrite_module.html#if >> >> I added this to the http section: >> >> map $http_x_capdata_auth $not_auth { >> default 1; >> "authorized" 0; >> } >> >> and this to the location section: >> >> if ($not_auth) { >> return 401; >> } >> >> and it's always returning a 401, even if there is a header: >> >> X-Capdata-Auth: authorized >> >> And I doing something wrong here? How can I debug this? > > Looking with tcpdump I do not see that header field set. The request > is coming from a django app which is doing a redirect and I set the > header before the redirect. Guess I have to debug from that side. I traced the django code all the way through to when the response is going out and I see this: (Pdb) response._headers {'x-capdata-auth': ('X-Capdata-Auth', 'authorized'), 'content-type': ('Content-Type', 'text/html; charset=utf-8'), 'location': ('Location', 'http://foo.bar.com:8000/workitem/12345'), 'vary': ('Vary', 'Cookie')} Any one have any ideas as to why it doesn't seem to make it over to nginx? From francis at daoine.org Tue May 31 13:45:57 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 31 May 2016 14:45:57 +0100 Subject: checking headers In-Reply-To: References: Message-ID: <20160531134557.GB2852@daoine.org> On Tue, May 31, 2016 at 09:23:36AM -0400, Larry Martell wrote: > On Tue, May 31, 2016 at 7:55 AM, Larry Martell wrote: > >>> On Sat, May 28, 2016 at 12:48 PM, Larry Martell > >>> wrote: Hi there, > >>>> Is there any way with nginx to check a request's headers and send back > >>>> a 401 if the headers are not proper? 
> > Looking with tcpdump I do not see that header field set. The request
> > is coming from a django app which is doing a redirect and I set the
> > header before the redirect. Guess I have to debug from that side.
>
> I traced the django code all the way through to when the response is
> going out and I see this:
>
> (Pdb) response._headers
> {'x-capdata-auth': ('X-Capdata-Auth', 'authorized'), 'content-type':
> ('Content-Type', 'text/html; charset=utf-8'), 'location': ('Location',
> 'http://foo.bar.com:8000/workitem/12345'), 'vary': ('Vary', 'Cookie')}
>
> Any one have any ideas as to why it doesn't seem to make it over to nginx?

There is a request from the client to nginx.

There is a response from nginx to the client.

There can be a request from nginx to its upstream, and a response from
upstream to nginx.

Any of those requests and responses can include headers.

In your architecture, what "header" do you care about?

That should tell you which variable value to check.

http://nginx.org/r/$http_

http://nginx.org/r/$sent_http_

http://nginx.org/r/$upstream_http_

are three different families of variables set within nginx.

Possibly one of them covers what you want?

f
-- 
Francis Daly        francis at daoine.org

From sirtcp at gmail.com  Tue May 31 14:04:30 2016
From: sirtcp at gmail.com (Muhammad Yousuf Khan)
Date: Tue, 31 May 2016 19:04:30 +0500
Subject: Buitwith.com showing apache and nginx both.
Message-ID: 

When I scan my site with builtwith.com, it shows that I am using both
nginx and Apache. However, I have completely moved my site from Apache to
nginx. It was previously on Apache, which I uninstalled after installing
nginx. Now I am wondering why it shows both.

I checked netstat, and it shows the nginx daemon owning ports 80 and 443.
I also used curl, and it also showed that nginx is serving the page.

Any idea why? Any help will be highly appreciated.

Thanks,
Yousuf
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From larry.martell at gmail.com Tue May 31 14:26:26 2016 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 31 May 2016 10:26:26 -0400 Subject: checking headers In-Reply-To: <20160531134557.GB2852@daoine.org> References: <20160531134557.GB2852@daoine.org> Message-ID: On Tue, May 31, 2016 at 9:45 AM, Francis Daly wrote: > On Tue, May 31, 2016 at 09:23:36AM -0400, Larry Martell wrote: >> On Tue, May 31, 2016 at 7:55 AM, Larry Martell wrote: >> >>> On Sat, May 28, 2016 at 12:48 PM, Larry Martell >> >>> wrote: > > Hi there, > >> >>>> Is there any way with nginx to check a request's headers and send back >> >>>> a 401 if the headers are not proper? > >> > Looking with tcpdump I do not see that header field set. The request >> > is coming from a django app which is doing a redirect and I set the >> > header before the redirect. Guess I have to debug from that side. >> >> I traced the django code all the way through to when the response is >> going out and I see this: >> >> (Pdb) response._headers >> {'x-capdata-auth': ('X-Capdata-Auth', 'authorized'), 'content-type': >> ('Content-Type', 'text/html; charset=utf-8'), 'location': ('Location', >> 'http://foo.bar.com:8000/workitem/12345'), 'vary': ('Vary', 'Cookie')} >> >> Any one have any ideas as to why it doesn't seem to make it over to nginx? > > There is a request from the client to nginx. > > There is a response from nginx to the client. > > There can be a request from nginx to its upstream, and a response from > upstream to nginx. > > Any of those requests and responses can include headers. > > In your architecture, what "header" do you care about? > > That should tell you which variable value to check. > > http://nginx.org/r/$http_ > > http://nginx.org/r/$sent_http_ > > http://nginx.org/r/$upstream_http_ > > are three different families of variables set within nginx. > > Possibly one of them covers what you want? There are 2 ways requests get to port 8000, which is the port I want to check headers on. 
One is via a C++ Qt app, and the other is from a python django app. The C++ app sends the request directly to port 8000. With the django app a request is sent to port 8004 and django sends a 301 redirect to 8000. In both cases the header field X-Capdata-Auth is set. And in neither case does my config pick that up. This is what I have: map $http_x_capdata_auth $not_auth { default 1; "authorized" 0; } Is that the correct way to check for that header value? Is there a way for me to dump the headers that it sees on requests to port 8000? From nginx-forum at forum.nginx.org Tue May 31 15:03:42 2016 From: nginx-forum at forum.nginx.org (mastercan) Date: Tue, 31 May 2016 11:03:42 -0400 Subject: Multi certificate support returns Letsencrypt Intermediate Certificate twice In-Reply-To: <20160531131816.GO36620@mdounin.ru> References: <20160531131816.GO36620@mdounin.ru> Message-ID: <1cddb9ce30b58b20ef9d83cad401528c.NginxMailingListEnglish@forum.nginx.org> Thanks a lot for the fast response! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,267240,267249#msg-267249 From francis at daoine.org Tue May 31 15:38:28 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 31 May 2016 16:38:28 +0100 Subject: checking headers In-Reply-To: References: <20160531134557.GB2852@daoine.org> Message-ID: <20160531153828.GC2852@daoine.org> On Tue, May 31, 2016 at 10:26:26AM -0400, Larry Martell wrote: > On Tue, May 31, 2016 at 9:45 AM, Francis Daly wrote: Hi there, > > Possibly one of them covers what you want? > > There are 2 ways requests get to port 8000, which is the port I want > to check headers on. > > One is via a C++ Qt app, and the other is from a python django app. > > The C++ app sends the request directly to port 8000. With the django > app a request is sent to port 8004 and django sends a 301 redirect to > 8000. In both cases the header field X-Capdata-Auth is set. And in > neither case does my config pick that up. 
This is what I have:

> map $http_x_capdata_auth $not_auth {
>     default 1;
>     "authorized" 0;
> }
>
> Is that the correct way to check for that header value?

Yes. That checks for the request header from the client.

It works for me:

==
http {
    map $http_x_capdata_auth $not_auth {
        default 1;
        "authorized" 0;
    }

    server {
        listen 8080;
        location / {
            if ($not_auth) { return 401 "$http_x_capdata_auth, $not_auth\n"; }
            return 200 "$http_x_capdata_auth, $not_auth\n";
        }
    }
}
==

curl -v -H 'X-CapData-Auth: authorized' http://127.0.0.1:8080/test
--> HTTP/1.1 200 OK; authorized, 0

curl -v http://127.0.0.1:8080/test
--> HTTP/1.1 401 Unauthorized: , 1

> Is there a way for me to dump the headers that it sees on requests to port 8000?

Within nginx, probably the debug log is simplest.

Outside nginx: tcpdump, or ask the client what it sends.

f
-- 
Francis Daly        francis at daoine.org

From reallfqq-nginx at yahoo.fr  Tue May 31 16:14:26 2016
From: reallfqq-nginx at yahoo.fr (B.R.)
Date: Tue, 31 May 2016 18:14:26 +0200
Subject: forward data from orginal IP to a new IP
In-Reply-To: <851da72450071be0bf25e0df942c7631.NginxMailingListEnglish@forum.nginx.org>
References: <851da72450071be0bf25e0df942c7631.NginxMailingListEnglish@forum.nginx.org>
Message-ID: 

It seems the sticking point is that there is no generic way for the listen
directive to specify multiple ports, so you are stuck with that many
server blocks.

Now, you could use the $server_port variable in the proxy_pass directive,
but that brings no real improvement.

You could use configuration management tools to template configuration
generation, relieving you of the burden of maintaining all those blocks by
hand.
---
*B. R.*

On Tue, May 31, 2016 at 11:11 AM, alajl wrote:

> I have this configure file, but it is long-winded?
> in nginx, are there having one expression to handle it > > server { > listen 192.168.1.2:10000; > proxy_pass 192.168.0.3:10000; > } > > server { > listen 192.168.1.2:10001; > proxy_pass 192.168.0.3:10001; > } > > server { > listen 192.168.1.2:10002; > proxy_pass 192.168.0.3:10002; > } > > server { > listen 192.168.1.2:10003; > proxy_pass 192.168.0.3:10003; > } > > server { > listen 192.168.1.2:10004; > proxy_pass 192.168.0.3:10004; > } > > server { > listen 192.168.1.2:10005; > proxy_pass 192.168.0.3:10005; > } > > server { > listen 192.168.1.2:10006; > proxy_pass 192.168.0.3:10006; > } > > server { > listen 192.168.1.2:10007; > proxy_pass 192.168.0.3:10007; > } > > server { > listen 192.168.1.2:10008; > proxy_pass 192.168.0.3:10008; > } > > server { > listen 192.168.1.2:10009; > proxy_pass 192.168.0.3:10009; > } > > server { > listen 192.168.1.2:10010; > proxy_pass 192.168.0.4:10010; > } > > server { > listen 192.168.1.2:10011; > proxy_pass 192.168.0.4:10011; > } > > server { > listen 192.168.1.2:10012; > proxy_pass 192.168.0.4:10012; > } > > server { > listen 192.168.1.2:10013; > proxy_pass 192.168.0.4:10013; > } > > server { > listen 192.168.1.2:10014; > proxy_pass 192.168.0.4:10014; > } > > server { > listen 192.168.1.2:10015; > proxy_pass 192.168.0.4:10015; > } > > server { > listen 192.168.1.2:10016; > proxy_pass 192.168.0.4:10016; > } > > server { > listen 192.168.1.2:10017; > proxy_pass 192.168.0.4:10017; > } > > server { > listen 192.168.1.2:10018; > proxy_pass 192.168.0.4:10018; > } > > server { > listen 192.168.1.2:10019; > proxy_pass 192.168.0.4:10019; > } > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,267238,267238#msg-267238 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From larry.martell at gmail.com  Tue May 31 16:33:56 2016
From: larry.martell at gmail.com (Larry Martell)
Date: Tue, 31 May 2016 12:33:56 -0400
Subject: checking headers
In-Reply-To: <20160531153828.GC2852@daoine.org>
References: <20160531134557.GB2852@daoine.org> <20160531153828.GC2852@daoine.org>
Message-ID: 

On Tue, May 31, 2016 at 11:38 AM, Francis Daly wrote:
> On Tue, May 31, 2016 at 10:26:26AM -0400, Larry Martell wrote:
>> On Tue, May 31, 2016 at 9:45 AM, Francis Daly wrote:
>
> Hi there,
>
>> > Possibly one of them covers what you want?
>>
>> There are 2 ways requests get to port 8000, which is the port I want
>> to check headers on.
>>
>> One is via a C++ Qt app, and the other is from a python django app.
>>
>> The C++ app sends the request directly to port 8000. With the django
>> app a request is sent to port 8004 and django sends a 301 redirect to
>> 8000. In both cases the header field X-Capdata-Auth is set. And in
>> neither case does my config pick that up. This is what I have:
>>
>> map $http_x_capdata_auth $not_auth {
>>     default 1;
>>     "authorized" 0;
>> }
>>
>> Is that the correct way to check for that header value?
>
> Yes. That checks for the request header from the client.
>
> It works for me:
>
> ==
> http {
>     map $http_x_capdata_auth $not_auth {
>         default 1;
>         "authorized" 0;
>     }
>
>     server {
>         listen 8080;
>         location / {
>             if ($not_auth) { return 401 "$http_x_capdata_auth, $not_auth\n"; }
>             return 200 "$http_x_capdata_auth, $not_auth\n";
>         }
>     }
> }
> ==
>
> curl -v -H 'X-CapData-Auth: authorized' http://127.0.0.1:8080/test
> --> HTTP/1.1 200 OK; authorized, 0
>
> curl -v http://127.0.0.1:8080/test
> --> HTTP/1.1 401 Unauthorized: , 1
>
>> Is there a way for me to dump the headers that it sees on requests to port 8000?
>
> Within nginx, probably the debug log is simplest.
>
> Outside nginx: tcpdump, or ask the client what it sends.

Using curl I can see that nginx is doing the right thing.
Looking at the request coming out of the clients shows the header being
there. Using tcpdump I do not see the header. I know this is no longer an
nginx question, but does anyone know why that header would get dropped
along the way?

From mdounin at mdounin.ru  Tue May 31 16:41:29 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 31 May 2016 19:41:29 +0300
Subject: nginx-1.11.1
Message-ID: <20160531164129.GR36620@mdounin.ru>

Changes with nginx 1.11.1                                        31 May 2016

    *) Security: a segmentation fault might occur in a worker process while
       writing a specially crafted request body to a temporary file
       (CVE-2016-4450); the bug had appeared in 1.3.9.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Tue May 31 16:41:48 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 31 May 2016 19:41:48 +0300
Subject: nginx-1.10.1
Message-ID: <20160531164148.GV36620@mdounin.ru>

Changes with nginx 1.10.1                                        31 May 2016

    *) Security: a segmentation fault might occur in a worker process while
       writing a specially crafted request body to a temporary file
       (CVE-2016-4450); the bug had appeared in 1.3.9.

-- 
Maxim Dounin
http://nginx.org/

From mdounin at mdounin.ru  Tue May 31 16:42:43 2016
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 31 May 2016 19:42:43 +0300
Subject: nginx security advisory (CVE-2016-4450)
Message-ID: <20160531164243.GZ36620@mdounin.ru>

Hello!

A problem was identified in nginx code responsible for saving client
request body to a temporary file. A specially crafted request might
result in a worker process crash due to a NULL pointer dereference while
writing the client request body to a temporary file (CVE-2016-4450).

The problem affects nginx 1.3.9 - 1.11.0.

The problem is fixed in nginx 1.11.1, 1.10.1.
Patch for nginx 1.9.13 - 1.11.0 can be found here: http://nginx.org/download/patch.2016.write.txt Patch for older nginx versions (1.3.9 - 1.9.12): http://nginx.org/download/patch.2016.write2.txt -- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue May 31 18:10:26 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 31 May 2016 14:10:26 -0400 Subject: [nginx-announce] nginx-1.10.1 In-Reply-To: <20160531164212.GW36620@mdounin.ru> References: <20160531164212.GW36620@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.10.1 for Windows https://kevinworthington.com/nginxwin1101 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, May 31, 2016 at 12:42 PM, Maxim Dounin wrote: > Changes with nginx 1.10.1 31 May > 2016 > > *) Security: a segmentation fault might occur in a worker process while > writing a specially crafted request body to a temporary file > (CVE-2016-4450); the bug had appeared in 1.3.9. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kworthington at gmail.com Tue May 31 18:12:23 2016 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 31 May 2016 14:12:23 -0400 Subject: [nginx-announce] nginx-1.11.1 In-Reply-To: <20160531164135.GS36620@mdounin.ru> References: <20160531164135.GS36620@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.11.1 for Windows https://kevinworthington.com/nginxwin1111 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) http://kevinworthington.com/ http://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, May 31, 2016 at 12:41 PM, Maxim Dounin wrote: > Changes with nginx 1.11.1 31 May > 2016 > > *) Security: a segmentation fault might occur in a worker process while > writing a specially crafted request body to a temporary file > (CVE-2016-4450); the bug had appeared in 1.3.9. > > > -- > Maxim Dounin > http://nginx.org/ > > _______________________________________________ > nginx-announce mailing list > nginx-announce at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx-announce > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Tue May 31 20:19:48 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 31 May 2016 21:19:48 +0100 Subject: checking headers In-Reply-To: References: <20160531134557.GB2852@daoine.org> <20160531153828.GC2852@daoine.org> Message-ID: <20160531201948.GD2852@daoine.org> On Tue, May 31, 2016 at 12:33:56PM -0400, Larry Martell wrote: > On Tue, May 31, 2016 at 11:38 AM, Francis Daly wrote: > > On Tue, May 31, 2016 at 10:26:26AM -0400, Larry Martell wrote: Hi there, > >> The C++ app sends the request directly to port 8000. With the django > >> app a request is sent to port 8004 and django sends a 301 redirect to > >> 8000. In both cases the header field X-Capdata-Auth is set. And in > >> neither case does my config pick that up. This is what I have: > Using curl I can see that ngixn is doing the right thing. Looking at > the request coming out of the clients show the header being there. > Using tcpdump I do not see the header. I know this is no longer an > nginx question, but anyone know why that header would get dropped > along the way? It sounds like your design is that your client sends a http request to port 8004; the http service there returns a 301 redirect to a url on port 8000 and includes a particular response header; and you want your client to follow the redirect by making a new request to port 8000 and include a request header that mirrors the particular response header that you sent. If you are using the client that you wrote, then you can make sure that it does that. If you are using a general http client, it is unlikely to do that. Perhaps an alternate design involving reverse-proxying would be valid? Cheers, f -- Francis Daly francis at daoine.org From francis at daoine.org Tue May 31 20:24:21 2016 From: francis at daoine.org (Francis Daly) Date: Tue, 31 May 2016 21:24:21 +0100 Subject: Buitwith.com showing apache and nginx both. 
In-Reply-To: References: Message-ID: <20160531202421.GE2852@daoine.org> On Tue, May 31, 2016 at 07:04:30PM +0500, Muhammad Yousuf Khan wrote: Hi there, > When i scan my site with builtwith.com it is showing that i am using both > nginx and apache. When you scan your site with builtwith.com, what do your nginx logs say the requests were? When you make those same requests yourself, what responses do you get? Pay particular attention to the http headers. > Any idea why? Perhaps builtwith.com makes a request that your nginx is configured to reverse-proxy to an apache server, without hiding the Server header. Perhaps builtwith.com scanned your site previously, and show all historical answers. Perhaps builtwith.com uses heuristics which are wrong for your site. f -- Francis Daly francis at daoine.org From larry.martell at gmail.com Tue May 31 20:48:19 2016 From: larry.martell at gmail.com (Larry Martell) Date: Tue, 31 May 2016 16:48:19 -0400 Subject: checking headers In-Reply-To: <20160531201948.GD2852@daoine.org> References: <20160531134557.GB2852@daoine.org> <20160531153828.GC2852@daoine.org> <20160531201948.GD2852@daoine.org> Message-ID: On Tue, May 31, 2016 at 4:19 PM, Francis Daly wrote: > On Tue, May 31, 2016 at 12:33:56PM -0400, Larry Martell wrote: >> On Tue, May 31, 2016 at 11:38 AM, Francis Daly wrote: >> > On Tue, May 31, 2016 at 10:26:26AM -0400, Larry Martell wrote: > > Hi there, > >> >> The C++ app sends the request directly to port 8000. With the django >> >> app a request is sent to port 8004 and django sends a 301 redirect to >> >> 8000. In both cases the header field X-Capdata-Auth is set. And in >> >> neither case does my config pick that up. This is what I have: > >> Using curl I can see that ngixn is doing the right thing. Looking at >> the request coming out of the clients show the header being there. >> Using tcpdump I do not see the header. 
I know this is no longer an
>> nginx question, but anyone know why that header would get dropped
>> along the way?
>
> It sounds like your design is that your client sends a http request to
> port 8004; the http service there returns a 301 redirect to a url on port
> 8000 and includes a particular response header; and you want your client
> to follow the redirect by making a new request to port 8000 and include a
> request header that mirrors the particular response header that you sent.

With the django app, what you are saying is correct.

> If you are using the client that you wrote, then you can make sure that
> it does that.
>
> If you are using a general http client, it is unlikely to do that.

I am kinda new to all this. The apps were written by someone who quit and
then this was all dropped in my lap. I thought I was clear on what a
client and a server were, but in this app it's somewhat screwy. What is
behind port 8000 is nginx routing to some Angular code that sends a
request out. So the Angular code, although client side, is acting like a
server in that it is invoked in response to a request. Then it turns
around and acts like a client and sends a request out. So, who's the
server here? nginx?

There are 2 approved ways to send a request to port 8000. One is from an
app we wrote in C++, which sends the request directly to port 8000. These
requests are always previously authenticated and are good to go. The
second is from a django endpoint listening on 8004. It does some
authentication and, if all is good, redirects to 8000. In both of these
cases I want the request to port 8000 to go through.

Then, of course, there are myriad other ways for a request to get to port
8000 - from a browser, curl, wget, etc. In all of these cases I want the
request to be blocked and a 401 returned.

I was hoping to do this with a custom header, but that appears not to
work. Can anyone recommend another way to achieve this?
> Perhaps an alternate design involving reverse-proxying would be valid? How would that help me? Thanks! From agentzh at gmail.com Tue May 31 21:20:40 2016 From: agentzh at gmail.com (Yichun Zhang (agentzh)) Date: Tue, 31 May 2016 14:20:40 -0700 Subject: [ANN] OpenResty 1.9.7.5 released Message-ID: Hi folks OpenResty 1.9.7.5 is just out to include the latest official NGINX patch for nginx security advisory (CVE-2016-4450): https://openresty.org/en/download.html Both the (portable) source code distribution and the Win32 binary distribution are provided on this Download page. Changes since the last (formal) version, 1.9.7.4: * bugfix: applied the patch for nginx security advisory (CVE-2016-4450). Just a side note: we are currently busy preparing the next OpenResty 1.9.15.1 release with the NGINX 1.9.15 core as well as a newer version of LuaJIT v2.1. Stay tuned. OpenResty is a full-fledged web platform by bundling the standard Nginx core, Lua/LuaJIT, lots of 3rd-party Nginx modules and Lua libraries, as well as most of their external dependencies. 
See OpenResty's homepage for details: https://openresty.org/

Best regards,
-agentzh

From francis at daoine.org  Tue May 31 23:06:19 2016
From: francis at daoine.org (Francis Daly)
Date: Wed, 1 Jun 2016 00:06:19 +0100
Subject: checking headers
In-Reply-To: 
References: <20160531134557.GB2852@daoine.org>
 <20160531153828.GC2852@daoine.org> <20160531201948.GD2852@daoine.org>
Message-ID: <20160531230619.GF2852@daoine.org>

On Tue, May 31, 2016 at 04:48:19PM -0400, Larry Martell wrote:
> On Tue, May 31, 2016 at 4:19 PM, Francis Daly wrote:
> > On Tue, May 31, 2016 at 12:33:56PM -0400, Larry Martell wrote:

Hi there,

> > It sounds like your design is that your client sends a http request to
> > port 8004; the http service there returns a 301 redirect to a url on
> > port 8000 and includes a particular response header; and you want your
> > client to follow the redirect by making a new request to port 8000 and
> > include a request header that mirrors the particular response header
> > that you sent.
>
> With the django app, what you are saying is correct.
>
> > If you are using the client that you wrote, then you can make sure that
> > it does that.
> >
> > If you are using a general http client, it is unlikely to do that.
>
> I am kinda new to all this. The apps were written by someone who quit
> and then this was all dropped in my lap.

It might be instructive for you to find out *why* they quit. If it was
related to them being required to implement a design which they knew can't
possibly work, it would be good for you to learn that early.

But that's beside the point, here.

> I thought I was clear on what a client and a server were, but in this
> app it's somewhat screwy.

One good thing you can do for yourself is to take a pencil and paper and
write down the intended data flow of this application. Until you are happy
that the design is right, you probably can't accept responsibility for
implementing it.

In general, the app is a chain of events.
At each point, one client is making one request of one server.

When you can see the data flow design, it will be clearer to you.

> What is behind port 8000 is nginx routing to some Angular code that
> sends a request out. So the Angular code, although client side, is
> acting like a server in that it is invoked in response to a request.

I don't follow all of those words, but that's ok: I don't have to.

> Then it turns around and acts like a client and sends a request out.
> So, who's the server here? nginx?

Whatever is making the request is the client at this instant; whatever is
receiving the request is the server at this instant.

> There are 2 approved ways to send a request to port 8000. One is from
> an app we wrote in C++ that sends the request directly to port 8000.
> These requests are always previously authenticated and are good to go.
> The second is from a django endpoint listening on 8004. It does some
> authentication and, if all is good, redirects to 8000. In both of these
> cases I want the request to port 8000 to go through.
>
> Then, of course, there are myriad other ways for a request to reach
> port 8000 - from a browser, curl, wget, etc. In all of these cases I
> want the request to be blocked and a 401 returned.

nginx on port 8000 does not care whether the request came from your C++
app or from my curl command. All it cares about is what it is configured
to do: which is to accept a http request that includes the "I promise I
am authorised" token.

browser, curl, wget, etc, can all include that token, without touching
your django app or your C++ program.

If that is your design, and the authorisation actually matters for
anything, then your design is broken and you need to re-design.

> I was hoping to do this with a custom header, but that appears not to
> work. Can anyone recommend another way to achieve this?
One possibility might be to use auth_request
(http://nginx.org/r/auth_request) within nginx to authorise-and-return
the content in one step, as far as the client is concerned.

Another possibility might be to use "X-Accel-Redirect" from the
reverse-proxied authorisation-checker. Again, from the client
perspective, the request with credentials results in the desired
response directly.

The current two-step process - one request with credentials which are
checked, returning an "I am authorised" token; followed by another
request with that token, which the second server does not authenticate at
all - leads to you being able to use "curl" to pretend to be authorised.

> > Perhaps an alternate design involving reverse-proxying would be valid?
>
> How would that help me?

As above, nginx could reverse-proxy to the authorisation checker; or
alternatively the django app could reverse-proxy to nginx; and then you
could put in external (firewall?) rules which mean that only your C++ app
and your django app can get to nginx on port 8000.

Good luck with it,

f
-- 
Francis Daly        francis at daoine.org
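
The auth_request approach suggested above could look roughly like the
following minimal sketch. The `/_auth` location name, the `/check` path on
the django checker, and the document root are assumptions for illustration
only - they do not come from the thread:

```nginx
server {
    listen 8000;

    location / {
        # Every request first triggers an internal subrequest to /_auth.
        # A 2xx answer lets the request proceed; 401/403 is passed back
        # to the client. (Requires ngx_http_auth_request_module.)
        auth_request /_auth;

        # Normal serving of the Angular app would go here.
        root /srv/app;
    }

    location = /_auth {
        # Not reachable from outside; only usable as a subrequest target.
        internal;

        # Hypothetical authorisation endpoint on the django app.
        proxy_pass http://127.0.0.1:8004/check;

        # The checker only needs the request headers, not the body.
        proxy_pass_request_body off;
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}
```

With something like this, no "I am authorised" token ever reaches the
client: nginx asks the checker about every request itself, so a browser or
curl has nothing it can replay to bypass the django authentication.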