From vtol at gmx.net Tue Jan 1 16:10:31 2019 From: vtol at gmx.net (=?UTF-8?B?0b3SieG2rOG4s+KEoA==?=) Date: Tue, 1 Jan 2019 17:10:31 +0100 Subject: stable | mainline - encoding error ssl_stapling_file Message-ID: An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Tue Jan 1 16:48:36 2019 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Tue, 01 Jan 2019 11:48:36 -0500 Subject: NGINX not passing header to proxy In-Reply-To: <20181231165848.327m2zt3zwr7l26a@daoine.org> References: <20181231165848.327m2zt3zwr7l26a@daoine.org> Message-ID: HI Francis, Turns out it's not passing the other header I need, "x-api-key". $ curl -i -H "x-api-key:somevalue" -H "userAccount:someaccount" https://localhost/somepath --insecure HTTP/1.1 200 OK Server: nginx/1.10.3 (Ubuntu) Date: Tue, 01 Jan 2019 16:47:15 GMT Content-Type: application/octet-stream Content-Length: 91 Connection: keep-alive request: GET /api/ecfr/0.1.0?index=title48 HTTP/1.1; x-api-key: ; userAccount: someaccount Francis Daly Wrote: ------------------------------------------------------- > On Mon, Dec 31, 2018 at 10:01:54AM -0500, Sesshomurai wrote: > > Hi there, > > > I am having a problem with NGINX not forwarding a request header > to my > > proxy. > > > > Here is the location: > > > > location /xyz { > > proxy_pass_request_headers on; > > proxy_pass https://someserver/; > > } > > > > I make the call passing "userAccount" header and it never gets sent > to the > > proxy, but if I declare it in the location, it does get passed. > > Other headers are passed to proxy. > > You seem to report that when you do > > curl -H userAccount:abc http://nginx/xyz > > you want nginx to make a request to https://someserver/ including the > http header userAccount; and that nginx does make the request but does > not include the header. Is that correct? > > A simple test, using http and not https, seems to show it working as > you > want here. Does the same test work for you? 
If so, does using https > make > a difference to you? > > == > # "main" server > server { > listen 8090; > location /xyz { > proxy_pass http://127.0.0.1:8091/; > } > } > > # "upstream" server > server { > listen 8091; > location / { > return 200 "request: $request; userAccount: > $http_useraccount\n"; > } > } > == > > $ curl -H userAccount:abc http://127.0.0.1:8090/xyz > request: GET / HTTP/1.0; userAccount: abc > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282524#msg-282524 From nginx-forum at forum.nginx.org Tue Jan 1 16:58:45 2019 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Tue, 01 Jan 2019 11:58:45 -0500 Subject: NGINX not passing header to proxy In-Reply-To: References: <20181231165848.327m2zt3zwr7l26a@daoine.org> Message-ID: If I run the curl using the proxy URL directly it works just fine. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282525#msg-282525 From sca at andreasschulze.de Tue Jan 1 17:04:30 2019 From: sca at andreasschulze.de (A. Schulze) Date: Tue, 1 Jan 2019 18:04:30 +0100 Subject: stable | mainline - encoding error ssl_stapling_file In-Reply-To: References: Message-ID: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> On 01.01.19 at 17:10, ????? wrote: > Hi, > > I would appreciate getting this (weird) error sorted/resolved. Having looked up public sources I could not find a remedy and am thus placing my hope on this list. > > ssl_stapling_file foo.bar.der; > ssl_stapling
on; > > nginx -t then produces: > > [emerg] 24249#24249: d2i_OCSP_RESPONSE_bio("/srv/ca/certs/ocsp_to_lan_3.cert.der") failed (SSL: error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag error:0D06C03A:asn1 encoding routines:asn1_d2i_ex_primitive:nested asn1 error error:0D08303A:asn1 encoding routines:asn1_template_noexp_d2i:nested asn1 error:Field=responseStatus, Type=OCSP_RESPONSE) > > With: > > # ssl_stapling on; > > there is no such error! > > openssl x509 -noout -text -inform der -in foo.bar.der prints the certificate just fine. Having switched between utf8 and ascii did not make a difference either, same outcome. > > openssl asn1parse -inform DER -in foo.bar.der is also printing the values just fine. Hello & happy new year! You did not mention how you generate "foo.bar.der". nginx stapling support may work in two operational modes: 1. only "ssl_stapling on" and no "ssl_stapling_file" given. -> upon the first request nginx will fetch OCSP stapling data from the CA's OCSP server and send this information as part of the second and any following requests 2. "ssl_stapling on" and "ssl_stapling_file" given. -> you have to manually provide OCSP data. nginx will serve every request including this OCSP data. The file you reference as "ssl_stapling_file" could be generated by this command: $ openssl ocsp -no_nonce -respout "${OCSP_STAPLING_FILE}" -CAfile "${CA_CHAIN}" -issuer "${ISSUER}" -cert "${CERT}" -url "${OCSP_URI}" $ kill -HUP $( cat /path/to/nginx.pid ) That has to be done again after some days. Andreas From vtol at gmx.net Tue Jan 1 17:24:04 2019 From: vtol at gmx.net (=?UTF-8?B?0b3SieG2rOG4s+KEoA==?=) Date: Tue, 1 Jan 2019 18:24:04 +0100 Subject: stable | mainline - encoding error ssl_stapling_file In-Reply-To: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> References: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> Message-ID: An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Tue Jan 1 17:32:58 2019 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Tue, 01 Jan 2019 12:32:58 -0500 Subject: NGINX not passing header to proxy In-Reply-To: References: <20181231165848.327m2zt3zwr7l26a@daoine.org> Message-ID: Actually, had the wrong variable name in location for x-api-key, so now I get the correct request header printed back to me $ curl -i -H "x-api-key:somekey" -H "userAccount:someaccount" https://localhost/ --insecure HTTP/1.1 200 OK Server: nginx/1.10.3 (Ubuntu) Date: Tue, 01 Jan 2019 17:30:28 GMT Content-Type: application/octet-stream Content-Length: 127 Connection: keep-alive request: GET / HTTP/1.1; x-api-key: somekey; userAccount: someaccount But the proxy still does not like what NGINX is sending vs. me sending it directly with curl. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282528#msg-282528 From vtol at gmx.net Tue Jan 1 18:40:04 2019 From: vtol at gmx.net (=?UTF-8?B?0b3SieG2rOG4s+KEoA==?=) Date: Tue, 1 Jan 2019 19:40:04 +0100 Subject: stable | mainline - encoding error ssl_stapling_file In-Reply-To: References: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> Message-ID: <5bd596ca-7c07-534e-fe5c-e608bd845340@gmx.net> An HTML attachment was scrubbed... URL: From francis at daoine.org Wed Jan 2 12:48:53 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 2 Jan 2019 12:48:53 +0000 Subject: NGINX not passing header to proxy In-Reply-To: References: <20181231165848.327m2zt3zwr7l26a@daoine.org> Message-ID: <20190102124853.347wr5bq5ryrdd2a@daoine.org> On Tue, Jan 01, 2019 at 12:32:58PM -0500, Sesshomurai wrote: Hi there, > Actually, had the wrong variable name in location for x-api-key, so now I > get the correct request header printed back to me Ok, so the specific issue in the Subject: line is apparently no longer a problem. > But the proxy still does not like what NGINX is sending vs. me sending it > directly with curl. 
What does your nginx do that you do not want it to do? Or, what does it not do that you want it to do? Can you show a curl command that goes direct to your upstream that "works", and a similar curl command that goes to your nginx that "fails"? Perhaps you can get your upstream server to show exactly what it gets in both cases, and you can spot the difference between the two. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Jan 2 16:57:51 2019 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Wed, 02 Jan 2019 11:57:51 -0500 Subject: NGINX not passing header to proxy In-Reply-To: <20190102124853.347wr5bq5ryrdd2a@daoine.org> References: <20190102124853.347wr5bq5ryrdd2a@daoine.org> Message-ID: <2776930fc6c2848e5339f51a374d870e.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Yeah, there might be more going on here than meets the eye. But here are the curl commands that are producing different results. I am looking into the upstream server as well, but the difference would imply that what NGINX sends is not the same as what curl sends from the command line. Here are the details. 1. RETURNING THE REQUEST HEADERS FROM NGINX. OK ============================================================================= $ curl -i -H "x-api-key:APIKEY" -H "userAccount:USER1" https://localhost/api/ecfr/0.1.0?index=title48 --insecure HTTP/1.1 200 OK Server: nginx/1.10.3 (Ubuntu) Date: Wed, 02 Jan 2019 16:41:57 GMT Content-Type: application/octet-stream Content-Length: 127 Connection: keep-alive userAccount: darren1 request: GET /api/ecfr/0.1.0?index=title48 HTTP/1.1; x-api-key: APIKEY; userAccount: USER1 2. RETURNING RESPONSE FROM UPSTREAM SERVER. NGINX ON LOCALHOST.
BROKE ============================================================================= $ curl -i -H "x-api-key:APIKEY" -H "userAccount:USER1" https://localhost/api/ecfr/0.1.0?index=title48 --insecure HTTP/1.1 403 Forbidden Server: nginx/1.10.3 (Ubuntu) Date: Wed, 02 Jan 2019 16:48:20 GMT Content-Type: application/json Content-Length: 23 Connection: keep-alive x-amzn-RequestId: 35f52fc4-0eae-11e9-92be-9ba3ab4becfa x-amzn-ErrorType: ForbiddenException x-amz-apigw-id: xxxxxx {"message":"Forbidden"} 2a. NGINX PROXY LOCATION ============================= location /api/ecfr/0.1.0 { proxy_pass https://upstreamserver/0_1_0/ecfr; proxy_pass_request_headers on; } 3. CURL COMMAND DIRECT TO UPSTREAM SERVER. OK $ curl -i -H "x-api-key:APIKEY" -H "userAccount:USER1" https://upstreamserver/0_1_0/ecfr?index=title48 --insecure HTTP/1.1 200 OK Date: Wed, 02 Jan 2019 16:45:29 GMT Content-Type: application/json Content-Length: 247 Connection: keep-alive x-amzn-RequestId: d05fc938-0ead-11e9-a711-a1281316abd2 x-amz-apigw-id: xxxxxx X-Amzn-Trace-Id: Root=1-5c2ceaa9-496d7b8e0422a9c75132e018 PS. I do have a ticket open with the upstream server provider, but so far I don't think this problem should be happening based on how things should/do work. Francis Daly Wrote: ------------------------------------------------------- > On Tue, Jan 01, 2019 at 12:32:58PM -0500, Sesshomurai wrote: > > Hi there, > > > Actually, had the wrong variable name in location for x-api-key, so > now I > > get the correct request header printed back to me > > Ok, so the specific issue in the Subject: line is apparently no longer > a problem. > > > But the proxy still does not like what NGINX is sending vs. me > sending it > > directly with curl. > > What does your nginx do that you do not want it to do? Or, what does > it > not do that you want it to do? > > Can you show a curl command that goes direct to your upstream that > "works", and a similar curl command that goes to your nginx that > "fails"? 
> > Perhaps you can get your upstream server to show exactly what it gets > in both cases, and you can spot the difference between the two. > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282533#msg-282533 From nginx-forum at forum.nginx.org Wed Jan 2 17:17:47 2019 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Wed, 02 Jan 2019 12:17:47 -0500 Subject: NGINX not passing header to proxy In-Reply-To: <2776930fc6c2848e5339f51a374d870e.NginxMailingListEnglish@forum.nginx.org> References: <20190102124853.347wr5bq5ryrdd2a@daoine.org> <2776930fc6c2848e5339f51a374d870e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <216d26410a257f97dec604a8ee5bde72.NginxMailingListEnglish@forum.nginx.org> Here is what NGINX is trying to pass upstream. I set up a local mini HTTP server to print the requests. curl -i -H "x-api-key:APIKEY" -H "userAccount:USER1" https://localhost/test --insecure 127.0.0.1 - - [02/Jan/2019 12:13:51] "GET / HTTP/1.0" 200 - ERROR:root:Host: localhost X-Real-IP: 127.0.0.1 Connection: close User-Agent: curl/7.61.0 Accept: */* x-api-key: APIKEY userAccount: USER1 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282534#msg-282534 From francis at daoine.org Wed Jan 2 20:17:45 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 2 Jan 2019 20:17:45 +0000 Subject: NGINX not passing header to proxy In-Reply-To: <2776930fc6c2848e5339f51a374d870e.NginxMailingListEnglish@forum.nginx.org> References: <20190102124853.347wr5bq5ryrdd2a@daoine.org> <2776930fc6c2848e5339f51a374d870e.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190102201745.pkac522tnaj2tyo6@daoine.org> On Wed, Jan 02, 2019 at 11:57:51AM -0500, Sesshomurai wrote: Hi there, > Yeah, there might be more going on here that meets the eye. 
But here are > the curl commands that are producing different results. I am looking into > the upstream server as well, but the difference would imply > something about > NGINX is not the same as running curl from command line. Correct - nginx and curl make different requests (by default). If you use "curl -v", it should show the actual request that it makes. If you enable the debug log in nginx, you might be able to read from it what request nginx makes. (Ideally: enable fuller logging on the upstream server and compare the requests there -- but it sounds like you do not directly control the upstream server.) Some differences between nginx and curl which may be important are: nginx will make an http/1.0 request; curl will make an http/1.1 request. nginx will not include SNI in the http connection; curl will include it. If you can easily test things, perhaps try making a curl request of the upstream, using http/1.0 and/or SSLv3 (or some https option that omits SNI). If you can find a curl request that fails in the same way that the nginx request fails, that may point at the nginx config changes that will allow the nginx request to succeed. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Jan 3 23:17:44 2019 From: nginx-forum at forum.nginx.org (Sesshomurai) Date: Thu, 03 Jan 2019 18:17:44 -0500 Subject: NGINX not passing header to proxy In-Reply-To: <20190102201745.pkac522tnaj2tyo6@daoine.org> References: <20190102201745.pkac522tnaj2tyo6@daoine.org> Message-ID: <00e2c727c50ad6e0c10d045da7bad54d.NginxMailingListEnglish@forum.nginx.org> Hi, This problem was resolved with the help of the cloud provider. Turns out I need to set the Host header proxy_set_header Host upstreamhostname; It wasn't exactly clear in any docs (on their end) that this was necessary but glad it's solved and makes some sense as well I suppose. Thanks for all your suggestions Francis.
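[Editor's note] For the archive, the resolved setup can be sketched as a single location block. This is a hypothetical reconstruction from the snippets in this thread; "upstreamserver" is the placeholder hostname used earlier in the thread, not a real server.

```nginx
location /api/ecfr/0.1.0 {
    proxy_pass https://upstreamserver/0_1_0/ecfr;

    # The cloud provider's API gateway apparently routes on the Host
    # header, so it must carry the upstream hostname, not "localhost".
    proxy_set_header Host upstreamserver;
}
```

By default nginx derives the proxied Host value from the hostname in proxy_pass, so the explicit override mainly matters when proxy_pass does not use the name the gateway expects.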
Francis Daly Wrote: ------------------------------------------------------- > On Wed, Jan 02, 2019 at 11:57:51AM -0500, Sesshomurai wrote: > > Hi there, > > > Yeah, there might be more going on here that meets the eye. But > here are > > the curl commands that are producing different results. I am looking > into > > the upstream server as well, but the difference would imply > something about > > NGINX is not the same as running curl from command line. > > Correct - nginx and curl make different requests (by default). > > If you use "curl -v", it should show the actual request that it makes. > If > you enable the debug log in nginx, you might be able to read from it > what request nginx makes. > > (Ideally: enable fuller logging on the upstream server and compare the > requests there -- but it sounds like you d not directly control the > upstream server.) > > Some differences between nginx and curl which may be important are: > > nginx will make a http/1.0 request; curl will make a http/1.1 request. > > nginx will not include SNI in the http connection; curl will include > it. > > If you can easily test things, perhaps try making curl request of > the upstream, using http/1.0 and/or SSLv3 (or some https option that > omits SNI). > > If you can find a curl request that fails in the same way that the > nginx request fails, that may point at the nginx config changes that > will allow the nginx request succeed. 
> > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282518,282540#msg-282540 From mdounin at mdounin.ru Fri Jan 4 04:35:33 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 4 Jan 2019 07:35:33 +0300 Subject: stable | mainline - encoding error ssl_stapling_file In-Reply-To: References: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> Message-ID: <20190104043533.GJ99070@mdounin.ru> Hello! On Tue, Jan 01, 2019 at 06:24:04PM +0100, ???? wrote: > Am 01.01.19 um 17:10 schrieb ?????: > > Hi, > > would appreciate to get this (weird) error sorted/resolved. Having looked up pu > blic sources I could not find a remedy and thus placing my hope on this list. > > ssl_stapling_file foo.bar.der; > ssl_stapling on; > > nginx -t then produces: > > [emerg] 24249#24249: d2i_OCSP_RESPONSE_bio("/srv/ca/certs/ocsp_to_lan_3.cert.der > ") failed (SSL: error:0D0680A8:asn1 encoding routines:asn1_check_tlen:wrong tag > error:0D06C03A:asn1 encoding routines:asn1_d2i_ex_primitive:nested asn1 error er > ror:0D08303A:asn1 encoding routines:asn1_template_noexp_d2i:nested asn1 error:Fi > eld=responseStatus, Type=OCSP_RESPONSE) [...] > I generate the file the way I would trust is common standard/practice > (?) > 1. openssl genpkey -algorithm EC -pkeyopt ec_paramgen_curve:P-384 -out > foo.bar.key.pem -aes-256-cbc > 2. openssl req -config foo.bar.cnf -key foo.bar.key.pem -new -out > foo.bar.csr.pem > 3. openssl ca -config foobar.ca.cnf -extensions v3_foo-bar -days 365 > -notext -in foo.bar.csr.pem -out foo.bar.cert.pem > 4. openssl x509 -outform DER -in foo.bar.cert.pem -out > foo.bar.cert.der > > It generates a valid cert and openssl has no encoding issues. What is > difference and why this should not work? 
And why does the other command have to be run again after some days? The "ssl_stapling_file" directive needs an OCSP response obtained from your certificate authority, not a certificate. Since you are trying to load a certificate instead, parsing fails as expected. -- Maxim Dounin http://mdounin.ru/ From vtol at gmx.net Fri Jan 4 04:57:56 2019 From: vtol at gmx.net (=?UTF-8?B?0b3SieG2rOG4s+KEoA==?=) Date: Fri, 4 Jan 2019 05:57:56 +0100 Subject: stable | mainline - encoding error ssl_stapling_file In-Reply-To: <20190104043533.GJ99070@mdounin.ru> References: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> <20190104043533.GJ99070@mdounin.ru> Message-ID: <1862eec3-2033-298d-980f-61f99eed740f@gmx.net> An HTML attachment was scrubbed... URL: From francis at daoine.org Fri Jan 4 11:47:03 2019 From: francis at daoine.org (Francis Daly) Date: Fri, 4 Jan 2019 11:47:03 +0000 Subject: NGINX not passing header to proxy In-Reply-To: <00e2c727c50ad6e0c10d045da7bad54d.NginxMailingListEnglish@forum.nginx.org> References: <20190102201745.pkac522tnaj2tyo6@daoine.org> <00e2c727c50ad6e0c10d045da7bad54d.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190104114703.aqubryr4ivhmf6gj@daoine.org> On Thu, Jan 03, 2019 at 06:17:44PM -0500, Sesshomurai wrote: Hi there, > This problem was resolved with the help of the cloud provider. Turns out I > need to set the Host header > > proxy_set_header Host upstreamhostname; Good that you now have it working. Note that nginx should set the Host: header to the name used in the proxy_pass directive, so that if your config used the same upstreamserver string in proxy_pass https://upstreamserver/0_1_0/ecfr; as in the test curl command $ curl -i -H "x-api-key:APIKEY" -H "userAccount:USER1" https://upstreamserver/0_1_0/ecfr?index=title48 --insecure then it should have Just Worked. But perhaps two different hostnames were edited to the same placeholder word for the email.
Thanks for providing the final outcome, so that the next person with the same issue will have a chance to find the answer in the mail archive. Cheers, f -- Francis Daly francis at daoine.org From ottavio at campana.vi.it Fri Jan 4 17:18:51 2019 From: ottavio at campana.vi.it (Ottavio Campana) Date: Fri, 4 Jan 2019 18:18:51 +0100 Subject: How do I get the file descriptor of an incoming request? Message-ID: Hello, I am trying to write my first module for nginx. I have a ngx_http_request_t *r . How can I get the file descriptor where the request comes from? Thank you, Ottavio -- Non c'è più forza nella normalità, c'è solo monotonia -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Sat Jan 5 12:39:34 2019 From: arut at nginx.com (Roman Arutyunyan) Date: Sat, 5 Jan 2019 15:39:34 +0300 Subject: How do I get the file descriptor of an incoming request? In-Reply-To: References: Message-ID: <20190105123934.GR8528@Romans-MacBook-Air.local> Hello Ottavio, On Fri, Jan 04, 2019 at 06:18:51PM +0100, Ottavio Campana wrote: > Hello, > > I am trying to write my first module for nginx. I have a ngx_http_request_t > *r . How can I get the file descriptor where the request comes from? r->connection->fd > Thank you, > > Ottavio > > -- > Non c'è più forza nella normalità, c'è solo monotonia > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From larry.martell at gmail.com Sat Jan 5 21:01:17 2019 From: larry.martell at gmail.com (Larry Martell) Date: Sat, 5 Jan 2019 16:01:17 -0500 Subject: intermittent No module named context_processors when using nginx/uwsgi Message-ID: I am having an odd intermittent django problem. I have an app which is deployed at 30 different sites, some with apache and wsgi and some with nginx and uwsgi.
At only the nginx/uwsgi sites and only intermittently, users will get the error No module named context_processors. I am only posting it here because the issue only occurs when using nginx/uwsgi and never with apache/wsgi. I have posted this to both the Django group and stackoverflow, but had not received any help. It may happen on a page that was previously accessed with no error and upon refreshing the same page it will come up fine. It will not occur for months, then happen a few times in one day. Here is a typical traceback: Internal Server Error: / Traceback (most recent call last): File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/exception.py", line 35, in inner response = get_response(request) File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py", line 158, in _get_response response = self.process_exception_by_middleware(e, request) File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py", line 156, in _get_response response = response.render() File "/usr/local/lib/python3.5/dist-packages/django/template/response.py", line 106, in render self.content = self.rendered_content File "/usr/local/lib/python3.5/dist-packages/django/template/response.py", line 83, in rendered_content content = template.render(context, self._request) File "/usr/local/lib/python3.5/dist-packages/django/template/backends/django.py", line 61, in render return self.template.render(context) File "/usr/local/lib/python3.5/dist-packages/django/template/base.py", line 173, in render with context.bind_template(self): File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__ return next(self.gen) File "/usr/local/lib/python3.5/dist-packages/django/template/context.py", line 246, in bind_template processors = (template.engine.template_context_processors + File "/usr/local/lib/python3.5/dist-packages/django/utils/functional.py", line 36, in __get__ res = instance.__dict__[self.name] = self.func(instance) File 
"/usr/local/lib/python3.5/dist-packages/django/template/engine.py", line 85, in template_context_processors return tuple(import_string(path) for path in context_processors) File "/usr/local/lib/python3.5/dist-packages/django/template/engine.py", line 85, in return tuple(import_string(path) for path in context_processors) File "/usr/local/lib/python3.5/dist-packages/django/utils/module_loading.py", line 17, in import_string module = import_module(module_path) File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module return _bootstrap._gcd_import(name[level:], package, level) File "", line 986, in _gcd_import File "", line 969, in _find_and_load File "", line 956, in _find_and_load_unlocked ImportError: No module named 'ui.context_processors' That file does exist and is readable: -rw-rw-r-- 1 ubuntu ubuntu 1059 May 2 2018 ui/context_processors.py And here is my TEMPLATES setting: TEMPLATES = [ { 'BACKEND': 'django.template.backends.django.DjangoTemplates', 'DIRS': [ os.path.join(BASE_DIR, 'ui/templates'), os.path.join(BASE_DIR, 'app/dse/templates'), os.path.join(BASE_DIR, 'core/reports/templates'), ], 'APP_DIRS': True, 'OPTIONS': { 'context_processors': [ 'django.template.context_processors.debug', 'django.template.context_processors.request', 'django.contrib.auth.context_processors.auth', 'django.contrib.messages.context_processors.messages', 'context_processors.config', 'ui.context_processors.navigation', 'core.appmngr.context_processor', ], }, }, ] As I said it's intermittent. Anyone have any ideas on what it could be and/or how to debug it? 
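[Editor's note] One way to narrow down an intermittent ImportError like the one above is to capture the failing worker's import state at the moment it happens. The sketch below is hypothetical and not from the thread: the helper name `diagnose_import` and its placement (for example called from a Django middleware or a uwsgi postfork hook when the error fires) are assumptions. Differences in `sys.path` or the working directory between uwsgi workers are the kind of thing it would surface.

```python
import importlib
import logging
import os
import sys

log = logging.getLogger(__name__)


def diagnose_import(module_path):
    """Try to import a dotted module path and record the interpreter
    state that usually explains intermittent ImportErrors under uwsgi:
    the failing worker's pid, working directory, and sys.path."""
    state = {
        "module": module_path,
        "pid": os.getpid(),
        "cwd": os.getcwd(),
        "sys_path": list(sys.path),
        "ok": False,
        "error": None,
    }
    try:
        importlib.import_module(module_path)
        state["ok"] = True
    except ImportError as exc:
        state["error"] = str(exc)
        # Log everything needed to compare a failing worker against a
        # healthy one (e.g. a worker started without chdir/pythonpath).
        log.error(
            "import of %s failed in pid %s: %s; cwd=%s; sys.path=%s",
            module_path, state["pid"], exc, state["cwd"], state["sys_path"],
        )
    return state
```

Comparing the logged `cwd` and `sys.path` from a failing worker against a working one could show whether some workers are started without the project directory on the path, which would explain why the error comes and goes.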
From rpaprocki at fearnothingproductions.net Sat Jan 5 21:25:28 2019 From: rpaprocki at fearnothingproductions.net (Robert Paprocki) Date: Sat, 5 Jan 2019 13:25:28 -0800 Subject: intermittent No module named context_processors when using nginx/uwsgi In-Reply-To: References: Message-ID: <51944A9B-DB35-454F-92B8-9987A262B5B6@fearnothingproductions.net> Given that the stack trace is from Python, it's not an nginx configuration issue. Are you reverse proxying from nginx to multiple uwsgi backends that have different configuration? Sent from my iPhone > On Jan 5, 2019, at 13:01, Larry Martell wrote: > > I am having an odd intermittent django problem. I have an app which is > deployed at 30 different sites, some with apache and wsgi and some > with nginx and uwsgi. At only the nginx/uwsgi sites and only > intermittently, users will get the error No module named > context_processors. > > I am only posting it here because the issue only occurs when using > nginx/uwsgi and never with apache/wsgi. I have posted this to both the > Django group and stackoverflow, but had not received any help. > > It may happen on a page that was previously accessed with no error and > upon refreshing the same page it will come up fine. It will not occur > for months, then happen a few times in one day.
> > Here is a typical traceback: > > Internal Server Error: / > Traceback (most recent call last): > File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/exception.py", > line 35, in inner > response = get_response(request) > File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py", > line 158, in _get_response > response = self.process_exception_by_middleware(e, request) > File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py", > line 156, in _get_response > response = response.render() > File "/usr/local/lib/python3.5/dist-packages/django/template/response.py", > line 106, in render > self.content = self.rendered_content > File "/usr/local/lib/python3.5/dist-packages/django/template/response.py", > line 83, in rendered_content > content = template.render(context, self._request) > File "/usr/local/lib/python3.5/dist-packages/django/template/backends/django.py", > line 61, in render > return self.template.render(context) > File "/usr/local/lib/python3.5/dist-packages/django/template/base.py", > line 173, in render > with context.bind_template(self): > File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__ > return next(self.gen) > File "/usr/local/lib/python3.5/dist-packages/django/template/context.py", > line 246, in bind_template > processors = (template.engine.template_context_processors + > File "/usr/local/lib/python3.5/dist-packages/django/utils/functional.py", > line 36, in __get__ > res = instance.__dict__[self.name] = self.func(instance) > File "/usr/local/lib/python3.5/dist-packages/django/template/engine.py", > line 85, in template_context_processors > return tuple(import_string(path) for path in context_processors) > File "/usr/local/lib/python3.5/dist-packages/django/template/engine.py", > line 85, in > return tuple(import_string(path) for path in context_processors) > File "/usr/local/lib/python3.5/dist-packages/django/utils/module_loading.py", > line 17, in import_string > module = 
import_module(module_path) > File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module > return _bootstrap._gcd_import(name[level:], package, level) > File "", line 986, in _gcd_import > File "", line 969, in _find_and_load > File "", line 956, in _find_and_load_unlocked > ImportError: No module named 'ui.context_processors' > > That file does exist and is readable: > > -rw-rw-r-- 1 ubuntu ubuntu 1059 May 2 2018 ui/context_processors.py > > And here is my TEMPLATES setting: > > TEMPLATES = [ > { > 'BACKEND': 'django.template.backends.django.DjangoTemplates', > 'DIRS': [ > os.path.join(BASE_DIR, 'ui/templates'), > os.path.join(BASE_DIR, 'app/dse/templates'), > os.path.join(BASE_DIR, 'core/reports/templates'), > ], > 'APP_DIRS': True, > 'OPTIONS': { > 'context_processors': [ > 'django.template.context_processors.debug', > 'django.template.context_processors.request', > 'django.contrib.auth.context_processors.auth', > 'django.contrib.messages.context_processors.messages', > 'context_processors.config', > 'ui.context_processors.navigation', > 'core.appmngr.context_processor', > ], > }, > }, > ] > > As I said it's intermittent. Anyone have any ideas on what it could be > and/or how to debug it? > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From larry.martell at gmail.com Sat Jan 5 22:26:36 2019 From: larry.martell at gmail.com (Larry Martell) Date: Sat, 5 Jan 2019 17:26:36 -0500 Subject: intermittent No module named context_processors when using nginx/uwsgi In-Reply-To: <51944A9B-DB35-454F-92B8-9987A262B5B6@fearnothingproductions.net> References: <51944A9B-DB35-454F-92B8-9987A262B5B6@fearnothingproductions.net> Message-ID: Yes, the stack trace is from python, but I only get the issue using nginx and uwsgi with the same django code. As to your question, I have 2 different deployments of this app with nginx and uwsgi. 
One is using django1.9 and python 2.7 and a single uwsgi config and the second is using django2.0 and python 3.5 and does have multiple uwsgi configs and I get this error on both. On Sat, Jan 5, 2019 at 4:25 PM Robert Paprocki wrote: > > Given that the stack trace is from Python, it?s not an nginx configuration issue. Are you reverse proxying from nginx multiple uwsgi backgrounds that have different configuration? > > > > On Jan 5, 2019, at 13:01, Larry Martell wrote: > > > > I am having an odd interment django problem. I have an app which is > > deployed at 30 different sites, some with apache and wsgi and some > > with nginx and uwsgi. At only the nginx/uwsgi sites and only > > intermittently, users will get the error No module named > > context_processors. > > > > I am only posting it here because the issue only occurs when using > > nginx/uwsgi and never with apache/wsgi. I have posted this to both the > > Django group and stackoverflow, but had not received any help. > > > > It may happen on a page that was previously accessed with no error and > > upon refreshing the same page it will come up fine. It will not occur > > for months, then happen a few times in one day. 
> > > > Here is a typical traceback: > > > > Internal Server Error: / > > Traceback (most recent call last): > > File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/exception.py", > > line 35, in inner > > response = get_response(request) > > File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py", > > line 158, in _get_response > > response = self.process_exception_by_middleware(e, request) > > File "/usr/local/lib/python3.5/dist-packages/django/core/handlers/base.py", > > line 156, in _get_response > > response = response.render() > > File "/usr/local/lib/python3.5/dist-packages/django/template/response.py", > > line 106, in render > > self.content = self.rendered_content > > File "/usr/local/lib/python3.5/dist-packages/django/template/response.py", > > line 83, in rendered_content > > content = template.render(context, self._request) > > File "/usr/local/lib/python3.5/dist-packages/django/template/backends/django.py", > > line 61, in render > > return self.template.render(context) > > File "/usr/local/lib/python3.5/dist-packages/django/template/base.py", > > line 173, in render > > with context.bind_template(self): > > File "/usr/lib/python3.5/contextlib.py", line 59, in __enter__ > > return next(self.gen) > > File "/usr/local/lib/python3.5/dist-packages/django/template/context.py", > > line 246, in bind_template > > processors = (template.engine.template_context_processors + > > File "/usr/local/lib/python3.5/dist-packages/django/utils/functional.py", > > line 36, in __get__ > > res = instance.__dict__[self.name] = self.func(instance) > > File "/usr/local/lib/python3.5/dist-packages/django/template/engine.py", > > line 85, in template_context_processors > > return tuple(import_string(path) for path in context_processors) > > File "/usr/local/lib/python3.5/dist-packages/django/template/engine.py", > > line 85, in > > return tuple(import_string(path) for path in context_processors) > > File 
"/usr/local/lib/python3.5/dist-packages/django/utils/module_loading.py", > > line 17, in import_string > > module = import_module(module_path) > > File "/usr/lib/python3.5/importlib/__init__.py", line 126, in import_module > > return _bootstrap._gcd_import(name[level:], package, level) > > File "", line 986, in _gcd_import > > File "", line 969, in _find_and_load > > File "", line 956, in _find_and_load_unlocked > > ImportError: No module named 'ui.context_processors' > > > > That file does exist and is readable: > > > > -rw-rw-r-- 1 ubuntu ubuntu 1059 May 2 2018 ui/context_processors.py > > > > And here is my TEMPLATES setting: > > > > TEMPLATES = [ > > { > > 'BACKEND': 'django.template.backends.django.DjangoTemplates', > > 'DIRS': [ > > os.path.join(BASE_DIR, 'ui/templates'), > > os.path.join(BASE_DIR, 'app/dse/templates'), > > os.path.join(BASE_DIR, 'core/reports/templates'), > > ], > > 'APP_DIRS': True, > > 'OPTIONS': { > > 'context_processors': [ > > 'django.template.context_processors.debug', > > 'django.template.context_processors.request', > > 'django.contrib.auth.context_processors.auth', > > 'django.contrib.messages.context_processors.messages', > > 'context_processors.config', > > 'ui.context_processors.navigation', > > 'core.appmngr.context_processor', > > ], > > }, > > }, > > ] > > > > As I said it's intermittent. Anyone have any ideas on what it could be > > and/or how to debug it? 
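One way to narrow an intermittent ImportError like this down is to snapshot each worker's import state and diff failing workers against healthy ones; if the working directory or the visibility of the `ui` package differs between uwsgi workers, a missing `chdir`/`pythonpath` in the uwsgi config is a plausible suspect. A minimal sketch in plain Python (not Django API; where to call and log it is up to the app):

```python
import os
import sys

def import_state():
    """Facts that decide whether 'ui.context_processors' is importable
    in the current worker process."""
    return {
        "cwd": os.getcwd(),
        "ui_on_path": any(os.path.isdir(os.path.join(p, "ui"))
                          for p in sys.path),
    }

# Call once per worker (e.g. at the top of settings.py) and log the
# result; compare the output of workers that fail with ones that work.
print(import_state())
```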
> > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lagged at gmail.com Sun Jan 6 13:07:00 2019 From: lagged at gmail.com (Andrei) Date: Sun, 6 Jan 2019 15:07:00 +0200 Subject: Cache vs expires time Message-ID: Hello, I was wondering how can I force cache of a $request_uri (/abc) for 10 minutes, but set the browser expires headers for 5 minutes? -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Mon Jan 7 01:53:10 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Jan 2019 04:53:10 +0300 Subject: stable | mainline - encoding error ssl_stapling_file In-Reply-To: <1862eec3-2033-298d-980f-61f99eed740f@gmx.net> References: <1e3d3bb0-ea6b-d6c3-7d4a-c6791b384cd2@andreasschulze.de> <20190104043533.GJ99070@mdounin.ru> <1862eec3-2033-298d-980f-61f99eed740f@gmx.net> Message-ID: <20190107015310.GM99070@mdounin.ru> Hello! On Fri, Jan 04, 2019 at 05:57:56AM +0100, ???? wrote: > On 04.01.2019 05:35, Maxim Dounin wrote: > > The "ssl_stapling_file" directive needs an OCSP response obtained > from your certificate authority, not a certificate. As you are > trying to put a certificate instead, parsing expectedly fails. > > Thanks for the explanation which was not clear to me from the online > documentation. The documentation is pretty clear - it says you need an OCSP response, and it references appropriate openssl subcommand to generate one (http://nginx.org/r/ssl_stapling_file): : When set, the stapled OCSP response will be taken from the specified file : instead of querying the OCSP responder specified in the server certificate. : : The file should be in the DER format as produced by the "openssl ocsp" : command.
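For illustration, a DER-format response of the kind the documentation describes is typically fetched from the CA's OCSP responder along these lines; the file names and responder URL below are placeholders, not values from this thread:

```sh
# Ask the CA's OCSP responder about the server certificate and save
# the DER-encoded response where ssl_stapling_file can read it.
openssl ocsp -issuer issuer.pem -cert cert.pem \
    -url http://ocsp.ca.example/ \
    -no_nonce -respout /etc/nginx/stapling.der
```

Note the response has a limited validity period, so in practice the command is rerun periodically (e.g. from cron).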
> So basically nginx does not work as an OCSP responder > for domains with self-signed certificates unless the domain deploys its > own responder. Too bad as I had hoped that the "ssl_stapling_file" > directive would be able to process an OCSP certificate rather than a > response from a responder. Using OCSP (or any other revocation checking mechanism) with self-signed certificates simply does not make sense: as long as the certificate is compromised, everything signed by this certificate is compromised too, including any possible OCSP responses. OCSP stapling might make sense if you are instead running an internal CA and use certificates signed by this CA, but the CA does not have an OCSP responder configured. In this case, you can produce an OCSP response using the "openssl ocsp" command. Please refer to its manual page ("man ocsp") for details. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Mon Jan 7 02:47:47 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 7 Jan 2019 05:47:47 +0300 Subject: Cache vs expires time In-Reply-To: References: Message-ID: <20190107024746.GN99070@mdounin.ru> Hello! On Sun, Jan 06, 2019 at 03:07:00PM +0200, Andrei wrote: > I was wondering how can I force cache of a $request_uri (/abc) for 10 > minutes, but set the browser expires headers for 5 minutes? The most basic options are: - You can set Expires from your backend as desired for browser caching, and use the X-Accel-Expires header to set caching time for nginx (see http://nginx.org/r/proxy_cache_valid). - You can configure nginx to ignore Expires and Cache-Control as set by your backend (see http://nginx.org/r/proxy_ignore_headers), so these will be used only by browsers, and set caching time for nginx manually with proxy_cache_valid (see http://nginx.org/r/proxy_cache_valid).
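Applied to the original question (nginx caches for 10 minutes, browsers for 5), the second option could look roughly like this; `backend` and `cache_zone` are placeholder names (the zone needs a matching `proxy_cache_path ... keys_zone=cache_zone:...` elsewhere), and the backend is assumed to keep emitting its own 5-minute Expires/Cache-Control headers:

```nginx
# nginx ignores the backend's cache headers for its own caching
# decision and keeps 200 responses for 10 minutes; the headers are
# still forwarded, so browsers cache for their 5 minutes.
location = /abc {
    proxy_pass http://backend;            # placeholder upstream
    proxy_cache cache_zone;               # placeholder keys_zone name
    proxy_ignore_headers Expires Cache-Control;
    proxy_cache_valid 200 10m;
}
```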
-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Jan 7 15:19:49 2019 From: nginx-forum at forum.nginx.org (kycedbi) Date: Mon, 07 Jan 2019 10:19:49 -0500 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: References: Message-ID: Frank Liu Wrote: ------------------------------------------------------- > nginx doesn't officially support geoip2. You have to use third party > modules like https://github.com/leev/ngx_http_geoip2_module > > On Fri, Sep 21, 2018 at 2:39 PM anish10dec > > wrote: > > > Hi , > > > > As of now we are using > "nginx-module-geoip-1.10.0-1.el7.ngx.x86_64.rpm" > > available at repository > > https://nginx.org/packages/rhel/7/x86_64/RPMS/ > > > > Cant find rpm for geoip2 module . > > > > Please suggest from were to get the rpm package of geoip2 module as > we are > > using nginx-1-10.2 rpm. > > > > Posted at Nginx Forum: > > https://forum.nginx.org/read.php?2,281341,281341#msg-281341 Hello. Is it planned to add a package with the module https://github.com/leev/ngx_http_geoip2_module to the official nginx repository (as it is now for the GeoIP v1 module)? So that the user can safely update nginx through the linux package manager (apt/yum) without rebuilding the third-party module each time from the sources? Or are there any difficulties with this (internal competition with nginx-plus, inappropriate license ngx_http_geoip2_module, etc)? Thank you for the information. Sincerely. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281341,282564#msg-282564 From lagged at gmail.com Tue Jan 8 07:45:46 2019 From: lagged at gmail.com (Andrei) Date: Tue, 8 Jan 2019 09:45:46 +0200 Subject: Cache vs expires time In-Reply-To: <20190107024746.GN99070@mdounin.ru> References: <20190107024746.GN99070@mdounin.ru> Message-ID: Thanks Maxim!! On Mon, Jan 7, 2019 at 4:47 AM Maxim Dounin wrote: > Hello! 
> > On Sun, Jan 06, 2019 at 03:07:00PM +0200, Andrei wrote: > > > I was wondering how can I force cache of a $request_uri (/abc) for 10 > > minutes, but set the browser expires headers for 5 minutes? > > The most basic options are: > > - You can set Expires from your backend as desired for browser > caching, and use the X-Accel-Expires header to set caching time > for nginx (see http://nginx.org/r/proxy_cache_valid). > > - You can configure nginx to ignore Expires and Cache-Control as > set by your backend (see http://nginx.org/r/proxy_ignore_headers), > so these will be used only by browsers, and set caching time for > nginx manually with proxy_cache_valid (see > http://nginx.org/r/proxy_cache_valid). > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From lagged at gmail.com Tue Jan 8 07:55:30 2019 From: lagged at gmail.com (Andrei) Date: Tue, 8 Jan 2019 09:55:30 +0200 Subject: Cache vs expires time In-Reply-To: References: <20190107024746.GN99070@mdounin.ru> Message-ID: Is there a way to conditionally use proxy_ignore_headers? I'm trying to only ignore headers for requests which have $skip_cache = 0 for example On Tue, Jan 8, 2019 at 9:45 AM Andrei wrote: > Thanks Maxim!! > > On Mon, Jan 7, 2019 at 4:47 AM Maxim Dounin wrote: > >> Hello! >> >> On Sun, Jan 06, 2019 at 03:07:00PM +0200, Andrei wrote: >> >> > I was wondering how can I force cache of a $request_uri (/abc) for 10 >> > minutes, but set the browser expires headers for 5 minutes? >> >> The most basic options are: >> >> - You can set Expires from your backend as desired for browser >> caching, and use the X-Accel-Expires header to set caching time >> for nginx (see http://nginx.org/r/proxy_cache_valid). 
>> >> - You can configure nginx to ignore Expires and Cache-Control as >> set by your backend (see http://nginx.org/r/proxy_ignore_headers), >> so these will be used only by browsers, and set caching time for >> nginx manually with proxy_cache_valid (see >> http://nginx.org/r/proxy_cache_valid). >> >> -- >> Maxim Dounin >> http://mdounin.ru/ >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From enderulusoy at gmail.com Tue Jan 8 10:47:39 2019 From: enderulusoy at gmail.com (ender ulusoy) Date: Tue, 8 Jan 2019 13:47:39 +0300 Subject: help needed : catch a url and use in a location block Message-ID: Hello everyone, I have a website www.example.com. I just want to cache only pages includes adv_id. I'm using NGINX as a reverse proxy and 3 servers behind NGINX (Apache). An example URL is here: https://www.example.com/campains/a4/search?page=1&criteria%5Badv_id%5D=2004 I won't use cache on other pages. And I don't want to use an if condition because this page gets high traffic. How can I achieve that? What will be the right regex to catch this URL as a location block? I've tried some regex but can not catch the URL contains adv_id with the given URL format. location ~* ^/(adv_id) {} Doesn't work for example. -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Jan 8 20:25:58 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 8 Jan 2019 20:25:58 +0000 Subject: help needed : catch a url and use in a location block In-Reply-To: References: Message-ID: <20190108202558.lonul6ysr4itb5zd@daoine.org> On Tue, Jan 08, 2019 at 01:47:39PM +0300, ender ulusoy wrote: Hi there, > An example URL is here: > > https://www.example.com/campains/a4/search?page=1&criteria%5Badv_id%5D=2004 > > I won't use cache on other pages. 
And I don't want to use an if condition > because this page gets high traffic. How can I achieve that? What will be > the right regex to catch this URL as a location block? A "location" match does not involve the query string -- that is, it stops looking before the ?. So no "location" block can match adv_id in the query string. You could do something like if ($args ~ "adv_id") {} to set a variable, and then later cache (or not) based on that variable. But if you don't like "if", you'll want to come up with another way of doing something similar. Perhaps "map"? Does adv_id ever appear outside of the query string, where a "location" could match it? If so, you may want to use $request_uri instead of $args. f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Jan 8 21:32:10 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Tue, 08 Jan 2019 16:32:10 -0500 Subject: GeoIP Question Message-ID: <75cb3b4bcfd37f2e74065dea55f0e5cd.NginxMailingListEnglish@forum.nginx.org> Hi All I would really appreciate some help here. I want to restrict all countries except the US and Jamaica (JM). load_module "/usr/local/libexec/nginx/ngx_http_geoip_module.so"; These are my entries in nginx.conf Under http map $geoip_country_code $country_access { "US" 0; "JM" 0; default 1; } Under HTTP Server if ($country_access = '1') { return 403; } Under HTTPS Server server { if ($country_access = '1') { return 403; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282589,282589#msg-282589 From nginx-forum at forum.nginx.org Tue Jan 8 21:34:15 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Tue, 08 Jan 2019 16:34:15 -0500 Subject: GeoIP Question In-Reply-To: <75cb3b4bcfd37f2e74065dea55f0e5cd.NginxMailingListEnglish@forum.nginx.org> References: <75cb3b4bcfd37f2e74065dea55f0e5cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: Ugh. I clicked too quickly. Sorry. 
When I test this setup, I get a forbidden in the US and a programmer in Jamaica gets the same thing. Can't figure out why. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282589,282591#msg-282591 From nginx-forum at forum.nginx.org Tue Jan 8 22:36:15 2019 From: nginx-forum at forum.nginx.org (kycedbi) Date: Tue, 08 Jan 2019 17:36:15 -0500 Subject: GeoIP Question In-Reply-To: References: <75cb3b4bcfd37f2e74065dea55f0e5cd.NginxMailingListEnglish@forum.nginx.org> Message-ID: <6bf36ed16cc870f5b91fbfa98408e7ff.NginxMailingListEnglish@forum.nginx.org> You did not tell nginx where the geoip file is located. https://nginx.org/docs/http/ngx_http_geoip_module.html#geoip_country geoip_country /usr/share/GeoIP/GeoIP.dat; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282589,282593#msg-282593 From lists at lazygranch.com Wed Jan 9 03:30:44 2019 From: lists at lazygranch.com (lists at lazygranch.com) Date: Tue, 8 Jan 2019 19:30:44 -0800 Subject: =?UTF-8?Q?I_need_my_=E2=80=9Cbad_user_agent=E2=80=9D_map_not_to_block_my_r?= =?UTF-8?Q?ss_xml_file?= Message-ID: <20190108193044.6c1a26a7.lists@lazygranch.com> Stripping down the nginx.conf file: server{ location / { root /usr/share/nginx/html/mydomain/public_html; if ($badagent) { return 403; } } location = /feeds { try_files $uri $uri.xml $uri/ ; } } The "=" should force an exact match, but the badagent map is checked. 
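A note on the location matching involved here: `location = /feeds` matches only the literal URI /feeds, so a request for /feeds/file.xml falls into `location /`, where the $badagent check runs. A prefix location is one way to exempt the whole feeds directory (a sketch, not necessarily the fix the thread settles on):

```nginx
# Matches /feeds/file.xml and everything else under /feeds/,
# bypassing the $badagent check that lives in "location /".
location /feeds/ {
    try_files $uri $uri.xml $uri/ =404;
}
```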
From francis at daoine.org Wed Jan 9 08:20:05 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 9 Jan 2019 08:20:05 +0000 Subject: =?UTF-8?Q?Re=3A_I_need_my_=E2=80=9Cbad_user_agent=E2=80=9D_map_not_to_bloc?= =?UTF-8?Q?k_my_rss_xml_file?= In-Reply-To: <20190108193044.6c1a26a7.lists@lazygranch.com> References: <20190108193044.6c1a26a7.lists@lazygranch.com> Message-ID: <20190109082005.734l7ulotv2im6vp@daoine.org> On Tue, Jan 08, 2019 at 07:30:44PM -0800, lists at lazygranch.com wrote: Hi there, > Stripping down the nginx.conf file: > > server{ > location / { > root /usr/share/nginx/html/mydomain/public_html; > if ($badagent) { return 403; } > } > location = /feeds { > try_files $uri $uri.xml $uri/ ; > } > } > The "=" should force an exact match, but the badagent map is checked. What file on your filesystem is your rss xml file? Is it something other than /usr/local/nginx/html/feeds or /usr/local/nginx/html/feeds.xml? And what request do you make to fetch your rss xml file? Do things change if you move the "root" directive out of the "location" block so that it is directly in the "server" block? f -- Francis Daly francis at daoine.org From ottavio at campana.vi.it Wed Jan 9 09:52:11 2019 From: ottavio at campana.vi.it (Ottavio Campana) Date: Wed, 9 Jan 2019 10:52:11 +0100 Subject: Is there a way to get the raw request from the client in a handler? Message-ID: Hello, I am proceeding developing my module. Is there a way to get the raw HTTP request from a ngx_http_request_t ? Thank you, Ottavio -- Non c'è più forza nella normalità, c'è solo monotonia -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 9 13:23:05 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Jan 2019 16:23:05 +0300 Subject: Is there a way to get the raw request from the client in a handler? In-Reply-To: References: Message-ID: <20190109132305.GQ99070@mdounin.ru> Hello!
On Wed, Jan 09, 2019 at 10:52:11AM +0100, Ottavio Campana wrote: > I am proceeding developing my module. > > Is there a way to get the raw HTTP request from a ngx_http_request_t ? No. E.g., there is no such thing as "raw HTTP request" when using HTTP/2. -- Maxim Dounin http://mdounin.ru/ From ottavio at campana.vi.it Wed Jan 9 14:16:16 2019 From: ottavio at campana.vi.it (Ottavio Campana) Date: Wed, 9 Jan 2019 15:16:16 +0100 Subject: Is there a way to get the raw request from the client in a handler? In-Reply-To: <20190109132305.GQ99070@mdounin.ru> References: <20190109132305.GQ99070@mdounin.ru> Message-ID: Dear Maxim, in the nginx documentation http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request I find $request full original request line How is this variable populated? I was expecting that this variable is taken from an internal buffer. Or is it reconstructed? Thank you, Ottavio Il giorno mer 9 gen 2019 alle ore 14:23 Maxim Dounin ha scritto: > Hello! > > On Wed, Jan 09, 2019 at 10:52:11AM +0100, Ottavio Campana wrote: > > > I am proceeding developing my module. > > > > Is there a way to get the raw HTTP request from a ngx_http_request_t ? > > No. E.g., there is no such thing as "raw HTTP request" when using > HTTP/2. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Non c'è più forza nella normalità, c'è solo monotonia -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Jan 9 17:00:00 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Jan 2019 20:00:00 +0300 Subject: Cache vs expires time In-Reply-To: References: <20190107024746.GN99070@mdounin.ru> Message-ID: <20190109170000.GS99070@mdounin.ru> Hello! On Tue, Jan 08, 2019 at 09:55:30AM +0200, Andrei wrote: > Is there a way to conditionally use proxy_ignore_headers?
I'm trying to > only ignore headers for requests which have $skip_cache = 0 for example If you want different proxy_ignore_headers settings for different requests, you have to use different location{} blocks for these requests. You can do so either by using distinct path-based locations, or by conditionally routing some requests to a different location (e.g., with the "rewrite" directive). In the particular case of requests to /abc, consider something like this: location = /abc { proxy_pass ... proxy_cache ... proxy_ignore_headers Expires Cache-Control; proxy_cache_valid 5m; } Note well that it makes little to no sense to only ignore Expires and Cache-Control on cached requests, since these headers are only used by nginx for caching. If caching is not used, these headers are ignored anyway. See http://nginx.org/r/proxy_ignore_headers for details. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Jan 9 17:18:48 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 9 Jan 2019 20:18:48 +0300 Subject: Is there a way to get the raw request from the client in a handler? In-Reply-To: References: <20190109132305.GQ99070@mdounin.ru> Message-ID: <20190109171848.GT99070@mdounin.ru> Hello! On Wed, Jan 09, 2019 at 03:16:16PM +0100, Ottavio Campana wrote: > Dear Maxim, > > in the nginx documentation > http://nginx.org/en/docs/http/ngx_http_core_module.html#var_request I find > > $request full original request line > > How is this variable populated? I was expecting that this variable is taken > from an internal buffer. Or is it reconstructed? For HTTP/1.x (and HTTP/0.9), a reference to the request line is explicitly saved in the r->request_line field when parsing a request, see ngx_http_process_request_line() in src/http/ngx_http_request.c. For HTTP/2, it is reconstructed. See ngx_http_v2_construct_request_line() in src/http/v2/ngx_http_v2.c. 
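Inside a module handler, the saved line can be read directly from the request structure. A minimal sketch, assuming the usual nginx module build environment and an HTTP/1.x request (the handler name is hypothetical):

```c
/* r->request_line is the ngx_str_t saved by
 * ngx_http_process_request_line(); %V is nginx's log format
 * specifier for a pointer to an ngx_str_t. */
static ngx_int_t
my_handler(ngx_http_request_t *r)
{
    ngx_log_error(NGX_LOG_INFO, r->connection->log, 0,
                  "request line: \"%V\"", &r->request_line);
    return NGX_DECLINED;  /* let the next handler run */
}
```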
-- Maxim Dounin http://mdounin.ru/ From lists at lazygranch.com Thu Jan 10 02:14:04 2019 From: lists at lazygranch.com (lists at lazygranch.com) Date: Wed, 9 Jan 2019 18:14:04 -0800 Subject: =?UTF-8?Q?Re=3A_I_need_my_=E2=80=9Cbad_user_agent=E2=80=9D_map_not_to_bloc?= =?UTF-8?Q?k_my_rss_xml_file?= In-Reply-To: <20190109082005.734l7ulotv2im6vp@daoine.org> References: <20190108193044.6c1a26a7.lists@lazygranch.com> <20190109082005.734l7ulotv2im6vp@daoine.org> Message-ID: <20190109181404.72ee0f9f.lists@lazygranch.com> On Wed, 9 Jan 2019 08:20:05 +0000 Francis Daly wrote: > On Tue, Jan 08, 2019 at 07:30:44PM -0800, lists at lazygranch.com wrote: > > Hi there, > > > Stripping down the nginx.conf file: > > > > server{ > > location / { > > root /usr/share/nginx/html/mydomain/public_html; > > if ($badagent) { return 403; } > > } > > location = /feeds { > > try_files $uri $uri.xml $uri/ ; > > } > > } > > The "=" should force an exact match, but the badagent map is > > checked. > > What file on your filesystem is your rss xml file? > > Is it something other than /usr/local/nginx/html/feeds or > /usr/local/nginx/html/feeds.xml? > > And what request do you make to fetch your rss xml file? > > Do things change if you move the "root" directive out of the > "location" block so that it is directly in the "server" block? > > f Good catch on the root declaration. Actually I declared it twice. Once under server and once under location. I got rid of the declaration under location since that is the wrong place. So it is now: server{ root /usr/share/nginx/html/mydomain/public_html; location / { if ($badagent) { return 403; } } location = /feeds { try_files $uri $uri.xml $uri/ ; } } The "=" should force an exact match, but the badagent map is checked. Absolutely the badagent check under location / is being triggered. Everything works if I comment out the check. The URL to request the XML file is domain.com/feeds/file.xml . It is located in /usr/share/nginx/html/mydomain/public_html/feeds . 
Here is the access.log file. First line is with the badagent check skipped. Second line is with it enabled. 200 xxx.58.22.151 - - [10/Jan/2019:02:07:42 +0000] "GET /feeds/file.xml HTTP/1.1" 3614 "-" "-" "-" 403 xxx.58.22.151 - - [10/Jan/2019:02:08:38 +0000] "GET /feeds/file.xml HTTP/1.1" 169 "-" "-" "-" I'm using the RSS reader Akregator in this case. Some readers work fine since they act more like browsers. From anoopalias01 at gmail.com Thu Jan 10 02:57:08 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 10 Jan 2019 08:27:08 +0530 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state Message-ID: Hi, Have had a really strange issue on a Nginx server configured as a reverse proxy wherein the server stops responding when the network connections in ESTABLISHED state and FIN_WAIT state are very high compared to normal operation. If you see the below network graph, at around 00:30 hours there is a big spike in network connections in FIN_WAIT state, to around 12000 from the normal value of ~20 https://i.imgur.com/wb6VMWo.png At this state, Nginx stops responding fully and does not work even after a full restart of the service.
Switching off Nginx and bring Apache service to the frontend (removing the reverse proxy) fix this and the connections drop Nginx config & build setting ################################## nginx -V nginx version: nginx/1.15.8 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) built with LibreSSL 2.8.3 TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.42 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.8.3 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/dev/shm/client_temp --http-proxy-temp-path=/dev/shm/proxy_temp --http-fastcgi-temp-path=/dev/shm/fastcgi_temp --http-uwsgi-temp-path=/dev/shm/uwsgi_temp --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --add-dynamic-module=incubator-pagespeed-ngx-1.13.35.2-stable --add-dynamic-module=/usr/local/rvm/gems/ruby-2.5.3/gems/passenger-6.0.0/src/nginx_module --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.61 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 --add-dynamic-module=ngx_http_geoip2_module 
--add-dynamic-module=testcookie-nginx-module --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E ##################################### # worker_processes auto; #Set to auto for a powerful server worker_processes 1; worker_rlimit_nofile 69152; worker_shutdown_timeout 10s; # worker_cpu_affinity auto; timer_resolution 1s; thread_pool iopool threads=32 max_queue=65536; pcre_jit on; pid /var/run/nginx.pid; error_log /var/log/nginx/error_log; #Load Dynamic Modules include /etc/nginx/modules.d/*.load; events { worker_connections 20480; use epoll; multi_accept on; accept_mutex off; } lingering_close off; limit_req zone=FLOODVHOST burst=200; limit_req zone=FLOODPROTECT burst=200; limit_conn PERSERVER 60; client_header_timeout 5s; client_body_timeout 5s; send_timeout 5s; keepalive_timeout 0; http2_idle_timeout 20s; http2_recv_timeout 20s; aio threads=iopool; aio_write on; directio 64m; output_buffers 2 512k; tcp_nodelay on; types_hash_max_size 4096; server_tokens off; client_max_body_size 2048m; reset_timedout_connection on; #Proxy proxy_read_timeout 300; proxy_send_timeout 300; proxy_connect_timeout 30s; #FastCGI fastcgi_read_timeout 300; fastcgi_send_timeout 300; fastcgi_connect_timeout 30s; #Proxy Buffer proxy_buffering on; proxy_buffer_size 128k; proxy_buffers 8 128k; proxy_busy_buffers_size 256k; #FastCGI Buffer fastcgi_buffer_size 128k; fastcgi_buffers 8 128k; fastcgi_busy_buffers_size 256k; server_names_hash_max_size 2097152; server_names_hash_bucket_size 128; ###################################################### -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From peter_booth at me.com Thu Jan 10 06:03:06 2019 From: peter_booth at me.com (Peter Booth) Date: Thu, 10 Jan 2019 01:03:06 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: <452D8147-1D2F-4FE7-966E-1EF6469D9A20@me.com> The important question here is not the connections in FIN_WAIT. It's "why do you have so many sockets in ESTABLISHED state?" First thing to do is to run netstat -ant | grep tcp and see where these connections are to. Do you have a configuration that is causing an endless loop of requests? Sent from my iPhone > On Jan 9, 2019, at 9:57 PM, Anoop Alias wrote: > > Hi, > > Have had a really strange issue on a Nginx server configured as a reverse proxy wherein the server stops responding when the network connections in ESTABLISHED state and FIN_WAIT state are very high compared to normal operation. > > If you see the below network graph, at around 00:30 hours there is a big spike in network connections in FIN_WAIT state, to around 12000 from the normal value of ~20 > > https://i.imgur.com/wb6VMWo.png > > At this state, Nginx stops responding fully and does not work even after a full restart of the service.
> > Switching off Nginx and bring Apache service to the frontend (removing the reverse proxy) fix this and the connections drop > > Nginx config & build setting > ################################## > nginx -V > nginx version: nginx/1.15.8 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC) > built with LibreSSL 2.8.3 > TLS SNI support enabled > configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/etc/nginx/modules --with-pcre=./pcre-8.42 --with-pcre-jit --with-zlib=./zlib-1.2.11 --with-openssl=./libressl-2.8.3 --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error_log --http-log-path=/var/log/nginx/access_log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/dev/shm/client_temp --http-proxy-temp-path=/dev/shm/proxy_temp --http-fastcgi-temp-path=/dev/shm/fastcgi_temp --http-uwsgi-temp-path=/dev/shm/uwsgi_temp --http-scgi-temp-path=/dev/shm/scgi_temp --user=nobody --group=nobody --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-file-aio --with-threads --with-stream --with-stream_ssl_module --with-http_slice_module --with-compat --with-http_v2_module --add-dynamic-module=incubator-pagespeed-ngx-1.13.35.2-stable --add-dynamic-module=/usr/local/rvm/gems/ruby-2.5.3/gems/passenger-6.0.0/src/nginx_module --add-dynamic-module=ngx_brotli --add-dynamic-module=echo-nginx-module-0.61 --add-dynamic-module=headers-more-nginx-module-0.32 --add-dynamic-module=ngx_http_redis-0.3.8 --add-dynamic-module=redis2-nginx-module --add-dynamic-module=srcache-nginx-module-0.31 --add-dynamic-module=ngx_devel_kit-0.3.0 --add-dynamic-module=set-misc-nginx-module-0.31 
--add-dynamic-module=ngx_http_geoip2_module --add-dynamic-module=testcookie-nginx-module --add-dynamic-module=ModSecurity-nginx --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -m64 -mtune=generic' --with-ld-opt=-Wl,-E > > ##################################### > # worker_processes auto; #Set to auto for a powerful server > worker_processes 1; > worker_rlimit_nofile 69152; > worker_shutdown_timeout 10s; > # worker_cpu_affinity auto; > timer_resolution 1s; > thread_pool iopool threads=32 max_queue=65536; > pcre_jit on; > pid /var/run/nginx.pid; > error_log /var/log/nginx/error_log; > > #Load Dynamic Modules > include /etc/nginx/modules.d/*.load; > > > events { > worker_connections 20480; > use epoll; > multi_accept on; > accept_mutex off; > } > > lingering_close off; > limit_req zone=FLOODVHOST burst=200; > limit_req zone=FLOODPROTECT burst=200; > limit_conn PERSERVER 60; > client_header_timeout 5s; > client_body_timeout 5s; > send_timeout 5s; > keepalive_timeout 0; > http2_idle_timeout 20s; > http2_recv_timeout 20s; > > > aio threads=iopool; > aio_write on; > directio 64m; > output_buffers 2 512k; > > tcp_nodelay on; > > types_hash_max_size 4096; > server_tokens off; > client_max_body_size 2048m; > reset_timedout_connection on; > > #Proxy > proxy_read_timeout 300; > proxy_send_timeout 300; > proxy_connect_timeout 30s; > > #FastCGI > fastcgi_read_timeout 300; > fastcgi_send_timeout 300; > fastcgi_connect_timeout 30s; > > #Proxy Buffer > proxy_buffering on; > proxy_buffer_size 128k; > proxy_buffers 8 128k; > proxy_busy_buffers_size 256k; > > #FastCGI Buffer > fastcgi_buffer_size 128k; > fastcgi_buffers 8 128k; > fastcgi_busy_buffers_size 256k; > > server_names_hash_max_size 2097152; > server_names_hash_bucket_size 128; > ###################################################### > > > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > 
nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 10 07:54:14 2019 From: nginx-forum at forum.nginx.org (nevereturn01) Date: Thu, 10 Jan 2019 02:54:14 -0500 Subject: Use sub-url to identify the different server Message-ID: <3b5c0b7004b9067a9d4356c602cf4aed.NginxMailingListEnglish@forum.nginx.org> Hi experts, I have 2 internal web hosts & 1 dedicated Nginx as reverse proxy, eg 10.1.1.1 & 10.1.1.2 Now, I need to access the different hosts via sub-url. eg: 1. https://www.domain.com/site1 -----> https://10.1.1.1/ https://www.domain.com/site1/ui -----> https://10.1.1.1/ui/ 2. https://www.domain.com/site2 -----> https://10.1.1.2/ https://www.domain.com/site2/ui -----> https://10.1.1.2/ui/ Any suggestions? Thanks in advance Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282615,282615#msg-282615 From lagged at gmail.com Thu Jan 10 08:32:28 2019 From: lagged at gmail.com (Andrei) Date: Thu, 10 Jan 2019 10:32:28 +0200 Subject: Cache vs expires time In-Reply-To: <20190109170000.GS99070@mdounin.ru> References: <20190107024746.GN99070@mdounin.ru> <20190109170000.GS99070@mdounin.ru> Message-ID: Hello! Thanks again for the pointers. I have caching enabled, and the purpose of this is to set different expire times based on the request (if it's cacheable). So I have 3 locations: 1 for frontpage, 1 for dynamic pages and another for static content. I can't use your example though because it will ignore those headers even for requests which shouldn't be cached, hence the $skip_cache variable check. Is there a way to tie checking a variable value to the ignore headers method? On Wed, Jan 9, 2019, 19:00 Maxim Dounin wrote: > Hello! > > On Tue, Jan 08, 2019 at 09:55:30AM +0200, Andrei wrote: > > > Is there a way to conditionally use proxy_ignore_headers? 
I'm trying to > > only ignore headers for requests which have $skip_cache = 0 for example > > If you want different proxy_ignore_headers settings for > different requests, you have to use different location{} blocks > for these requests. You can do so either by using distinct > path-based locations, or by conditionally routing some requests to > a different location (e.g., with the "rewrite" directive). > > In the particular case of requests to /abc, consider something > like this: > > location = /abc { > proxy_pass ... > proxy_cache ... > proxy_ignore_headers Expires Cache-Control; > proxy_cache_valid 5m; > } > > Note well that it makes little to no sense to only ignore Expires > and Cache-Control on cached requests, since these headers are only > used by nginx for caching. If caching is not used, these headers > are ignored anyway. See http://nginx.org/r/proxy_ignore_headers > for details. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Thu Jan 10 08:50:33 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 10 Jan 2019 08:50:33 +0000 Subject: =?UTF-8?Q?Re=3A_I_need_my_=E2=80=9Cbad_user_agent=E2=80=9D_map_not_to_bloc?= =?UTF-8?Q?k_my_rss_xml_file?= In-Reply-To: <20190109181404.72ee0f9f.lists@lazygranch.com> References: <20190108193044.6c1a26a7.lists@lazygranch.com> <20190109082005.734l7ulotv2im6vp@daoine.org> <20190109181404.72ee0f9f.lists@lazygranch.com> Message-ID: <20190110085033.z5uctkk5hnthebkn@daoine.org> On Wed, Jan 09, 2019 at 06:14:04PM -0800, lists at lazygranch.com wrote: Hi there, > location / { > if ($badagent) { return 403; } > } > location = /feeds { > try_files $uri $uri.xml $uri/ ; > } > The "=" should force an exact match, but the badagent map is > checked. 
> > Absolutely the badagent check under location / is being triggered. > Everything works if I comment out the check. > > The URL to request the XML file is domain.com/feeds/file.xml . If the request is /feeds/file.xml, that will not exactly match "/feeds". location = /feeds/file.xml {} should serve the file feeds/file.xml below the document root. Or, if you want to handle all requests that start with /feeds/ in a similar way, location /feeds/ {} or location ^~ /feeds/ {} should do that. (The two are different if you have regex locations in the config.) f -- Francis Daly francis at daoine.org From zn1314 at 126.com Thu Jan 10 09:14:17 2019 From: zn1314 at 126.com (David Ni) Date: Thu, 10 Jan 2019 17:14:17 +0800 (CST) Subject: share cookies between servers Message-ID: <7c4f60dd.75ec.168370a6e4d.Coremail.zn1314@126.com> Hi Experts, I have one requirement right now: we are using nginx with LDAP auth, and I have created many servers such as datanode02.bddev.test.net and datanode03.bddev.test.net. When I access these servers, I need to input the correct username and password stored in LDAP. My requirement is for datanode02.bddev.test.net and datanode03.bddev.test.net to share cookies with each other, so that once I have accessed datanode02.bddev.test.net successfully, I don't need to input the username and password again when I access datanode03.bddev.test.net. Is this possible? If possible, how can I achieve it? Thanks very much! 
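[A hypothetical sketch for the question above, not something from the thread: if the sign-in state is carried in a session cookie set by the upstream, nginx can rewrite that cookie's scope to the parent domain so every datanodeNN.bddev.test.net vhost receives it. The location, upstream name, and the assumption that auth state lives in a cookie at all are illustrative only:]

```nginx
# Assumption: the upstream sets a session cookie scoped to one vhost.
# proxy_cookie_domain (a stock nginx directive) rewrites the cookie's
# Domain attribute so it is valid for every *.bddev.test.net host.
location / {
    proxy_pass http://dev-datanode02:8042/;
    proxy_cookie_domain dev-datanode02 .bddev.test.net;
}
```

[One caveat: if the LDAP module challenges with HTTP Basic authentication rather than a cookie, browsers cache those credentials per hostname, so a shared cookie alone may not remove the second prompt.]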
server { listen 80; server_name datanode02.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://dev-datanode02:8042/; more_clear_headers "X-Frame-options"; sub_filter dev-resourcemanager01:8088 resourcemanager01rsm.bddev.test.net:80; sub_filter dev-historyserver01:8088 historyserver01rsm.bddev.virtueit.net:80; sub_filter dev-historyserver01:19888 historyserver01ht.bddev.virtueit.net:80; sub_filter_types *; sub_filter_once off; } } server { listen 80; server_name datanode03.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://dev-datanode03:8042/; more_clear_headers "X-Frame-options"; sub_filter dev-resourcemanager01:8088 resourcemanager01rsm.bddev.test.net:80; sub_filter dev-historyserver01:8088 historyserver01rsm.bddev.virtueit.net:80; sub_filter dev-historyserver01:19888 historyserver01ht.bddev.virtueit.net:80; sub_filter_types *; sub_filter_once off; -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Thu Jan 10 13:05:15 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Jan 2019 16:05:15 +0300 Subject: Cache vs expires time In-Reply-To: References: <20190107024746.GN99070@mdounin.ru> <20190109170000.GS99070@mdounin.ru> Message-ID: <20190110130515.GW99070@mdounin.ru> Hello! On Thu, Jan 10, 2019 at 10:32:28AM +0200, Andrei wrote: > Thanks again for the pointers. I have caching enabled, and the purpose of > this is to set different expire times based on the request (if it's > cacheable). So I have 3 locations: 1 for frontpage, 1 for dynamic pages and > another for static content. 
I can't use your example though because it will > ignore those headers even for requests which shouldn't be cached, hence the > $skip_cache variable check. Is there a way to tie checking a variable value > to the ignore headers method? Let me re-iterate: > > Note well that it makes little to no sense to only ignore Expires > > and Cache-Control on cached requests, since these headers are only > > used by nginx for caching. If caching is not used, these headers > > are ignored anyway. See http://nginx.org/r/proxy_ignore_headers > > for details. -- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Thu Jan 10 13:45:31 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 10 Jan 2019 16:45:31 +0300 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: <20190110134531.GY99070@mdounin.ru> Hello! On Thu, Jan 10, 2019 at 08:27:08AM +0530, Anoop Alias wrote: > Have had a really strange issue on a Nginx server configured as a reverse > proxy wherein the server stops responding when the network connections in > ESTABLISHED state and FIN_WAIT state in very high compared to normal > working > > If you see the below network graph, at around 00:30 hours there is a big > spike in network connections in FIN_WAIT state, to around 12000 from the > normal value of ~20 > > https://i.imgur.com/wb6VMWo.png > > At this state, Nginx stops responding fully and does not work even after a > full restart of the service. From the image it looks like there are CLOSE_WAIT sockets, not FIN_WAIT. Most likely this means that nginx was blocked on something - probably system resources somewhere inside the kernel, or disk operation (on a dead disk?), or something like this. E.g., this is something trivial to observe if you are serving files from an NFS share, and the NFS server dies. Alternatively, this might be a bug somewhere. 
In particular, it looks like you are using various 3rd party modules, and a bug in any of them can cause similar problems. But I don't really think this is the case, as restarting nginx usually fixes such problems. -- Maxim Dounin http://mdounin.ru/ From anoopalias01 at gmail.com Thu Jan 10 18:04:35 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 10 Jan 2019 23:34:35 +0530 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: The issue was identified to be an enormous number of HTTP requests (an attack) to one of the hosted domains, which was using Cloudflare. The traffic is coming in from Cloudflare and this was causing nginx to be exhausted in terms of the TCP stack ######################################### # netstat -tn|awk '{print $6}'|sort|uniq -c 1 19922 CLOSE_WAIT 2 CLOSING 23528 ESTABLISHED 17785 FIN_WAIT1 4 FIN_WAIT2 1 Foreign 17 LAST_ACK 904 SYN_RECV 14 SYN_SENT 142 TIME_WAIT ############################################ Interestingly, under the same attack, removing Nginx from the picture and exposing httpd causes the connections to stay fine ############################################ ]# netstat -tn|awk '{print $6}'|sort|uniq -c 1 39 CLOSE_WAIT 9 CLOSING 664 ESTABLISHED 13 FIN_WAIT1 48 FIN_WAIT2 1 Foreign 24 LAST_ACK 8 SYN_RECV 12 SYN_SENT 1137 TIME_WAIT ############################################## although the load is a bit higher than usual. It looks like the TCP connections in the ESTABLISHED state are somehow piling up with Nginx Number of established connections over time with nginx ############## 535 ESTABLISHED 1195 ESTABLISHED 23437 ESTABLISHED 23490 ESTABLISHED 23482 ESTABLISHED 389 ESTABLISHED ############## I think this could be a misconfiguration in Nginx? 
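[The per-state counts above, plus Peter's earlier suggestion to see where the connections go, can be reproduced with one small pipeline. A self-contained sketch, fed here from canned `netstat -tn`-style output rather than live sockets; the addresses are made up:]

```shell
# Canned sample standing in for `netstat -tn` output on a live box.
sample() {
cat <<'EOF'
tcp        0      0 10.0.0.1:443   203.0.113.5:50001   ESTABLISHED
tcp        0      0 10.0.0.1:443   203.0.113.5:50002   ESTABLISHED
tcp        0      0 10.0.0.1:443   203.0.113.5:50003   ESTABLISHED
tcp        0      0 10.0.0.1:443   203.0.113.6:50004   CLOSE_WAIT
tcp        0      0 10.0.0.1:443   203.0.113.6:50005   CLOSE_WAIT
tcp        0      0 10.0.0.1:443   203.0.113.7:50006   FIN_WAIT1
EOF
}

# Count sockets by TCP state (the same pipeline used in this thread).
sample | awk '{print $6}' | sort | uniq -c

# Count sockets by remote peer, busiest first, to spot a flooding
# source; splitting on ":" strips the ephemeral port so connections
# group by address.
sample | awk '{split($5, a, ":"); print a[1]}' | sort | uniq -c | sort -rn
```

[On a live server, replace `sample` with `netstat -tn` (or `ss -tn`); a handful of peers owning thousands of ESTABLISHED sockets usually narrows the problem down quickly.]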
Would be great if someone points out what is wrong with the config Thanks, On Thu, Jan 10, 2019 at 8:27 AM Anoop Alias wrote: > Hi, > > Have had a really strange issue on a Nginx server configured as a reverse > proxy wherein the server stops responding when the network connections in > ESTABLISHED state and FIN_WAIT state in very high compared to normal > working > > [...] > > -- > *Anoop P Alias* -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From anoopalias01 at gmail.com Thu Jan 10 18:25:13 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Thu, 10 Jan 2019 23:55:13 +0530 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: This server is not using network drives and the only thing I can think of is the temp paths set to /dev/shm --http-client-body-temp-path=/dev/shm/client_temp --http-proxy-temp-path=/dev/shm/proxy_temp --http-fastcgi-temp-path=/dev/shm/fastcgi_temp --http-uwsgi-temp-path=/dev/shm/uwsgi_temp --http-scgi-temp-path=/dev/shm/scgi_temp Could this be causing an issue? The domain under attack is set to proxy to httpd and would surely be using http-client-body-temp-path and http-proxy-temp-path, although the system is quite beefy in terms of cpu and ram # df -h|grep shm tmpfs 63G 7.2M 63G 1% /dev/shm On Thu, Jan 10, 2019 at 11:34 PM Anoop Alias wrote: > The issue was identified to be an enormous number of http request ( > attack) to one of the hosted domains that was using cloudflare. 
The traffic > is coming in from cloudflare and this was causing nginx to be exhausted in > terms of the TCP stack > > [...] > > -- > *Anoop P Alias* -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From nginx-forum at forum.nginx.org Thu Jan 10 19:47:27 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Thu, 10 Jan 2019 14:47:27 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: I have more info on the system state at the time the CLOSE_WAIT connections went skyrocketing Memory ####################################### KiB Mem : 13174569+total, 8684164 free, 28138264 used, 94923264 buff/cache KiB Swap: 4194300 total, 4194300 free, 0 used. 86984112 avail Mem ######################################## CPU ############################################# top - 16:52:02 up 4 days, 5:37, 1 user, load average: 3.68, 5.14, 6.06 Tasks: 935 total, 3 running, 932 sleeping, 0 stopped, 0 zombie %Cpu(s): 22.9 us, 8.0 sy, 0.0 ni, 66.2 id, 2.3 wa, 0.0 hi, 0.6 si, 0.0 st ############################################## PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 3825721 nobody 20 0 3106072 2.4g 3960 R 105.6 1.9 0:57.32 nginx: worker process The Nginx worker process was using 105% CPU !---------------------------------- netstat by state 16234 CLOSE_WAIT 6 CLOSING 1195 ESTABLISHED 5 FIN_WAIT1 22 FIN_WAIT2 14 LAST_ACK 63 LISTEN 447 SYN_RECV 10 SYN_SENT 1875 TIME_WAIT !---------------------------------- netstat by state 20139 CLOSE_WAIT 2 CLOSING 23482 ESTABLISHED 17703 FIN_WAIT1 4 FIN_WAIT2 19 LAST_ACK 63 LISTEN 863 SYN_RECV 14 SYN_SENT 130 TIME_WAIT Nginx Process ################################### root 1724190 1.9 1.9 2835060 2555304 ? Ss 10:17 7:48 nginx: master process /usr/sbin/nginx -c /etc/nginx/nginx.conf nobody 3825721 10.6 1.9 3106072 2561764 ? 
Rl 16:42 1:02 \_ nginx: worker process nobody 3825723 0.0 1.9 2835220 2549460 ? S 16:42 0:00 \_ nginx: cache manager process #################################### As soon as Nginx was stopped and apache was made the frontend web server, the connections changed as below ############ !---------------------------------- netstat by state 1 CLOSE_WAIT 2 CLOSING 389 ESTABLISHED 35111 FIN_WAIT1 2 FIN_WAIT2 44 LAST_ACK 42 LISTEN 1 SYN_RECV 12 SYN_SENT 86 TIME_WAIT ################################## FIN_WAIT1 was still high, but ESTABLISHED and CLOSE_WAIT went back to normal. The overall CPU and memory usage remained fine. One difference between Nginx and httpd on this server is that Nginx is configured for HTTP/2 while httpd doesn't have the http2 module enabled. The following is how the listen is configured for the default server in Nginx server { listen x.x.x.x:80 default_server backlog=16384 reuseport deferred; server { listen x.x.x.x:443 default_server ssl backlog=16384 reuseport deferred; the individual vhosts have http2 also enabled Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282639#msg-282639 From nginx-forum at forum.nginx.org Thu Jan 10 21:29:00 2019 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 10 Jan 2019 16:29:00 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: <4feedf638721bbd68e2ec0d373979bd9.NginxMailingListEnglish@forum.nginx.org> Try this: worker_processes 2; worker_rlimit_nofile 32767; thread_pool iopool threads=16 max_queue=32767; Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282640#msg-282640 From peter_booth at me.com Thu Jan 10 21:32:13 2019 From: peter_booth at me.com (Peter Booth) Date: Thu, 10 Jan 2019 16:32:13 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: 
<02A55621-2138-48F8-AE56-8CA4C44B4C74@me.com> Your web server logs should have the key to solving this. Do you know what url was being requested? Do the URLs look valid? Are the requests all for the same resource? Are the requests coming from a single IP range? Are the requests all coming with the same user-agent? Does the time this started mean anything? Is it static content? Was the user-agent the same or different? You say that this is proxied content. Is it something you can microcache? I once saw a DDOS attack on a retail site that did business in most countries but not China, for an expensive dynamic page. We were using nginx to cache the page but the bots were requesting a URL that had an additional cache-busting parameter. The source IPs were all from Shanghai so we knew they were problematic. I've seen problems like the one you describe caused by many different things - redirect loops, CDN configuration problems, website misconfigurations that confuse google or bing, toxic test agents, badly designed webpages - but the bottom line is that if you understand the patterns in your web server logs then you will know what's happening. Sent from my iPhone > On Jan 10, 2019, at 1:04 PM, Anoop Alias wrote: > > The issue was identified to be an enormous number of http request ( attack) to one of the hosted domains that was using cloudflare. 
The traffic is coming in from cloudflare and this was causing nginx to be exhausted in terms of the TCP stack > > [...] > > -- > Anoop P Alias > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: 
From francis at daoine.org Thu Jan 10 23:01:31 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 10 Jan 2019 23:01:31 +0000 Subject: share cookies between servers In-Reply-To: <7c4f60dd.75ec.168370a6e4d.Coremail.zn1314@126.com> References: <7c4f60dd.75ec.168370a6e4d.Coremail.zn1314@126.com> Message-ID: <20190110230131.7g7bvc3nq3gskrv3@daoine.org> On Thu, Jan 10, 2019 at 05:14:17PM +0800, David Ni wrote: Hi there, > I have one requirement right now,we are using nginx with ldap auth ... > my requirement is that whether datanode02.bddev.test.net datanode03.bddev.test.net can share cookies between each other, Read about http cookies, and the "domain" attribute/directive of them. If you decide that the benefits to you are worth more than the costs to you, then find whatever part of your system sets the cookies (creates the Set-Cookie: header), and change that to add a suitable "Domain=" string. That part of your system is probably not nginx-provided C-code. Good luck with it, f -- Francis Daly francis at daoine.org 
From francis at daoine.org Thu Jan 10 23:48:05 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 10 Jan 2019 23:48:05 +0000 Subject: Use sub-url to identify the different server In-Reply-To: <3b5c0b7004b9067a9d4356c602cf4aed.NginxMailingListEnglish@forum.nginx.org> References: <3b5c0b7004b9067a9d4356c602cf4aed.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190110234805.wkznq2k3sxahfj22@daoine.org> On Thu, Jan 10, 2019 at 02:54:14AM -0500, nevereturn01 wrote: Hi there, > I have 2 internal web hosts & 1 dedicate Nginx as reverse proxy, eg 10.1.1.1 > & 10.1.1.2 > > Now, I need to access the different hosts via sub-url. eg: > > 1. 
https://www.domain.com/site1 -----> https://10.1.1.1/ > https://www.domain.com/site1/ui -----> https://10.1.1.1/ui/ > 2. https://www.domain.com/site2 -----> https://10.1.1.2/ > https://www.domain.com/site2/ui -----> https://10.1.1.2/ui/ If you have the freedom to change the internal web hosts, such that (for example) the 10.1.1.1 content is all locally below the url //10.1.1.1/site1/ instead of //10.1.1.1/, then you will probably find it much easier to reverse-proxy them. location /site1 { proxy_pass https://10.1.1.1; } location /site2 { proxy_pass https://10.1.1.2; } is possibly all you would need in that case. If you can't do that, then the start of what you will need will be more like location /site1/ { proxy_pass https://10.1.1.1/; } location /site2/ { proxy_pass https://10.1.1.2/; } but you will have to take care that anything that is returned to the client that will be interpreted as a url, will be interpreted the way you want it to be. If the internal servers do not make that easy, it will probably not be easy for you. 
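For the redirect (Location/Refresh) response headers specifically, proxy_redirect can map backend URLs back onto the public prefix, and sub_filter can patch simple root-relative links inside HTML bodies. An untested sketch, reusing the addresses from the example above:

```nginx
location /site1/ {
    proxy_pass https://10.1.1.1/;
    # Map redirects issued by the backend back under the public prefix.
    proxy_redirect https://10.1.1.1/ /site1/;
    # Rewrite root-relative links in HTML responses (simple cases only);
    # sub_filter needs an uncompressed upstream response.
    proxy_set_header Accept-Encoding "";
    sub_filter 'href="/' 'href="/site1/';
    sub_filter_once off;
}
```

This only covers the easy cases; urls built in javascript and absolute links will still need attention.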
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Fri Jan 11 03:30:29 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Thu, 10 Jan 2019 22:30:29 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: <4feedf638721bbd68e2ec0d373979bd9.NginxMailingListEnglish@forum.nginx.org> References: <4feedf638721bbd68e2ec0d373979bd9.NginxMailingListEnglish@forum.nginx.org> Message-ID: My current settings are higher except worker_processes worker_processes 1; worker_rlimit_nofile 69152; worker_shutdown_timeout 10s; thread_pool iopool threads=32 max_queue=65536; I think the issue is that nginx accumulates ESTABLISHED and CLOSE_WAIT and FIN_WAIT1 From successive netstat -apn listings I see that it is the CLOSE_WAIT that is sky-rocketing first then eventually ESTABLISHED and FIN_WAIT1 The million dollar question is why Apache httpd is handling this attack situation quite well on the same server while having Nginx as a reverse proxy hangs the web stack by TCP state exhaustion? The symptoms are similar to what is mentioned at https://blog.cloudflare.com/this-is-strictly-a-violation-of-the-tcp-specification/ The only thing is that I don't know what must be changed in the config etc to fix this problem in nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282645#msg-282645 From nginx-forum at forum.nginx.org Fri Jan 11 04:02:36 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Thu, 10 Jan 2019 23:02:36 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: Could multi_accept being on cause this? I have now set multi_accept to off and set up Nginx again as a reverse proxy.
The attack is not ongoing now, so I can't tell immediately whether that setting helps or not Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282646#msg-282646 From peter_booth at me.com Fri Jan 11 04:06:44 2019 From: peter_booth at me.com (Peter Booth) Date: Thu, 10 Jan 2019 23:06:44 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: <4feedf638721bbd68e2ec0d373979bd9.NginxMailingListEnglish@forum.nginx.org> Message-ID: How do you know that this is an attack and not "normal traffic"? How are these requests different from regular requests? What do the weblogs say about the "attack requests"? > On 10 Jan 2019, at 10:30 PM, gnusys wrote: > > My Current settings are higher except the worker_process > > worker_processes 1; > worker_rlimit_nofile 69152; > worker_shutdown_timeout 10s; > thread_pool iopool threads=32 max_queue=65536; > > > I think the issue is that nginx accumulate ESTABLISHED and CLOSE_WAIT and > FIN_WAIT1 > > From successive netstat -apn listing I see that it is the CLOSE_WAIT that is > sky-rocketing first > > then eventually ESTABLISHED and FIN_WAIT1 > > > The million dollar question is why Apache httpd is handling this situation > of attack quite well on the same server while having Nginx as a reverse > proxy hangs the web stack by TCP state exhaustion?
> > The symptoms are similar to what is mentioned at > https://blog.cloudflare.com/this-is-strictly-a-violation-of-the-tcp-specification/ > > Only thing is that I don't know what must be changed in the config etc to > fix this problem in nginx > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282645#msg-282645 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From lists at lazygranch.com Fri Jan 11 04:07:02 2019 From: lists at lazygranch.com (lists at lazygranch.com) Date: Thu, 10 Jan 2019 20:07:02 -0800 Subject: =?UTF-8?Q?Re=3A_I_need_my_=E2=80=9Cbad_user_agent=E2=80=9D_map_not_to_bloc?= =?UTF-8?Q?k_my_rss_xml_file?= In-Reply-To: <20190110085033.z5uctkk5hnthebkn@daoine.org> References: <20190108193044.6c1a26a7.lists@lazygranch.com> <20190109082005.734l7ulotv2im6vp@daoine.org> <20190109181404.72ee0f9f.lists@lazygranch.com> <20190110085033.z5uctkk5hnthebkn@daoine.org> Message-ID: <20190110200702.04041999.lists@lazygranch.com> On Thu, 10 Jan 2019 08:50:33 +0000 Francis Daly wrote: > On Wed, Jan 09, 2019 at 06:14:04PM -0800, lists at lazygranch.com wrote: > > Hi there, > > > location / { > > if ($badagent) { return 403; } > > } > > location = /feeds { > > try_files $uri $uri.xml $uri/ ; > > } > > > The "=" should force an exact match, but the badagent map is > > checked. > > > > Absolutely the badagent check under location / is being triggered. > > Everything works if I comment out the check. > > > > The URL to request the XML file is domain.com/feeds/file.xml . > > If the request is /feeds/file.xml, that will not exactly match > "/feeds". > > location = /feeds/file.xml {} > > should serve the file feeds/file.xml below the document root. > > Or, if you want to handle all requests that start with /feeds/ in a > similar way, > > location /feeds/ {} > > or > > location ^~ /feeds/ {} > > should do that. 
(The two are different if you have regex locations in > the config.) > > f There is a certain irony in that I first started out with location /feeds/ {} BUT I had the extra root statement. This appears to work. Thanks. Here are a few tests: claws rssyl plugin 200 xxx.58.22.151 - - [11/Jan/2019:03:48:25 +0000] "GET /feeds/feed.xml HTTP/1.1" 3614 "-" "libfeed 0.1" "-" akragator 200 xxx.58.22.151 - - [11/Jan/2019:03:50:39 +0000] "GET /feeds/feed.xml HTTP/1.1" 3614 "-" "-" "-" liferea 200 xxx.58.22.151 - - [11/Jan/2019:03:51:40 +0000] "GET /feeds/feed.xml HTTP/1.1" 3614 "-" "Liferea/1.10.19 (Linux; en_US.UTF-8; http://liferea.sf.net/) AppleWebKit (KHTML, like Gecko)" "-" read on android 304 myvpnip - - [11/Jan/2019:03:55:44 +0000] "GET /feeds/feed.xml HTTP/1.1" 0 "-" "Mozilla/5.0 (Linux; Android 8.1.0; myphone) AppleWebKit/537.36 (KHTML, like Gecko) Version/4.0 Chrome/71.0.3578.99 Mobile Safari/537.36" "-" feedbucket (a web based reader) They use a proxy 200 162.246.57.122 - - [11/Jan/2019:04:01:06 +0000] "GET /feeds/feed.xml HTTP/1.1" 3614 "-" "FeedBucket/1.0 \x5C(+http://www.feedbucket.com\x5C)" "-" From nginx-forum at forum.nginx.org Fri Jan 11 04:19:10 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Thu, 10 Jan 2019 23:19:10 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: <21b4ba3c1c8b0a3327394c270754d1a4.NginxMailingListEnglish@forum.nginx.org> The domain is proxied over cloudflare and the access log shows a large number of requests to the website from the cloudflare servers 121115 162.158.88.4 121472 162.158.89.99 121697 162.158.90.176 122265 162.158.91.97 122969 162.158.93.113 125020 162.158.91.103 126132 162.158.90.194 128913 162.158.91.25 128980 162.158.93.89 the requests were all GET / and the rate at which it is done mostly is extremely high pointing to a Layer 7 attack We cant block the cloudflare IP's on the server as other sites (its a shared hosting 
server) may be using Cloudflare. At the moment the target IP on the server is blocked at the network level. Luckily the domain was using a dedicated IP. As I already said, Apache handles this pretty well; the only small issue I see is the server load getting a bit above normal and the Apache scoreboard getting filled up, but with Nginx the entire web stack freezes with the CLOSE_WAIT state and ESTABLISHED state extremely high and we can bring back things to normal only after disabling Nginx. Once Nginx is disabled, the CLOSE_WAIT and ESTABLISHED states clear off immediately too Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282649#msg-282649 From nginx-forum at forum.nginx.org Fri Jan 11 04:32:47 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Thu, 10 Jan 2019 23:32:47 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: <329286adff0c14541acd8dda3759fdb4.NginxMailingListEnglish@forum.nginx.org> The TCP state graph for the situation is: https://i.imgur.com/USECPtc.png You can see at 16:55 the FIN_WAIT1, CLOSE_WAIT and ESTABLISHED take a steep climb. At this point Nginx hangs as the server has a script that checks stub status and this doesn't finish. The server itself and all other services like sshd work pretty fine, except Nginx. At 17:06 you can see the connection states drop steeply too.
This is the point where Nginx is stopped and Apache takes over as the web server. The ESTABLISHED and CLOSE_WAIT drop fully and the FIN_WAIT1 reduces slowly and finally attains normalcy at 17:15. Apache continued working fine with the same amount of external traffic and the same firewall setup Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282650#msg-282650 From peter_booth at me.com Fri Jan 11 04:35:10 2019 From: peter_booth at me.com (Peter Booth) Date: Thu, 10 Jan 2019 23:35:10 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: <21b4ba3c1c8b0a3327394c270754d1a4.NginxMailingListEnglish@forum.nginx.org> References: <21b4ba3c1c8b0a3327394c270754d1a4.NginxMailingListEnglish@forum.nginx.org> Message-ID: 1. What does GET / return? 2. You said that nginx was configured as a reverse proxy. Is / proxied to a back-end? 3. Does GET / return the same content to different users? 4. Is the user-agent identical for these suspicious requests? Sent from my iPhone > On Jan 10, 2019, at 11:19 PM, gnusys wrote: > > The domain is proxied over cloudflare and the access log shows a large > number of requests to the website from the cloudflare servers > > 121115 162.158.88.4 > 121472 162.158.89.99 > 121697 162.158.90.176 > 122265 162.158.91.97 > 122969 162.158.93.113 > 125020 162.158.91.103 > 126132 162.158.90.194 > 128913 162.158.91.25 > 128980 162.158.93.89 > > the requests were all GET / and the rate at which it is done mostly is > extremely high pointing to a Layer 7 attack > > We cant block the cloudflare IP's on the server as other sites (its a shared > hosting server) may be using Cloudflare .
At the moment the target IP on the > server is blocked at the network level. Luckily the domain was using a > dedicated IP > > As I already said, Apache handles this pretty well; the only small issue I > see is the server load getting a bit above normal and the Apache scoreboard > getting filled up, but with Nginx the entire web stack freezes with the > CLOSE_WAIT state and ESTABLISHED state extremely high and we can bring back > things to normal only after disabling Nginx. Once Nginx is disabled, the > CLOSE_WAIT and ESTABLISHED states clear off immediately too > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282649#msg-282649 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From peter_booth at me.com Fri Jan 11 04:40:56 2019 From: peter_booth at me.com (Peter Booth) Date: Thu, 10 Jan 2019 23:40:56 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: <329286adff0c14541acd8dda3759fdb4.NginxMailingListEnglish@forum.nginx.org> References: <329286adff0c14541acd8dda3759fdb4.NginxMailingListEnglish@forum.nginx.org> Message-ID: <79C7E2FC-1A72-4360-9853-654F2EF254B5@me.com> Is your nginx/Apache site visible on the internet without any authentication? If so, I recommend that you access your site directly (not through Cloudflare) with redbot.org, which is the best HTTP debugger ever, for both the nginx and Apache versions of the site, and see how they compare. Why isn't your root page / being served from your CDN edge? Is Cloudflare configured to not cache /? If so, why?
Sent from my iPhone > On Jan 10, 2019, at 11:32 PM, gnusys wrote: > > The TCP state graph for the situation is: > > https://i.imgur.com/USECPtc.png > > You can see at 16:55 the FIN_WAIT1 ,CLOSE_WAIT and ESTABLISHED takes a steep > climb, At this point Nginx hangs as the server has a script that checks stub > status and this doesn't finish. The server itself and all other services > like sshd works pretty fine ,except Nginx > > At 17:06 you can see the connection states drop steeply too. this is the > point where Nginx is stopped and Apache takes over as the webserver > The ESTABLISHED and CLOSE_WAIT drop fully and the FIN_WAIT1 reduces slowly > and finally attain normalcy at 17:15 > > Apache continued working fine with the same amount of external traffic and > same firewall setup > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282650#msg-282650 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Jan 11 04:56:46 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Thu, 10 Jan 2019 23:56:46 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: <79C7E2FC-1A72-4360-9853-654F2EF254B5@me.com> References: <79C7E2FC-1A72-4360-9853-654F2EF254B5@me.com> Message-ID: <90143190cbfc2499a18ea6edfd55198a.NginxMailingListEnglish@forum.nginx.org> It's a shared server and I am unable to modify the domain's Cloudflare/DNS settings. As I said, the question mostly is why Nginx is freezing for the same setup and traffic while Apache handles it just fine. If it's an issue with the Nginx settings, what should I change? Or is it a bug in Nginx?
Nginx was used as a reverse proxy on the server just to ease off situations like this, and the server uses limit_conn, limit_req, etc. settings and low timeouts, but the TCP state exhaustion is making the server unusable Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282653#msg-282653 From nginx-forum at forum.nginx.org Fri Jan 11 10:12:04 2019 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 11 Jan 2019 05:12:04 -0500 Subject: .service ExecStartPre in example Message-ID: <294c32de48323c5400c017093a56ce7b.NginxMailingListEnglish@forum.nginx.org> What's the purpose of testing the configuration file in the systemd example? Just starting the server seems simpler.. and the test isn't run prior to a restart request. ExecStartPre=/usr/sbin/nginx -t https://www.nginx.com/resources/wiki/start/topics/examples/systemd/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282654,282654#msg-282654 From lucas at lucasrolff.com Fri Jan 11 10:22:28 2019 From: lucas at lucasrolff.com (Lucas Rolff) Date: Fri, 11 Jan 2019 10:22:28 +0000 Subject: .service ExecStartPre in example In-Reply-To: <294c32de48323c5400c017093a56ce7b.NginxMailingListEnglish@forum.nginx.org> References: <294c32de48323c5400c017093a56ce7b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <975749C5-BF3C-42FB-9E2E-9228918C9A28@lucasrolff.com> There's nothing wrong with testing the configuration before starting the web server. The config is tested during restart, by the ExecStartPre. If you modify a config and you want to restart, you should execute nginx -t prior to restarting your service - but generally you'd want to use nginx -s reload as much as possible. On 11/01/2019, 11.12, "nginx on behalf of Olaf van der Spek" wrote: What's the purpose of testing the configuration file in the systemd example? Just starting the server seems simpler.. and the test isn't run prior to a restart request.
ExecStartPre=/usr/sbin/nginx -t https://www.nginx.com/resources/wiki/start/topics/examples/systemd/ Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282654,282654#msg-282654 _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Jan 11 10:28:09 2019 From: nginx-forum at forum.nginx.org (Olaf van der Spek) Date: Fri, 11 Jan 2019 05:28:09 -0500 Subject: .service ExecStartPre in example In-Reply-To: <975749C5-BF3C-42FB-9E2E-9228918C9A28@lucasrolff.com> References: <975749C5-BF3C-42FB-9E2E-9228918C9A28@lucasrolff.com> Message-ID: Lucas Rolff Wrote: ------------------------------------------------------- > There's nothing wrong with testing the configuration before starting > the web server. Sure, but what effect does it have in the .service file? > The config is tested during restart, by the ExecStartPre. If you But only after the old instance is stopped.. so again, what's the purpose? > modify a config and you want to restart, you should execute nginx -t > prior to restarting your service - but generally you'd want to use > nginx -s reload as much as possible. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282654,282656#msg-282656 From leonard at pproj.de Fri Jan 11 10:44:30 2019 From: leonard at pproj.de (Leonard) Date: Fri, 11 Jan 2019 11:44:30 +0100 Subject: Default TLS v1.3 support in nginx 1.13.0+ Pre-Built packages Message-ID: <8fb9cd30-5efc-8abe-778c-1aa6e01979a3@pproj.de> Hello, I would be glad if nginx 1.13.0+ were to be compiled by default with openssl 1.1.1, so that by default the TLS v1.3 support is activated. This concerns the Pre-Built packages in the nginx repository. I'm running Debian GNU/Linux 9.6 (stretch). Information on `nginx -V`: */nginx version: nginx/1.14.2 built by gcc 6.3.0 20170516 (Debian 6.3.0-18+deb9u1) built with OpenSSL 1.1.0f? 25 May 2017 (running with OpenSSL 1.1.0j? 
20 Nov 2018) TLS SNI support enabled configure arguments: --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --pid-path=/var/run/nginx.pid --lock-path=/var/run/nginx.lock --http-client-body-temp-path=/var/cache/nginx/client_temp --http-proxy-temp-path=/var/cache/nginx/proxy_temp --http-fastcgi-temp-path=/var/cache/nginx/fastcgi_temp --http-uwsgi-temp-path=/var/cache/nginx/uwsgi_temp --http-scgi-temp-path=/var/cache/nginx/scgi_temp --user=nginx --group=nginx --with-compat --with-file-aio --with-threads --with-http_addition_module --with-http_auth_request_module --with-http_dav_module --with-http_flv_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_mp4_module --with-http_random_index_module --with-http_realip_module --with-http_secure_link_module --with-http_slice_module --with-http_ssl_module --with-http_stub_status_module --with-http_sub_module --with-http_v2_module --with-mail --with-mail_ssl_module --with-stream --with-stream_realip_module --with-stream_ssl_module --with-stream_ssl_preread_module --with-cc-opt='-g -O2 -fdebug-prefix-map=/data/builder/debuild/nginx-1.14.2/debian/debuild-base/nginx-1.14.2=. 
-specs=/usr/share/dpkg/no-pie-compile.specs -fstack-protector-strong -Wformat -Werror=format-security -Wp,-D_FORTIFY_SOURCE=2 -fPIC' --with-ld-opt='-specs=/usr/share/dpkg/no-pie-link.specs -Wl,-z,relro -Wl,-z,now -Wl,--as-needed -pie'/* Repository used: */deb http://nginx.org/packages/debian/ stretch nginx deb-src http://nginx.org/packages/debian/ stretch nginx/* With best Regards Leonard Mustafa From zn1314 at 126.com Fri Jan 11 11:59:47 2019 From: zn1314 at 126.com (David Ni) Date: Fri, 11 Jan 2019 19:59:47 +0800 (CST) Subject: share cookies between servers In-Reply-To: <20190110230131.7g7bvc3nq3gskrv3@daoine.org> References: <7c4f60dd.75ec.168370a6e4d.Coremail.zn1314@126.com> <20190110230131.7g7bvc3nq3gskrv3@daoine.org> Message-ID: <674f1e07.9017.1683cc8522b.Coremail.zn1314@126.com> Hi Francis, Thanks very much for your point! I have read some info from internet based on your suggestion,for my understanding: when I login to one of the server datanode02.bddev.test.net,set cookie like this: server { listen 80; server_name datanode02.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; location / { proxy_pass http://datanode02:16010/; more_clear_headers "X-Frame-options"; add_header Set-Cookie "myauth=true;Domain=.bddev.test.net;Path=/;Max-Age=31536000"; sub_filter_types *; sub_filter_once off; } } then in datanode03.bddev.test.net configuration: server { listen 80; server_name datanode03.bddev.test.net; error_log /var/log/nginx/error_for_bigdata.log info; access_log /var/log/nginx/http_access_for_bigdata.log main; #this will skip the ldap auth if ( $http_cookie ~* "myauth=true" ) { auth_ldap "Restricted Space"; auth_ldap_servers bigdataldap; } location / { proxy_pass http://datanode03:16010/; more_clear_headers "X-Frame-options"; add_header Set-Cookie "myauth=true;Domain=.bddev.test.net;Path=/;Max-Age=31536000"; sub_filter_types 
*; sub_filter_once off; } } am I correct? At 2019-01-11 07:01:31, "Francis Daly" wrote: >On Thu, Jan 10, 2019 at 05:14:17PM +0800, David Ni wrote: > >Hi there, > >> I have one requirement right now,we are using nginx with ldap auth >... >> my requirement is that whether datanode02.bddev.test.net datanode03.bddev.test.net >can share cookies between each other, > >Read about http cookies, and the "domain" attribute/directive of them. > >If you decide that the benefits to you are worth more than the costs to >you, then find whatever part of your system sets the cookies (creates >the Set-Cookie: header), and change that to add a suitable "Domain=" string. > >That part of your system is probably not nginx-provided C-code. > >Good luck with it, > > f >-- >Francis Daly francis at daoine.org >_______________________________________________ >nginx mailing list >nginx at nginx.org >http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Fri Jan 11 13:46:23 2019 From: nginx-forum at forum.nginx.org (pietdinu) Date: Fri, 11 Jan 2019 08:46:23 -0500 Subject: set up TLS/ DTLS terminations for TCP/UDP connections Message-ID: <80e9aa29d86f7f8b9d36310660d257c0.NginxMailingListEnglish@forum.nginx.org> Hi all, I need to set up TLS/ DTLS terminations for TCP/UDP connections. The Ingress should be the solution to expose our services via TCP/UDP connections with TLS/ DTLS terminations. I'm using nginx version: 1.15.3 Is it possible to set up TLS/DTLS terminations for TCP/UDP connections? Thanks in advance for your collaboration and help. 
Best regards, Pietro Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282659,282659#msg-282659 From maxim at nginx.com Fri Jan 11 14:02:33 2019 From: maxim at nginx.com (Maxim Konovalov) Date: Fri, 11 Jan 2019 17:02:33 +0300 Subject: set up TLS/ DTLS terminations for TCP/UDP connections In-Reply-To: <80e9aa29d86f7f8b9d36310660d257c0.NginxMailingListEnglish@forum.nginx.org> References: <80e9aa29d86f7f8b9d36310660d257c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <5ac0ebf0-16fa-d5a3-e6f6-9bcd42912e6b@nginx.com> Hi Pietro, On 11/01/2019 16:46, pietdinu wrote: > Hi all, > > I need to set up TLS/ DTLS terminations for TCP/UDP connections. > The Ingress should be the solution to expose our services via TCP/UDP > connections with TLS/ DTLS terminations. > I'm using nginx version: 1.15.3 > > Is it possible to set up TLS/DTLS terminations for TCP/UDP connections? > It is possible to do TLS termination for TCP traffic. You can find more information on this topic here: https://docs.nginx.com/nginx/admin-guide/security-controls/terminating-ssl-tcp/ The documentation is for nginx-plus but there is no difference here with nginx. For UDP the situation is cumbersome. We had an experimental patch for that a while ago http://nginx.org/patches/dtls/ but failed to find any real use cases, therefore we stopped maintaining it. The second patch should work with nginx-1.13.9 though. We'll be grateful for more information about your specific usage: a brief overview, what kind of backends you use, etc. Thanks, Maxim -- Maxim Konovalov From nginx-forum at forum.nginx.org Fri Jan 11 16:35:38 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Fri, 11 Jan 2019 11:35:38 -0500 Subject: Location Message-ID: <72b419f589819c41397d90f37ad202d5.NginxMailingListEnglish@forum.nginx.org> Hi All, I am trying to redirect to an internal server behind nginx that I really have no permissions to change.
When you browse to the site normally at https://www.devapp.com for example it redirects to https://www.devapp.com/webaccess and it comes up fine. When I put in the nginx entries for it below and try to browse to it through nginx such as https://www.mysite.com/webaccess the site comes up all garbled and missing some images. When I checked the structure of the site, I noticed that the images directory is above the webaccess directory. Any ideas on how I can get the site to display correctly through nginx? I hope I supplied clear enough information. Thanks. upstream devapp { server 192.168.1.18:443; } location /webaccess/ { proxy_pass https://devapp; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282663,282663#msg-282663 From mailinglist at unix-solution.de Fri Jan 11 18:09:52 2019 From: mailinglist at unix-solution.de (basti) Date: Fri, 11 Jan 2019 19:09:52 +0100 Subject: Location In-Reply-To: <72b419f589819c41397d90f37ad202d5.NginxMailingListEnglish@forum.nginx.org> References: <72b419f589819c41397d90f37ad202d5.NginxMailingListEnglish@forum.nginx.org> Message-ID: <309d003e-390e-a5c8-2b85-0dda4e6e5183@unix-solution.de> On 11.01.19 17:35, petrosetta wrote: > Hi All > I am trying to redirect to an internal server behind nginx that I really > have no permissions to change. When you browse to the site normally at > https://www.devapp.com for example it redirects to > https://www.devapp.com/webaccess and it comes up fine. When I put in the > nginx entries for it below and try to browse to it through nginx such as > https://www.mysite.com/webaccess the site comes up all garbled and missing
When I checked the structure of the site, I noticed that > the images directory is above the webaccess directory. Any ideas on how I > can get the site to display correctly through nginx? I hope I supplied clear > enough information. Thanks. > > upstream devapp { > server 192.168.1.18:443; > } > > location /webaccess/ { > proxy_pass https://devapp; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > add_header X-Frame-Options SAMEORIGIN; > add_header X-Content-Type-Options nosniff; > add_header X-XSS-Protection "1; mode=block"; > add_header Strict-Transport-Security > "max-age=31536000;includeSubDomains" always; > } Hello, https://www.devapp.com redirects to https://www.mysite.com/webaccess, right? Please show us the complete server block and an "image" URL for example. If I understand it correctly, you must also redirect / to the proxy. From nginx-forum at forum.nginx.org Fri Jan 11 18:51:54 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Fri, 11 Jan 2019 13:51:54 -0500 Subject: Location In-Reply-To: <309d003e-390e-a5c8-2b85-0dda4e6e5183@unix-solution.de> References: <309d003e-390e-a5c8-2b85-0dda4e6e5183@unix-solution.de> Message-ID: Hi, thanks so much for replying. Below is the block and upstream entry. Also, let's say without NGINX I bring up the site at https://mysite.domain.com/webaccess, when I click on an image, the url is https://mysite.domain.com/name_of_image.
upstream devapp { server 192.168.1.18:443; } server { listen 443 ssl http2 default_server; server_tokens off; more_clear_headers Server; server_name www.mydomain.com; ssl on; ssl_certificate ssl/certificate.crt; ssl_certificate_key ssl/www.mydomain.com.key; ssl_dhparam ssl/dhparams.pem; ssl_ecdh_curve secp384r1; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate ssl/certificate.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 10s; ssl_protocols TLSv1.3 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; ssl_session_cache shared:SSL:1m; ssl_session_timeout 1h; ssl_session_tickets off; add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always; access_log /var/log/nginx/access.log main; log_not_found on; location /webaccess/ { proxy_pass https://devapp; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282663,282665#msg-282665 From mailinglist at unix-solution.de Fri Jan 11 19:20:23 2019 From: mailinglist at unix-solution.de (basti) Date: Fri, 11 Jan 2019 20:20:23 +0100 Subject: Location In-Reply-To: References: <309d003e-390e-a5c8-2b85-0dda4e6e5183@unix-solution.de> Message-ID: <509b7905-9817-ff12-bc30-07c38eb82882@unix-solution.de> On 11.01.19 19:51, petrosetta wrote: > HI > Thanks so much for replying. Below is the block and upstream entry. Also, > let's say without NGINX I bring up the site at > https://mysite.domain.com/webaccess, when I click on an image, the url is > https://mysite.domain.com/name_of_image. 
> > upstream devapp { > server 192.168.1.18:443; > } > > location /webaccess/ { > proxy_pass https://devapp; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > add_header X-Frame-Options SAMEORIGIN; > add_header X-Content-Type-Options nosniff; > add_header X-XSS-Protection "1; mode=block"; > add_header Strict-Transport-Security > "max-age=31536000;includeSubDomains" always; > } > } > As I wrote before, you must also redirect /. For example, something like this should work: location / { # next line is optional, depending on your app rewrite /webaccess/(.*)$ /webaccess/$1 break; proxy_pass https://devapp; ... } If you have access to the logs of the server that hosts mysite.domain.com, have a look at them to see what's wrong if you still have problems. From nginx-forum at forum.nginx.org Fri Jan 11 19:44:17 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Fri, 11 Jan 2019 14:44:17 -0500 Subject: Location In-Reply-To: <509b7905-9817-ff12-bc30-07c38eb82882@unix-solution.de> References: <509b7905-9817-ff12-bc30-07c38eb82882@unix-solution.de> Message-ID: It works perfectly. Thanks very much. If you could bear with me a little though, what if I wanted to also put the prod web site behind nginx? I can't use more than one root location, so how could that be done. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282663,282668#msg-282668 From mailinglist at unix-solution.de Fri Jan 11 20:23:51 2019 From: mailinglist at unix-solution.de (basti) Date: Fri, 11 Jan 2019 21:23:51 +0100 Subject: Location In-Reply-To: References: <509b7905-9817-ff12-bc30-07c38eb82882@unix-solution.de> Message-ID: <15c58d1c-e2f1-3cd3-a95a-8ef9f1552c92@unix-solution.de> On 11.01.19 20:44, petrosetta wrote: > It works perfectly. Thanks very much. If you could bear with me a little > though, what if I wanted to also put the prod web site behind nginx? I can't > use more than one root location, so how could that be done.
The only > difference in the name is that prod is prodapp instead of devapp. > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282663,282668#msg-282668 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > That depends on what you want. You can do something like: www.example.com -> redirect to www.example.com/stage -> (proxy) redirect to production www.example.com/testing -> (proxy) redirect to dev or www.example.com -> (proxy) redirect to production dev.example.com -> (proxy) redirect to dev or whatever. An example would be better. :-) From nginx-forum at forum.nginx.org Fri Jan 11 21:04:33 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Fri, 11 Jan 2019 16:04:33 -0500 Subject: Location In-Reply-To: <15c58d1c-e2f1-3cd3-a95a-8ef9f1552c92@unix-solution.de> References: <15c58d1c-e2f1-3cd3-a95a-8ef9f1552c92@unix-solution.de> Message-ID: <091f8832bae4cd49e7c99d9bb960d787.NginxMailingListEnglish@forum.nginx.org> :) Understood. Well, ideally what I would like is: 1. An outside user hits https://www.mydomain.com; he hits NGINX, which redirects him to the internal site https://app.mydomain.com. 2. An outside user hits https://www.mydomain.com/webdev; NGINX redirects him to the internal site https://devapp.mydomain.com Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282663,282672#msg-282672 From mdounin at mdounin.ru Sat Jan 12 02:43:07 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 12 Jan 2019 05:43:07 +0300 Subject: Default TLS v1.3 support in nginx 1.13.0+ Pre-Built packages In-Reply-To: <8fb9cd30-5efc-8abe-778c-1aa6e01979a3@pproj.de> References: <8fb9cd30-5efc-8abe-778c-1aa6e01979a3@pproj.de> Message-ID: <20190112024307.GA99070@mdounin.ru> Hello!
On Fri, Jan 11, 2019 at 11:44:30AM +0100, Leonard via nginx wrote: > I would be glad if nginx 1.13.0+ were to be compiled by default with > openssl 1.1.1, so that by default the TLS v1.3 support is activated. > This concerns the Pre-Built packages in the nginx repository. > > I'm running Debian GNU/Linux 9.6 (stretch). Packages in the nginx repository are compiled with OpenSSL as shipped by default in the particular OS. As such, nginx on Debian 9 is compiled with OpenSSL 1.1.0f. OpenSSL 1.1.1 is currently available in Ubuntu 18.10, and the corresponding nginx packages available from nginx.org are built with OpenSSL 1.1.1. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Sat Jan 12 03:30:13 2019 From: nginx-forum at forum.nginx.org (gnusys) Date: Fri, 11 Jan 2019 22:30:13 -0500 Subject: Nginx hang and do not respond with large number of network connection in FIN_WAIT state In-Reply-To: References: Message-ID: <40a8b4a19a4e0538a5148d4e6c545b26.NginxMailingListEnglish@forum.nginx.org> Set multi_accept off And the web service did not hang this time https://i.imgur.com/irbA5MO.png But the connections in CLOSE_WAIT and LAST_ACK got a spike Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282613,282675#msg-282675 From mailinglist at unix-solution.de Sat Jan 12 16:14:48 2019 From: mailinglist at unix-solution.de (basti) Date: Sat, 12 Jan 2019 17:14:48 +0100 Subject: Location In-Reply-To: <091f8832bae4cd49e7c99d9bb960d787.NginxMailingListEnglish@forum.nginx.org> References: <15c58d1c-e2f1-3cd3-a95a-8ef9f1552c92@unix-solution.de> <091f8832bae4cd49e7c99d9bb960d787.NginxMailingListEnglish@forum.nginx.org> Message-ID: <312ec56c-a219-bcee-7d32-d35e408417e5@unix-solution.de> On 11.01.19 22:04, petrosetta wrote: > 2. An outside user hits https://www.mydomain.com/webdev NGINX redirects him > to the internal site https://devapp.mydomain.com What does an image link look like in this case?
Have you tried creating a location /webdev and redirecting it to devapp in your config? From francis at daoine.org Sat Jan 12 18:13:06 2019 From: francis at daoine.org (Francis Daly) Date: Sat, 12 Jan 2019 18:13:06 +0000 Subject: share cookies between servers In-Reply-To: <674f1e07.9017.1683cc8522b.Coremail.zn1314@126.com> References: <7c4f60dd.75ec.168370a6e4d.Coremail.zn1314@126.com> <20190110230131.7g7bvc3nq3gskrv3@daoine.org> <674f1e07.9017.1683cc8522b.Coremail.zn1314@126.com> Message-ID: <20190112181306.tqqgvtb4h5axnovg@daoine.org> On Fri, Jan 11, 2019 at 07:59:47PM +0800, David Ni wrote: Hi there, > auth_ldap "Restricted Space"; > auth_ldap_servers bigdataldap; > > location / { > proxy_pass http://datanode02:16010/; > add_header Set-Cookie "myauth=true;Domain=.bddev.test.net;Path=/;Max-Age=31536000"; > } > then in datanode03.bddev.test.net configuration: > if ( $http_cookie ~* "myauth=true" ) { > auth_ldap "Restricted Space"; > auth_ldap_servers bigdataldap; > } > location / { > proxy_pass http://datanode03:16010/; > add_header Set-Cookie "myauth=true;Domain=.bddev.test.net;Path=/;Max-Age=31536000"; > } > } > am I correct? I suspect "no". I don't know what your "normal" works-on-a-single-server auth_ldap system looks like. (http://nginx.org/r/auth_ldap suggests that it is not a default-provided module.) If your normal system involves you doing add_header Set-Cookie "myauth=true;Path=/;Max-Age=31536000"; then you are correct to add the "Domain=" bit here. But I would expect that the config in the two server{} blocks will be very similar. So either the "if" part should be in both servers, or in neither server. If your single-server config includes it, it should be included in the multi-server config too. f -- Francis Daly francis at daoine.org From pierre at couderc.eu Sun Jan 13 09:01:32 2019 From: pierre at couderc.eu (Pierre Couderc) Date: Sun, 13 Jan 2019 10:01:32 +0100 Subject: How to avoid nginx failure in case of bad certificate ?
Message-ID: In case of a bad certificate (certificate file missing, for example), nginx fails to restart. Is there a way to avoid that? There may be an error on one site without stopping all other correct sites. This occurs particularly in case we remove an old site, and make an error in the configuration files: a few months later, Let's Encrypt tries to renew the certificate in the night and fails, then restarts nginx, which fails because of a missing, no-longer-used certificate, and the full site is stopped.... Thanks. PC From anoopalias01 at gmail.com Sun Jan 13 13:20:09 2019 From: anoopalias01 at gmail.com (Anoop Alias) Date: Sun, 13 Jan 2019 18:50:09 +0530 Subject: How to avoid nginx failure in case of bad certificate ? In-Reply-To: References: Message-ID: You can reload instead of restart, and nginx will continue working on the old config if the new one is invalid On Sun, Jan 13, 2019 at 2:31 PM Pierre Couderc wrote: > In case of a bad certificate (certificate file missing, for example), nginx > fails to restart. > > Is there a way to avoid that? > > There may be an error on one site without stopping all other correct sites. > > This occurs particularly in case we remove an old site, and make an > error in the configuration files: a few months later, Let's Encrypt tries > to renew the certificate in the night and fails, then restarts nginx, > which fails because of a missing, no-longer-used certificate, and the full > site is stopped.... > > Thanks. > > PC > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun Jan 13 14:13:23 2019 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sun, 13 Jan 2019 19:13:23 +0500 Subject: Set browser cache to current month!
Message-ID: Hi, We've a location like /school for which we want to set browser cache lifetime as 'current month'. Suppose /school is accessed on 10th January, its cache should be valid until end of January not 10th February and if accessed on 25th February its validity must be until the end of February month. Is it possible in Nginx ? Regards. Shahzaib -------------- next part -------------- An HTML attachment was scrubbed... URL: From richarddemeny at gmail.com Sun Jan 13 14:57:41 2019 From: richarddemeny at gmail.com (Richard Demeny) Date: Sun, 13 Jan 2019 14:57:41 +0000 Subject: Set browser cache to current month! In-Reply-To: References: Message-ID: browser cache = client-side cache. nginx cache = server-side cache. Set cache expiry flag ? On Sunday, January 13, 2019, shahzaib mushtaq wrote: > Hi, > > We've a location like /school for which we want to set browser cache > lifetime as 'current month'. Suppose /school is accessed on 10th January, > its cache should be valid until end of January not 10th February and if > accessed on 25th February its validity must be until the end of February > month. > > Is it possible in Nginx ? > > Regards. > Shahzaib > -------------- next part -------------- An HTML attachment was scrubbed... URL: From shahzaib.cb at gmail.com Sun Jan 13 15:12:56 2019 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Sun, 13 Jan 2019 20:12:56 +0500 Subject: Set browser cache to current month! In-Reply-To: References: Message-ID: Currently I'm using these lines to configure http caching within my Nginx: location /schools { expires 2d; add_header Cache-Control public; ... } On Sun, Jan 13, 2019 at 7:57 PM Richard Demeny wrote: > browser cache = client-side cache. > > nginx cache = server-side cache. > > Set cache expiry flag ? > > On Sunday, January 13, 2019, shahzaib mushtaq > wrote: > >> Hi, >> >> We've a location like /school for which we want to set browser cache >> lifetime as 'current month'. 
Suppose /school is accessed on 10th January, >> its cache should be valid until end of January not 10th February and if >> accessed on 25th February its validity must be until the end of February >> month. >> >> Is it possible in Nginx ? >> >> Regards. >> Shahzaib >> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Sun Jan 13 22:20:50 2019 From: peter_booth at me.com (Peter Booth) Date: Sun, 13 Jan 2019 17:20:50 -0500 Subject: Set browser cache to current month! In-Reply-To: References: Message-ID: <0D811F04-4D55-4028-A141-54798B4821E1@me.com> If you use the openresty nginx distribution then you can write a few lines of Lua to implement your custom logic. Sent from my iPhone > On Jan 13, 2019, at 9:13 AM, shahzaib mushtaq wrote: > > Hi, > > We've a location like /school for which we want to set browser cache lifetime as 'current month'. Suppose /school is accessed on 10th January, its cache should be valid until end of January not 10th February and if accessed on 25th February its validity must be until the end of February month. > > Is it possible in Nginx ? > > Regards. > Shahzaib > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Mon Jan 14 03:18:06 2019 From: nginx-forum at forum.nginx.org (nevereturn01) Date: Sun, 13 Jan 2019 22:18:06 -0500 Subject: Use sub-url to identify the different server In-Reply-To: <20190110234805.wkznq2k3sxahfj22@daoine.org> References: <20190110234805.wkznq2k3sxahfj22@daoine.org> Message-ID: <95dde6612dc154b67cf61b4ed3b4ea37.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thanks for your reply. 
Since I'm not in the website development team, I cannot let them change the url structure:( Now, I'm trying to use URL rewrite. I've tried the following: ============================== location /site1 { rewrite ^/site1/(.*) /$1 break; proxy_pass https://10.1.1.1; ============================== However, it didn't work and I got an HTTP 404 error. If URL rewrite can help in this scenario, is there anything wrong with my rewrite rule? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282615,282690#msg-282690 From pierre at couderc.eu Mon Jan 14 06:16:29 2019 From: pierre at couderc.eu (Pierre Couderc) Date: Mon, 14 Jan 2019 07:16:29 +0100 Subject: How to avoid nginx failure in case of bad certificate ? In-Reply-To: References: Message-ID: <67f0b56e-8d58-ccaf-bd14-b660e4a14f3d@couderc.eu> Thank you very much... Such a simple solution! On 1/13/19 2:20 PM, Anoop Alias wrote: > You can reload instead of restart, and nginx will continue working on > the old config if the new one is invalid > > On Sun, Jan 13, 2019 at 2:31 PM Pierre Couderc > wrote: > > In case of a bad certificate (certificate file missing, for example), > nginx > fails to restart. > > Is there a way to avoid that? > > There may be an error on one site without stopping all other > correct sites. > > This occurs particularly in case we remove an old site, and make an > error in the configuration files: a few months later, Let's Encrypt > tries > to renew the certificate in the night and fails, then restarts nginx, > which fails because of a missing, no-longer-used certificate, and the full > site is stopped.... > > Thanks. > > PC > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > *Anoop P Alias* > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From shahzaib.cb at gmail.com Mon Jan 14 06:32:44 2019 From: shahzaib.cb at gmail.com (shahzaib mushtaq) Date: Mon, 14 Jan 2019 11:32:44 +0500 Subject: Set browser cache to current month! In-Reply-To: <0D811F04-4D55-4028-A141-54798B4821E1@me.com> References: <0D811F04-4D55-4028-A141-54798B4821E1@me.com> Message-ID: Hi Peter, Thanks for help, can you direct me to some tutorial to help me do that ? I am new to Lua . On Mon, Jan 14, 2019 at 3:21 AM Peter Booth via nginx wrote: > If you use the openresty nginx distribution then you can write a few lines > of Lua to implement your custom logic. > > Sent from my iPhone > > > On Jan 13, 2019, at 9:13 AM, shahzaib mushtaq > wrote: > > > > Hi, > > > > We've a location like /school for which we want to set browser cache > lifetime as 'current month'. Suppose /school is accessed on 10th January, > its cache should be valid until end of January not 10th February and if > accessed on 25th February its validity must be until the end of February > month. > > > > Is it possible in Nginx ? > > > > Regards. > > Shahzaib > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Mon Jan 14 14:07:36 2019 From: francis at daoine.org (Francis Daly) Date: Mon, 14 Jan 2019 14:07:36 +0000 Subject: Use sub-url to identify the different server In-Reply-To: <95dde6612dc154b67cf61b4ed3b4ea37.NginxMailingListEnglish@forum.nginx.org> References: <20190110234805.wkznq2k3sxahfj22@daoine.org> <95dde6612dc154b67cf61b4ed3b4ea37.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190114140736.wuhys7fkatzco7ed@daoine.org> On Sun, Jan 13, 2019 at 10:18:06PM -0500, nevereturn01 wrote: Hi there, > Since I'm not in the website develop team, I cannot let them change the url > structure:( That's a shame. Whether is easy (or even possible) to reverse-proxy the "site1" content at a different part of the url hierarchy is almost entirely down to the "site1" content. > Now, I'm tring to use URL rewrite. > I've tried the following: > ============================== > location /site1 { > rewrite ^/site1/(.*) /$1 break; > proxy_pass https://10.1.1.1; > ============================== > However, it didn't work and I got a Http 404 error. What one request did you make of nginx? What request did nginx make of the upstream server? What response came back? It is usually useful to test using "curl" rather than a full web browser, because it hides less from you. > If URL rewrite can help in this scenario, is there anything wrong with my > rewrite rule? You got a 404. I had suggested config without rewrite. Does that one help at all? (The main difference, I think, between rewrite and no-rewrite configs here, is the effective proxy_redirect config that applies. If you ask nginx for /site1/dir, nginx should ask upstream for /dir, and upstream will probably return 301 with a redirect to /dir/. With your config, your browser will get a redirect to /dir/, to which nginx will probably return 404. With my config, your browser should get a redirect to /site1/dir/, which has a chance of working.) 
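[Editor's note] A minimal sketch of the no-rewrite approach described above, using the upstream address from the thread (whether the proxied application tolerates living under a /site1/ prefix still depends on the application itself):

```nginx
# With a URI part ("/") on proxy_pass, nginx itself replaces the
# matched "/site1/" prefix, and the default proxy_redirect maps
# absolute upstream Location headers back under /site1/.
location /site1/ {
    proxy_pass https://10.1.1.1/;
    proxy_set_header Host $host;
}
```

Note that relative redirects from the upstream ("Location: /dir/") are not rewritten by the default proxy_redirect; those may need an explicit `proxy_redirect / /site1/;` as well.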
f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Jan 14 22:50:19 2019 From: nginx-forum at forum.nginx.org (cboyke) Date: Mon, 14 Jan 2019 17:50:19 -0500 Subject: Reverse proxy - variable Message-ID: I've seen this asked and answered many times - and I've tried many of the suggestions posted, but still not able to get this to work. I'm reverse-proxying to an internal IP inside an AWS VPC - eventually, I'd like to use a JavaScript function to give the correct IP, but for now I'm just hard-coding it in a variable. I've tried all of the below, and none work: server { listen 80; server_name myserver.com; location / { set $with_trailing_slash "http://172.31.17.123:8080/"; #proxy_pass http://172.31.17.123:8080/; #this works no problem. #proxy_pass $with_trailing_slash; # This gets to the upstream but gives a 404; #proxy_pass $with_trailing_slash$request_uri; # This spins forever set $no_trailing_slash "http://172.31.17.123:8080"; #proxy_pass $no_trailing_slash; #spins forever; #proxy_pass $no_trailing_slash$request_uri; #spins forever; set $just_ip 172.31.17.123; #proxy_pass http://$just_ip:8080/; #404; #proxy_pass http://$just_ip:8080/$request_uri; #spins forever; set $ip_in_quotes "172.31.17.123"; #proxy_pass http://$ip_in_quotes:8080/; #404; #proxy_pass http://$ip_in_quotes:8080; # spins forever; proxy_pass http://$iplookup:8080/; # spins forever; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282700,282700#msg-282700 From francis at daoine.org Tue Jan 15 21:58:35 2019 From: francis at daoine.org (Francis Daly) Date: Tue, 15 Jan 2019 21:58:35 +0000 Subject: Reverse proxy - variable In-Reply-To: References: Message-ID: <20190115215835.3jcnfsbmjk65aekw@daoine.org> On Mon, Jan 14, 2019 at 05:50:19PM -0500, cboyke wrote: Hi there, > I've seen this asked and answered many times - and I've tried many of the > suggestions posted, but still not able to get this to work. 
The documentation for proxy_pass is at http://nginx.org/r/proxy_pass That says how things are intended to work when a variable is used. > I'm reverse-proxying to an internal IP inside an AWS VPC - eventually, I'd > like to use a JavaScript function to give the correct IP, but for now I'm > just hard-coding it in a variable. > I've tried all of the below, and none work: Can you show the "curl" command that you use to make the request of nginx, and show the response? It is often useful to use curl instead of a full browser, because it is likely to show something other than "spins forever" -- it may fail to respond, or it may show a http 301 to the same url, or something else. > server { > listen 80; > server_name myserver.com; > location / { > set $with_trailing_slash "http://172.31.17.123:8080/"; > #proxy_pass http://172.31.17.123:8080/; #this works no > problem. > #proxy_pass $with_trailing_slash; # This gets to the > upstream but gives a 404; For what it's worth, proxy_pass http://127.0.0.1:8091; and set $ip 127.0.0.1:8091; proxy_pass http://$ip; work the same as each other for me, while proxy_pass http://127.0.0.1:8091/; and set $ip 127.0.0.1:8091; proxy_pass http://$ip/; do not work the same as each other -- the last bullet point in the documentation explains why that is. > #proxy_pass $with_trailing_slash$request_uri; # This spins > forever > set $no_trailing_slash "http://172.31.17.123:8080"; > #proxy_pass $no_trailing_slash; #spins forever; Do you see a difference between using "proxy_pass $no_trailing_slash;" and "proxy_pass http://172.31.17.123:8080"? 
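[Editor's note] The behaviour Francis points at can be condensed into a short sketch (the address is the one from the thread, and `$backend` is an illustrative name, not the poster's exact fix):

```nginx
# A variable without a URI part: the client's request URI is forwarded
# unchanged, so this behaves like the hard-coded
# "proxy_pass http://172.31.17.123:8080;" form.
location / {
    set $backend 172.31.17.123:8080;
    proxy_pass http://$backend;
}
```

Once a URI part (even just a trailing "/") is combined with a variable, nginx sends exactly the URI you build for every request, which would explain the 404s; to forward the original path explicitly in that case, use `proxy_pass http://$backend$request_uri;`. Also note that when the variable holds a hostname rather than an IP address, nginx resolves it at run time and needs a `resolver` directive.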
f -- Francis Daly francis at daoine.org From anilsunkara at discover.com Tue Jan 15 22:54:26 2019 From: anilsunkara at discover.com (Anil Sunkara) Date: Tue, 15 Jan 2019 22:54:26 +0000 Subject: SignatureDoesNotMatch - S3 upload with KMS encryption fails through Nginx proxy end point Message-ID: Hi, When we try to transfer data from on-prem to S3 with KMS encryption through an nginx proxy endpoint, it fails with error "An error occurred (SignatureDoesNotMatch) when calling the PutObject operation: The request signature calculated does not match the signature you provided. Check your key and signing method". Our Nginx version nginx version: nginx/1.14.0 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-36) (GCC Please advise on the right configuration we need to put in nginx.conf. Do we need any third-party modules for the AWS v4 signature? Thanks Anil. S -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Jan 16 18:20:14 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Wed, 16 Jan 2019 13:20:14 -0500 Subject: 404 Error Message-ID: <79c4e11500986bb59b0d04293feeddc1.NginxMailingListEnglish@forum.nginx.org> Hi All I have NGINX set up to pass requests to an upstream server. The home page and several other pages come up just fine. However, when users request some reports that are in a directory just above the root directory on the upstream server, we get a 404 error from NGINX. Below are the relevant entries. We can see the reports when we bring up the site on the upstream server itself. Is there any way to get NGINX to see the reports in that directory?
upstream devserver { server 192.168.1.22:80; } location /dev/ { proxy_pass http://devserver; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_read_timeout 600s; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always; } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282713,282713#msg-282713 From francis at daoine.org Wed Jan 16 23:33:17 2019 From: francis at daoine.org (Francis Daly) Date: Wed, 16 Jan 2019 23:33:17 +0000 Subject: 404 Error In-Reply-To: <79c4e11500986bb59b0d04293feeddc1.NginxMailingListEnglish@forum.nginx.org> References: <79c4e11500986bb59b0d04293feeddc1.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20190116233317.5xs2jq42neadz4rw@daoine.org> On Wed, Jan 16, 2019 at 01:20:14PM -0500, petrosetta wrote: Hi there, > I have NGINX set up to pass requests to an upstream server. The home page > and several other pages comes up just fine. However, when the users requests > some reports that are in a directory just above the root directory on the > upstream server, we get a 404 error from NGINX. Below are the relevant > entries. We can see the reports when we bring up the site on the upstream > server itself. Is there any way to get NGINX to see the reports in that > directory? You seem to be suggesting that you can successfully access http://devserver/dev/something by going through nginx, but you cannot successfully access http://devserver/something by going through nginx. Is that correct? If so, the answer is something like location / { proxy_pass http://devserver; } or maybe location ~ thing { proxy_pass http://devserver; } You need to decide what requests you want nginx to send to the upstream devserver, and configure location{}s accordingly. 
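[Editor's note] A hedged sketch of the two shapes Francis describes, using the upstream from the thread (the `/reports/` pattern is only a placeholder for whatever URLs the reports actually use):

```nginx
upstream devserver {
    server 192.168.1.22:80;
}

# Option 1: send everything to the upstream.
location / {
    proxy_pass http://devserver;
    proxy_set_header Host $host;
}

# Option 2: send only matching requests, e.g. the report URLs.
location ~ ^/reports/ {
    proxy_pass http://devserver;
    proxy_set_header Host $host;
}
```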
f -- Francis Daly francis at daoine.org From ottavio at campana.vi.it Thu Jan 17 05:58:26 2019 From: ottavio at campana.vi.it (Ottavio Campana) Date: Thu, 17 Jan 2019 06:58:26 +0100 Subject: Best way to terminate handling a request Message-ID: Hello, I am hacking the ngx_http_auth_request module ( https://www.nginx.com/resources/wiki/extending/examples/auth_request/ ) so that as soon as the URL is parsed, I transfer the connection fd to another process through an AF_UNIX socket. Everything is done in the ngx_http_auth_request_handler function. I see the other process receiving the file descriptor; I must now tell nginx to stop processing the request, to close the socket and to do nothing. The response will be generated by the other process. Is invoking ngx_close_socket(r->connection->fd) and returning NGX_ABORT the correct way of doing it? Thank you, Ottavio -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Thu Jan 17 16:48:52 2019 From: nginx-forum at forum.nginx.org (pietdinu) Date: Thu, 17 Jan 2019 11:48:52 -0500 Subject: How to update an existing "nginx load balancer" Message-ID: Hi All, I need to deploy my own microservice, exposing a UDP port, and I would like to use an existing "nginx load balancer" to forward incoming traffic towards my POD. I need to update the existing "nginx load balancer" already used for TCP traffic. This is the nginx LoadBalancer to be updated by adding a UDP rule: ingress-nginx ingress-nginx LoadBalancer 10.107.131.229 80:30272/TCP,443:31308/TCP Could you please suggest the needed configuration in the microservice deployment in order to also update the existing "nginx LoadBalancer"?
Thanks in advance for your collaboration and Best Regards, Pietro Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282733,282733#msg-282733 From roger at netskrt.io Thu Jan 17 19:38:52 2019 From: roger at netskrt.io (Roger Fischer) Date: Thu, 17 Jan 2019 11:38:52 -0800 Subject: Two cache files with same key Message-ID: <23F0AE7B-F186-48EE-855B-2F3819D9D27D@netskrt.io> Hello, I am using the nginx cache. I observed a MISS when I expected a HIT. When I look at the cache, I see the same page (file) twice in the cache, the original that I expected to HIT on, and the new one from the MISS (that should have been a HIT). The key, as viewed in the cache file, is exactly the same. What else, besides the key, is considered in determining a hit or miss? Thanks.
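[Editor's note] One way to check a situation like this is to hash the key yourself: nginx names each cache file after the MD5 of the raw proxy_cache_key value, and with `levels=1:2` the on-disk path is the last hex character of that hash, then the two characters before it, then the full hash. The key below is the .mpd name posted later in this thread; comparing the computed path against the on-disk file names shows whether the stored keys are really byte-identical:

```shell
# Compute the md5 nginx would use for a given cache key.
key='18f1e750-d25d-4ad1-a518-1adad04a9740_corrected.mpd'
hash=$(printf '%s' "$key" | md5sum | cut -d' ' -f1)

# Rebuild the levels=1:2 path: last hex char, then the two before it.
l1=$(printf '%s' "$hash" | cut -c32)
l2=$(printf '%s' "$hash" | cut -c30-31)
echo "$l1/$l2/$hash"
```

If the computed path matches neither file, the raw keys on disk differ in bytes that the "KEY:" dump does not make obvious, for example trailing or non-printable characters, or files written by different cache zones.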
Roger _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx From roger at netskrt.io Thu Jan 17 23:21:29 2019 From: roger at netskrt.io (Roger Fischer) Date: Thu, 17 Jan 2019 15:21:29 -0800 Subject: Two cache files with same key In-Reply-To: References: <23F0AE7B-F186-48EE-855B-2F3819D9D27D@netskrt.io> Message-ID: <6C86724E-EAF4-4800-AD22-A856BEC4959A@netskrt.io> Hello, the key config is: # cache key is last part of URI set $cachekey '$uri'; if ($uri ~ '([^/]+$)') { set $cachekey $1; } proxy_cache_key "$cachekey"; The URI includes query parameters: /dm/2$q7zVD5pG1k6QpMe6Nz7a8uZDx78~/de4c/5433/a0db/4f70-b9f7-413f3f47f38d/18f1e750-d25d-4ad1-a518-1adad04a9740_corrected.mpd?encoding=hex The key from the cache files are: c/2c/ce5e89ff8bbc50ae58040855232242cc KEY: 18f1e750-d25d-4ad1-a518-1adad04a9740_corrected.mpd d/59/a44f4cc7a098be369daac12f1aac759d KEY: 18f1e750-d25d-4ad1-a518-1adad04a9740_corrected.mpd Roger > On Jan 17, 2019, at 12:12 PM, Lucas Rolff wrote: > > What key do you see and what's your cache key configured as? > > ?On 17/01/2019, 20.39, "nginx on behalf of Roger Fischer" wrote: > > Hello, > > I am using the nginx cache. I observed a MISS when I expected a HIT. When I look at the cache, I see the same page (file) twice in the cache, the original that I expected to HIT on, and the new one from the MISS (that should have been a HIT). > > The key, as viewed in the cache file, is exactly the same. > > What else, besides the key is considered in determining a hit or miss? > > Thanks? > > Roger > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Sat Jan 19 05:00:21 2019 From: nginx-forum at forum.nginx.org (mahesh2150) Date: Sat, 19 Jan 2019 00:00:21 -0500 Subject: keepalive_requests and keepalive_timeout for api server Message-ID: How do I use these two fields on an API server? keepalive_timeout 0 keepalive_requests Is keepalive_timeout really needed for an API server? Can the server keep a connection with the frontend server? What is the best configuration for an nginx API server? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282749,282749#msg-282749 From nginx-forum at forum.nginx.org Mon Jan 21 03:13:05 2019 From: nginx-forum at forum.nginx.org (nevereturn01) Date: Sun, 20 Jan 2019 22:13:05 -0500 Subject: Use sub-url to identify the different server In-Reply-To: <20190114140736.wuhys7fkatzco7ed@daoine.org> References: <20190114140736.wuhys7fkatzco7ed@daoine.org> Message-ID: <67435a781771b65c1bf14fb746972267.NginxMailingListEnglish@forum.nginx.org> Hi Francis, Thanks for your reply. Since I'm a newbie to Nginx, I'm sorry that I don't quite understand the questions. Now, we are running a small business, so we don't have any load-balancing or fail-over deployment. So far, we have only 1 Nginx + 1 serverA + 1 serverB. As for the rewrite, I did some research and found that it seems only URL rewriting can help in my scenario. Technically, do you think the URL rewrite rules are correct? Maybe there are some syntax errors? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282615,282757#msg-282757 From nginx-forum at forum.nginx.org Mon Jan 21 14:23:37 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Mon, 21 Jan 2019 09:23:37 -0500 Subject: OWIN Server Error Message-ID: Morning all I don't know if anyone has ever had this problem, but I am attempting to put up an OWIN server running on a Windows 2012 Server with IIS. Whenever we try to log on going through the NGINX server, I get "Invalid login attempt.
Verify that your username and password are correct." I am sure the username and password are correct. When I look in NGINX logs, I am seeing GET /WebAccess/fonts/glyphicons-halflings-regular.woff2 HTTP/2.0" 404 1245 https://mywebsite/Content/css?v=looooong string. My settings for this server in NGINX is the following: listen 443 ssl http2; server_tokens off; more_clear_headers Server; server_name devmachine.mydomain.com; ssl on; ssl_certificate ssl/devmachine/certificate.crt; ssl_certificate_key ssl/devmachine/private-key.pem; ssl_dhparam ssl/dhparams.pem; ssl_ecdh_curve secp384r1; ssl_stapling on; ssl_stapling_verify on; ssl_trusted_certificate ssl/devmachine/certificate-trusted.crt; resolver 8.8.8.8 8.8.4.4 valid=300s; resolver_timeout 10s; ssl_protocols TLSv1.3 TLSv1.2; ssl_prefer_server_ciphers on; ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; ssl_session_cache shared:SSL:1m; ssl_session_timeout 1h; ssl_session_tickets off; add_header Strict-Transport-Security "max-age=31536000;includeSubDomains" always; access_log /var/log/nginx/access.log main; log_not_found on; location / { proxy_pass https://devserver; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_set_header X-Forwarded-Proto $scheme; add_header X-Frame-Options SAMEORIGIN; add_header X-Content-Type-Options nosniff; add_header X-XSS-Protection "1; mode=block"; proxy_ignore_client_abort on; proxy_buffering off; proxy_read_timeout 3600s; proxy_send_timeout 3600s; if ($limit_bots = 1) { return 403; } } Has anyone been able to get NGINX working with OWIN? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282762,282762#msg-282762 From peter_booth at me.com Mon Jan 21 17:05:26 2019 From: peter_booth at me.com (Peter Booth) Date: Mon, 21 Jan 2019 12:05:26 -0500 Subject: OWIN Server Error In-Reply-To: References: Message-ID: Petrosetta, Question: is your nginx server running on the same host as your OWIN / IIS server? With OWIN / IIS listening only on port 80 and nginx only on port 443? And both listening on the physical NIC (not localhost) and no firewall? It looks as though you want to do SSL termination and HTTP/2 with nginx and proxy everything to OWIN/IIS - is that correct? If it were me, I'd try to bisect the problem: 1. first repeat the test request (ideally with curl) as an HTTP 1.1 request 2. If that didn't work, configure a second VirtualHost in nginx on port 8080 that has no SSL and request that with curl Also, your config suggests that your web server might be internet visible. If it is, I would suggest that you try accessing these test URLs, and also directly accessing your IIS, using the redbot.org HTTP validator. Good luck, Peter > On 21 Jan 2019, at 9:23 AM, petrosetta wrote: > > Morning all > I don;t know if anyone has ever had this problem bit I am attempting to put > an OWIN Server running on a Windows 2012 Server with IIS. Whenever we try to > log on going through the NGINX server, I get "Invalid login attempt. Verify > that your username and password are correct." I am sure the username and > password are correct. When I look in NGINX logs, I am seeing GET > /WebAccess/fonts/glyphicons-halflings-regular.woff2 HTTP/2.0" 404 1245 > https://mywebsite/Content/css?v=looooong string.
> > My settings for this server in NGINX is the following: > > listen 443 ssl http2; > server_tokens off; > more_clear_headers Server; > server_name devmachine.mydomain.com; > ssl on; > ssl_certificate ssl/devmachine/certificate.crt; > ssl_certificate_key ssl/devmachine/private-key.pem; > ssl_dhparam ssl/dhparams.pem; > ssl_ecdh_curve secp384r1; > ssl_stapling on; > ssl_stapling_verify on; > ssl_trusted_certificate ssl/devmachine/certificate-trusted.crt; > resolver 8.8.8.8 8.8.4.4 valid=300s; > resolver_timeout 10s; > ssl_protocols TLSv1.3 TLSv1.2; > ssl_prefer_server_ciphers on; > ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH'; > ssl_session_cache shared:SSL:1m; > ssl_session_timeout 1h; > ssl_session_tickets off; > add_header Strict-Transport-Security > "max-age=31536000;includeSubDomains" always; > access_log /var/log/nginx/access.log main; > log_not_found on; > > location / { > proxy_pass https://devserver; > proxy_set_header Host $host; > proxy_set_header X-Real-IP $remote_addr; > proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; > proxy_set_header X-Forwarded-Proto $scheme; > add_header X-Frame-Options SAMEORIGIN; > add_header X-Content-Type-Options nosniff; > add_header X-XSS-Protection "1; mode=block"; > proxy_ignore_client_abort on; > proxy_buffering off; > proxy_read_timeout 3600s; > proxy_send_timeout 3600s; > if ($limit_bots = 1) { > return 403; > } > } > > Has anyone been able to get NGINX working with OWIN? > > Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282762,282762#msg-282762 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Mon Jan 21 18:21:34 2019 From: nginx-forum at forum.nginx.org (petrosetta) Date: Mon, 21 Jan 2019 13:21:34 -0500 Subject: OWIN Server Error In-Reply-To: References: Message-ID: <6fb81a9daf266094e97ed78e5560f1bc.NginxMailingListEnglish@forum.nginx.org> Thanks so much for replying. OWIN is on a separate box from NGINX. Below is my traffic flow. Internet------->Sonicwall------------(forward port 443)-------------------------------------->NGINX---------->IIS with OWIN OWIN listens on ports 80 and 443 I believe. Our developers did the site. It has a legitimate certificate installed also. They both listen on the physical NIC yes. Also yes I want to do SSL termination and HTTP/2 with nginx and proxy everything to OWIN/IIS. When I expose the site on the internet it works fine but when I put it (logically) behind NGINX, I get the error. The site still comes up even when behind NGINX but the error happens when I try to log in. Keeps saying Invalid login attempt. Not sure how to use curl to log into a site but I will look it up. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282762,282766#msg-282766 From avd-84 at yandex.ru Mon Jan 21 20:59:25 2019 From: avd-84 at yandex.ru (=?utf-8?B?0JDQu9C10LrRgdC10Lk=?=) Date: Mon, 21 Jan 2019 23:59:25 +0300 Subject: SSl Errors Message-ID: <27506781548104365@sas2-a271c7163765.qloud-c.yandex.net> An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Jan 22 03:43:01 2019 From: nginx-forum at forum.nginx.org (Roar) Date: Mon, 21 Jan 2019 22:43:01 -0500 Subject: grpc keepalive does not effect, nginx will close connection by the minute Message-ID: <1558b3dadf3177d5ef0a71a9e1a70770.NginxMailingListEnglish@forum.nginx.org> A gRPC server is sitting at my backend; I use nginx as the proxy to translate the HTTP/1.1 to HTTP/2 (gRPC) protocol. I set the parameters like below: upstream ID_PUMPER { server 127.0.0.1:58548; } server { listen 8080 http2; grpc_read_timeout 120s; grpc_send_timeout 120s; grpc_socket_keepalive on; keepalive_timeout 100s; location /utoProto.idProduce.IdProduce { grpc_pass grpc://ID_PUMPER; } } It will establish a connection between nginx and the gRPC server, but it cannot hold the connection: the connection will be closed after one minute even though I set the keepalive_timeout parameter. By monitoring the network packets with Wireshark, I found that nginx sends a FIN packet to the gRPC server. How can I hold the connection open instead of closing it every minute? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282769,282769#msg-282769 From nginx-forum at forum.nginx.org Tue Jan 22 10:00:05 2019 From: nginx-forum at forum.nginx.org (Surjo) Date: Tue, 22 Jan 2019 05:00:05 -0500 Subject: Nginx + php-fpm on centos Message-ID: <2dae4879ca3bc03751100186c8782d85.NginxMailingListEnglish@forum.nginx.org> Please help me to fully set up nginx + php-fpm + WordPress on CWP. I've installed WordPress on my subdomain with nginx + php-fpm on CentOS 7 and set up the rewrite function. But when creating a new post, the post is saved as a draft and doesn't update; creating or updating a post redirects to the all-posts page; many plugins are not working properly; and when I make a change and hit the save button I get this error: "There is an error on your site, please reload this page." Please help me... my sub-domain is blog.kholifa.com, and there are two config files: 1.
etc/nginx/config.d/vhosts/blog.kholifa.com.conf & 2. etc/nginx/config.d/vhosts/blog.kholifa.com.ssl.conf It has already been a week; I have been surfing many blogs/forums but have failed to resolve my issue. Please help me. Thanks in advance… Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282772,282772#msg-282772 From nginx-forum at forum.nginx.org Tue Jan 22 10:11:05 2019 From: nginx-forum at forum.nginx.org (Surjo) Date: Tue, 22 Jan 2019 05:11:05 -0500 Subject: Nginx + php-fpm on centos In-Reply-To: <2dae4879ca3bc03751100186c8782d85.NginxMailingListEnglish@forum.nginx.org> References: <2dae4879ca3bc03751100186c8782d85.NginxMailingListEnglish@forum.nginx.org> Message-ID: When saving from a plugin's settings: "There was a problem with your action. Please try again or reload the page." Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282772,282773#msg-282773 From pluknet at nginx.com Tue Jan 22 12:01:29 2019 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 22 Jan 2019 15:01:29 +0300 Subject: grpc keepalive does not effect, nginx will close connection by the minute In-Reply-To: <1558b3dadf3177d5ef0a71a9e1a70770.NginxMailingListEnglish@forum.nginx.org> References: <1558b3dadf3177d5ef0a71a9e1a70770.NginxMailingListEnglish@forum.nginx.org> Message-ID: > On 22 Jan 2019, at 06:43, Roar wrote: > > Grpc server is sitting at my backends, I use nginx as the proxy to transfer > http1.1 to http2(grpc) protocol, I set parameter like below: > > upstream ID_PUMPER { > server 127.0.0.1:58548; > } > > server { > listen 8080 http2; > grpc_read_timeout 120s; > grpc_send_timeout 120s; > grpc_socket_keepalive on; > keepalive_timeout 100s; > > location /utoProto.idProduce.IdProduce { > grpc_pass grpc://ID_PUMPER; > } > } > > It will establish connection between nginx and grpc server, but it can not > hold the connection and the collection will be closed after one minute even > though I set keepalive_timeout parameter To tune the keepalive timeout for upstream
connections, you need the keepalive_timeout directive of the upstream module, that is, specified in the upstream{} block. See for details: http://nginx.org/en/docs/http/ngx_http_upstream_module.html#keepalive_timeout -- Sergey Kandaurov From mdounin at mdounin.ru Tue Jan 22 13:59:42 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 22 Jan 2019 16:59:42 +0300 Subject: SSl Errors In-Reply-To: <27506781548104365@sas2-a271c7163765.qloud-c.yandex.net> References: <27506781548104365@sas2-a271c7163765.qloud-c.yandex.net> Message-ID: <20190122135942.GC1877@mdounin.ru> Hello! On Mon, Jan 21, 2019 at 11:59:25PM +0300, Алексей wrote: > I use nginx 1.13.6 as server for mutual tls auth with clients certs Note that 1.13.6 is a mainline version which has not been supported since the release of 1.13.7 on 21 Nov 2017. You may want to upgrade to a more recent version, e.g., the latest mainline is 1.15.8. > During ab test I get errors ssl read failed(5) closing connection Error 5 is SSL_ERROR_SYSCALL, which suggests that further information is available in errno; ab does not try to test/log errno. You may want to use strace / ktrace / truss to find out which error actually happened.
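Once strace (or ktrace/truss) has yielded a numeric errno, it can be decoded back to its symbolic name with a short Python sketch using only the standard library (the helper name here is illustrative, not from the thread):

```python
import errno
import os


def describe_errno(code: int) -> str:
    """Translate a numeric errno (e.g. from strace output) into its symbolic name and message."""
    name = errno.errorcode.get(code, "UNKNOWN")
    return f"{name} ({code}): {os.strerror(code)}"


# ECONNRESET is the classic "peer reset the connection" errno seen in such traces.
print(describe_errno(errno.ECONNRESET))
```

The exact message text from os.strerror() is platform-dependent, but the symbolic name is stable across systems.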
> In nginx log (debug mode) I get > > 2019/01/21 23:50:01 [debug] 26#26: *27497 http check ssl handshake > 2019/01/21 23:50:01 [debug] 26#26: *27497 http recv(): 1 > 2019/01/21 23:50:01 [debug] 26#26: *27497 https ssl handshake: 0x16 > 2019/01/21 23:50:01 [debug] 26#26: *27497 tcp_nodelay > 2019/01/21 23:50:01 [debug] 26#26: *27497 SSL server name: "meteotravel.ru" > 2019/01/21 23:50:01 [debug] 26#26: *27497 SSL_do_handshake: -1 > 2019/01/21 23:50:01 [debug] 26#26: *27497 SSL_get_error: 2 > 2019/01/21 23:50:01 [debug] 26#26: *27497 reusable connection: 0 > 2019/01/21 23:50:02 [debug] 26#26: *27497 SSL handshake handler: 0 > 2019/01/21 23:50:02 [debug] 26#26: *27497 SSL_do_handshake: -1 > 2019/01/21 23:50:02 [debug] 26#26: *27497 SSL_get_error: 5 > 2019/01/21 23:50:02 [info] 26#26: *27497 peer closed connection in SSL handshake while SSL handshaking, client: 10.244.5.0, server: 0.0.0.0:443 From nginx's point of view, the connection was closed by the client. The error as returned by OpenSSL is SSL_ERROR_SYSCALL, and errno is 0, so it is not logged. This indicates a clean TCP-level connection close by the other side. -- Maxim Dounin http://mdounin.ru/ From avd-84 at yandex.ru Tue Jan 22 19:58:55 2019 From: avd-84 at yandex.ru (Алексей) Date: Tue, 22 Jan 2019 22:58:55 +0300 Subject: SSl Errors In-Reply-To: <20190122135942.GC1877@mdounin.ru> References: <27506781548104365@sas2-a271c7163765.qloud-c.yandex.net> <20190122135942.GC1877@mdounin.ru> Message-ID: <248221548187135@iva4-24c534b4e3ac.qloud-c.yandex.net> An HTML attachment was scrubbed...
URL: From karljohnson.it at gmail.com Tue Jan 22 21:25:08 2019 From: karljohnson.it at gmail.com (Karl Johnson) Date: Tue, 22 Jan 2019 16:25:08 -0500 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: <8a1bea5bdddcbebadf4fcbb0edf236a0.NginxMailingListEnglish@forum.nginx.org> References: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> <3f55ca9764b07d02fa15ec2ecc629cf1.NginxMailingListEnglish@forum.nginx.org> <8a1bea5bdddcbebadf4fcbb0edf236a0.NginxMailingListEnglish@forum.nginx.org> Message-ID: On Mon, Oct 1, 2018 at 4:49 AM anish10dec wrote: > In both the cases , either geoip2 or ip2location we will have to compile > Nginx to support . > > Currently we are using below two RPM's from Nginx Repository > (http://nginx.org/packages/mainline/centos/7/x86_64/RPMS/) > nginx-1.10.2-1.el7.ngx.x86_64 > nginx-module-geoip-1.10.2-1.el7.ngx.x86_64 > > Is the rpm module available or is there any plan to make it available. > > Hello, I'm building nginx RPM for CentOS 6 and 7 which now include ngx_http_geoip2_module. If this package can help and you want to test it on a new server, just add the yum repository and install it from the testing repo: el7 #> yum install https://repo.aerisnetwork.com/stable/centos/7/x86_64/aeris-release-1.0-4.el7.noarch.rpm el7 #> yum --enablerepo=aeris-testing install nginx-more You can find more information about this package on GitHub: https://github.com/karljohns0n/nginx-more Karl -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger at netskrt.io Wed Jan 23 00:48:23 2019 From: roger at netskrt.io (Roger Fischer) Date: Tue, 22 Jan 2019 16:48:23 -0800 Subject: Slow OPTIONS requests with reverse proxy cache Message-ID: <0B54441D-B814-404C-9807-3D16F1455CAF@netskrt.io> Hello, I have noticed that the response to OPTIONS requests via a reverse proxy cache are quite slow. The browser reports 600 ms or more of idle time until NGINX provides the response. 
I am using the community edition of NGINX, so I don't have any timing for the upstream request. As I understand it, NGINX bypasses the cache for OPTIONS requests. Thus it should be straight proxying. A direct request (browser directly to origin) is fast, in the 10s of milliseconds or less. What could delay the OPTIONS request in NGINX? Thanks… Roger From nginx-forum at forum.nginx.org Wed Jan 23 02:37:13 2019 From: nginx-forum at forum.nginx.org (Roar) Date: Tue, 22 Jan 2019 21:37:13 -0500 Subject: grpc keepalive does not effect, nginx will close connection by the minute In-Reply-To: References: Message-ID: <8694a206ddd5cd7d7e01a2005df10416.NginxMailingListEnglish@forum.nginx.org> Thanks Sergey Kandaurov. The second problem is that I set grpc_read_timeout and grpc_send_timeout, but they seem to have no effect. I tested many times and found that it works if read_timeout is less than the default 60s. But it has no effect when read_timeout is more than 60s: nginx will automatically close the connection between nginx and the grpc server every 60s. How should I configure the parameters to fix this issue? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282769,282788#msg-282788 From pluknet at nginx.com Wed Jan 23 10:56:44 2019 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 23 Jan 2019 13:56:44 +0300 Subject: grpc keepalive does not effect, nginx will close connection by the minute In-Reply-To: <8694a206ddd5cd7d7e01a2005df10416.NginxMailingListEnglish@forum.nginx.org> References: <8694a206ddd5cd7d7e01a2005df10416.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3340EF50-B92A-4859-B464-0A5937D46235@nginx.com> > On 23 Jan 2019, at 05:37, Roar wrote: > > Thanks Sergey Kandaurov. > The second problem is that I set grpc_read_timeout and grpc_send_timeout but > it seems does not take effect. I tested many times and found that if the > read_timeout less than default 60s, then it works.
But it has no effect when > read_timeout more than 60s, nginx will automatically close the connection > between nginx and grpc server by every 60s. How to configure parameters to > fix this issue? There're different timeouts when communicating with gRPC server, but only grpc_read_timeout is set when reading a gRPC response. Please double-check to make sure: 1) there are no external events that result in connection close; 2) connection is not closed by another timeout such as keepalive timeout, after a whole response had been read. To sort out what's going on, it is useful to have debug log, see http://nginx.org/en/docs/debugging_log.html -- Sergey Kandaurov From mdounin at mdounin.ru Wed Jan 23 13:23:17 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Jan 2019 16:23:17 +0300 Subject: Slow OPTIONS requests with reverse proxy cache In-Reply-To: <0B54441D-B814-404C-9807-3D16F1455CAF@netskrt.io> References: <0B54441D-B814-404C-9807-3D16F1455CAF@netskrt.io> Message-ID: <20190123132316.GH1877@mdounin.ru> Hello! On Tue, Jan 22, 2019 at 04:48:23PM -0800, Roger Fischer wrote: > I have noticed that the response to OPTIONS requests via a > reverse proxy cache are quite slow. The browser reports 600 ms > or more of idle time until NGINX provides the response. I am > using the community edition of NGINX, so I don?t have any timing > for the upstream request. You do have timing for the upstream request, see the following variables: http://nginx.org/r/$upstream_response_time http://nginx.org/r/$upstream_connect_time http://nginx.org/r/$upstream_header_time You can configure logging of these variables using the log_format directive. It is also a good idea to configure logging of generic request processing time, $request_time. 
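A minimal log_format using these variables might look like the following (the format name, field labels, and log path are illustrative only; adjust to taste):

```nginx
log_format timing '$remote_addr "$request" $status '
                  'req=$request_time up_connect=$upstream_connect_time '
                  'up_header=$upstream_header_time up_resp=$upstream_response_time';

access_log /var/log/nginx/timing.log timing;
```

With such a format in place, a slow OPTIONS request can be attributed either to the upstream (large $upstream_* values) or to nginx and the network in between ($request_time much larger than $upstream_response_time).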
-- Maxim Dounin http://mdounin.ru/ From roger at netskrt.io Wed Jan 23 17:03:33 2019 From: roger at netskrt.io (Roger Fischer) Date: Wed, 23 Jan 2019 09:03:33 -0800 Subject: Slow OPTIONS requests with reverse proxy cache In-Reply-To: <20190123132316.GH1877@mdounin.ru> References: <0B54441D-B814-404C-9807-3D16F1455CAF@netskrt.io> <20190123132316.GH1877@mdounin.ru> Message-ID: <4E4A7548-6846-4182-B761-C8F6F57889B5@netskrt.io> I am using the community version of NGINX, where these variables are not available (I was quite disappointed by that). Roger > On Jan 23, 2019, at 5:23 AM, Maxim Dounin wrote: > > Hello! > > On Tue, Jan 22, 2019 at 04:48:23PM -0800, Roger Fischer wrote: > >> I have noticed that the response to OPTIONS requests via a >> reverse proxy cache are quite slow. The browser reports 600 ms >> or more of idle time until NGINX provides the response. I am >> using the community edition of NGINX, so I don?t have any timing >> for the upstream request. > > You do have timing for the upstream request, see the following > variables: > > http://nginx.org/r/$upstream_response_time > http://nginx.org/r/$upstream_connect_time > http://nginx.org/r/$upstream_header_time > > You can configure logging of these variables using the log_format > directive. It is also a good idea to configure logging of generic > request processing time, $request_time. 
> > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Wed Jan 23 18:00:15 2019 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 23 Jan 2019 21:00:15 +0300 Subject: Slow OPTIONS requests with reverse proxy cache In-Reply-To: <4E4A7548-6846-4182-B761-C8F6F57889B5@netskrt.io> References: <0B54441D-B814-404C-9807-3D16F1455CAF@netskrt.io> <20190123132316.GH1877@mdounin.ru> <4E4A7548-6846-4182-B761-C8F6F57889B5@netskrt.io> Message-ID: <20190123180015.GK1877@mdounin.ru> Hello! On Wed, Jan 23, 2019 at 09:03:33AM -0800, Roger Fischer wrote: > I am using the community version of NGINX, where these variables > are not available (I was quite disappointed by that). Again: these variables are available in all known variants of nginx. In particular, the $upstream_response_time variable was introduced in nginx 0.3.8, released in 2005, and available even in really ancient nginx versions. -- Maxim Dounin http://mdounin.ru/ From roger at netskrt.io Thu Jan 24 02:02:09 2019 From: roger at netskrt.io (Roger Fischer) Date: Wed, 23 Jan 2019 18:02:09 -0800 Subject: Slow OPTIONS requests with reverse proxy cache In-Reply-To: <20190123180015.GK1877@mdounin.ru> References: <0B54441D-B814-404C-9807-3D16F1455CAF@netskrt.io> <20190123132316.GH1877@mdounin.ru> <4E4A7548-6846-4182-B761-C8F6F57889B5@netskrt.io> <20190123180015.GK1877@mdounin.ru> Message-ID: Oops, my mistake. I was looking at the wrong logs. The upstream connect and response times are indeed reported. Now that I have analyzed the timing logs, I see that there is a networking issue between the proxy and the origin. Roger > On Jan 23, 2019, at 10:00 AM, Maxim Dounin wrote: > > Hello! > > On Wed, Jan 23, 2019 at 09:03:33AM -0800, Roger Fischer wrote: > >> I am using the community version of NGINX, where these variables >> are not available (I was quite disappointed by that). 
> > Again: these variables are available in all known variants of > nginx. In particular, the $upstream_response_time variable was > introduced in nginx 0.3.8, released in 2005, and available even in > really ancient nginx versions. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Thu Jan 24 03:05:21 2019 From: nginx-forum at forum.nginx.org (Roar) Date: Wed, 23 Jan 2019 22:05:21 -0500 Subject: grpc keepalive does not effect, nginx will close connection by the minute In-Reply-To: <3340EF50-B92A-4859-B464-0A5937D46235@nginx.com> References: <3340EF50-B92A-4859-B464-0A5937D46235@nginx.com> Message-ID: <34c1b938ba6ce5d2100c757b9fae77a1.NginxMailingListEnglish@forum.nginx.org> I recompiled nginx with the debug module following your instructions and checked all keepalive_timeout parameters. I tested my application at 2019/01/24 10:49:53 and it responded correctly, and the connection between nginx and the grpc server was closed, as expected, at 2019/01/24 10:50:53. What's sad is that I don't understand the nginx debug log output. Could you help me analyze it? Here are my nginx and application confs and the debug output log: [my nginx.conf] #user nobody; worker_processes 2; error_log logs/error.log debug; pid ./nginx.pid; events { worker_connections 1024; } http { include mime.types; default_type application/octet-stream; sendfile on; keepalive_timeout 120; server { listen 80; server_name localhost; location / { root html; index index.html index.htm; } error_page 500 502 503 504 /50x.html; location = /50x.html { root html; } } include /export/apps/middlewares/confd/conf.d/application.conf; } [my application.conf]
upstream ID_PUMPER { server 127.0.0.1:58548; } server { listen 8080 http2; grpc_read_timeout 300s; grpc_send_timeout 300s; location /utoProto.idProduce.IdProduce { grpc_pass grpc://ID_PUMPER; } } ?Debug output log? 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 2019/01/24 10:49:53 [debug] 36108#0: kevent events: 1 2019/01/24 10:49:53 [debug] 36109#0: kevent: 10: ft:-1 fl:0005 ff:00000000 d:1 ud:00007FBA2F01A868 2019/01/24 10:49:53 [debug] 36108#0: kevent: 10: ft:-1 fl:0005 ff:00000000 d:1 ud:00007FBA2E02FC68 2019/01/24 10:49:53 [debug] 36109#0: accept on 0.0.0.0:8080, ready: 1 2019/01/24 10:49:53 [debug] 36108#0: accept on 0.0.0.0:8080, ready: 1 2019/01/24 10:49:53 [debug] 36108#0: accept() not ready (35: Resource temporarily unavailable) 2019/01/24 10:49:53 [debug] 36108#0: timer delta: 16721 2019/01/24 10:49:53 [debug] 36108#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: posix_memalign: 00007FBA2D500D90:512 @16 2019/01/24 10:49:53 [debug] 36108#0: kevent timer: -1, changes: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 accept: 127.0.0.1:50905 fd:3 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 3: 60000:266893796 2019/01/24 10:49:53 [debug] 36109#0: *10 reusable connection: 1 2019/01/24 10:49:53 [debug] 36109#0: *10 kevent set event: 3: ft:-1 fl:0025 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 16721 2019/01/24 10:49:53 [debug] 36109#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 60000, changes: 1 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 2019/01/24 10:49:53 [debug] 36109#0: kevent: 3: ft:-1 fl:0025 ff:00000000 d:99 ud:00007FBA2F01A938 2019/01/24 10:49:53 [debug] 36109#0: *10 init http2 connection 2019/01/24 10:49:53 [debug] 36109#0: malloc: 000000010737C000:262144 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2D700180:512 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2F003000:4096 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 add cleanup: 00007FBA2D500E98 2019/01/24 
10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2D700580:512 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 send SETTINGS frame 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 send WINDOW_UPDATE frame sid:0, window:2147418112 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 read handler 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:99, err:0 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:3 368 of 262112 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 preface verified 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:4 f:0 l:36 sid:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 SETTINGS frame 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 2:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 3:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 4:4194304 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 5:4194304 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 6:8192 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 65027:1 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete pos:000000010737C045 end:000000010737C170 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:8 f:0 l:4 sid:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 WINDOW_UPDATE frame sid:0 window:4128769 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete pos:000000010737C052 end:000000010737C170 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:6 f:0 l:8 sid:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 PING frame 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete pos:000000010737C063 end:000000010737C170 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:1 f:4 l:242 sid:1 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 HEADERS frame sid:1 depends on 0 excl:0 weight:16 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2E02E600:1024 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2E02EA00:4096 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 
posix_memalign: 00007FBA2E800400:4096 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 get indexed header: 6 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":scheme: http" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 get indexed header: 3 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":method: POST" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:10 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:14 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: ":authority: 127.0.0.1:8080" 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2D6005B0:512 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2E801400:4096 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 56 free:4096 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":authority: 127.0.0.1:8080" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:5 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:42 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: ":path: /utoProto.idProduce.IdProduce/getUniqueIds" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 79 free:4040 2019/01/24 10:49:53 [debug] 36109#0: *10 http uri: "/utoProto.idProduce.IdProduce/getUniqueIds" 2019/01/24 10:49:53 [debug] 36109#0: *10 http args: "" 2019/01/24 10:49:53 [debug] 36109#0: *10 http exten: "" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":path: /utoProto.idProduce.IdProduce/getUniqueIds" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:2 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:8 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "te: trailers" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 42 free:3961 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "te: trailers" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:12 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:16 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 
table add: "content-type: application/grpc" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 60 free:3919 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "content-type: application/grpc" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:10 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:31 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "user-agent: grpc-c/6.0.0 (osx; chttp2; gao)" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 73 free:3859 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "user-agent: grpc-c/6.0.0 (osx; chttp2; gao)" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:20 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:21 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "grpc-accept-encoding: identity,deflate,gzip" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 73 free:3786 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "grpc-accept-encoding: identity,deflate,gzip" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:15 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:13 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "accept-encoding: identity,gzip" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 60 free:3713 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "accept-encoding: identity,gzip" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 request line: "POST /utoProto.idProduce.IdProduce/getUniqueIds HTTP/2.0" 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 rewrite phase: 1 2019/01/24 10:49:53 [debug] 36109#0: *10 test location: "/utoProto.idProduce.IdProduce" 2019/01/24 10:49:53 [debug] 36109#0: *10 using configuration "/utoProto.idProduce.IdProduce" 2019/01/24 10:49:53 [debug] 36109#0: *10 http cl:-1 max:1048576 2019/01/24 10:49:53 [debug] 36109#0: *10 rewrite phase: 3 2019/01/24 10:49:53 [debug] 36109#0: *10 
post rewrite phase: 4 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 5 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 6 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 7 2019/01/24 10:49:53 [debug] 36109#0: *10 access phase: 8 2019/01/24 10:49:53 [debug] 36109#0: *10 access phase: 9 2019/01/24 10:49:53 [debug] 36109#0: *10 post access phase: 10 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 11 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 12 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2D837800:65536 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 3: 60000:266893810 2019/01/24 10:49:53 [debug] 36109#0: *10 http init upstream, client timer: 1 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":method: POST" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":scheme: http" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":path: /utoProto.idProduce.IdProduce/getUniqueIds" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":authority: ID_PUMPER" 2019/01/24 10:49:53 [debug] 36109#0: *10 http script copy: "" 2019/01/24 10:49:53 [debug] 36109#0: *10 http script copy: "TE" 2019/01/24 10:49:53 [debug] 36109#0: *10 http script var: "trailers" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "te: trailers" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "content-type: application/grpc" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "user-agent: grpc-c/6.0.0 (osx; chttp2; gao)" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "grpc-accept-encoding: identity,deflate,gzip" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "accept-encoding: identity,gzip" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: 
505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00001204000000000000010000000000020000000000047fffffff0000040800000000007fff00000000aa0104000000018386449f62d49f5d8749d7349aec3c9690abe4935d8792d215898a9e151bb5a5c9223f4188c97e2d7c346bc1b700027465864d833505b11f008921ea496a4ac9f5597f8b1d75d0620d263d4c4d65640087b505b161cc5a93989acac8b11871702e053fa3a3cfda849d29ac5f6a4c33ff7f008e9acac8b0c842d6958b510f21aa9b903485a9264fafa90b2d03497ea6f66aff008b19085ad2b16a21e435537f8a3485a9264fafa9bd9abf, len: 243 2019/01/24 10:49:53 [debug] 36109#0: *10 http cleanup add: 00007FBA2E801220 2019/01/24 10:49:53 [debug] 36109#0: *10 get rr peer, try: 1 2019/01/24 10:49:53 [debug] 36109#0: *10 stream socket 5 2019/01/24 10:49:53 [debug] 36109#0: *10 connect to 127.0.0.1:58548, fd:5 #11 2019/01/24 10:49:53 [debug] 36109#0: *10 kevent set event: 5: ft:-1 fl:0025 2019/01/24 10:49:53 [debug] 36109#0: *10 kevent set event: 5: ft:-2 fl:0025 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream connect: -2 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2D500BB0:128 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 5: 60000:266893810 2019/01/24 10:49:53 [debug] 36109#0: *10 http finalize request: -4, "/utoProto.idProduce.IdProduce/getUniqueIds?" 
a:1, c:2 2019/01/24 10:49:53 [debug] 36109#0: *10 http request count:2 blk:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete pos:000000010737C15E end:000000010737C170 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:0 f:0 l:9 sid:1 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 DATA frame 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer: 3, old: 266893810, new: 266893810 2019/01/24 10:49:53 [debug] 36109#0: *10 post event 00007FBA2F003400 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete pos:000000010737C170 end:000000010737C170 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003268 sid:0 bl:0 len:8 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F0031B8 sid:0 bl:0 len:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003108 sid:0 bl:0 len:4 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003050 sid:0 bl:0 len:18 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 66 of 66 2019/01/24 10:49:53 [debug] 36109#0: *10 tcp_nodelay 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003050 sid:0 bl:0 len:18 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003108 sid:0 bl:0 len:4 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F0031B8 sid:0 bl:0 len:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003268 sid:0 bl:0 len:8 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer del: 3: 266893796 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 14 2019/01/24 10:49:53 [debug] 36109#0: posted event 00007FBA2F003400 2019/01/24 10:49:53 [debug] 36109#0: *10 delete posted event 00007FBA2F003400 2019/01/24 10:49:53 [debug] 36109#0: *10 http run request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream check client, write event:0, "/utoProto.idProduce.IdProduce/getUniqueIds" 2019/01/24 10:49:53 [debug] 36109#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 60000, changes: 2 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 2 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-1 fl:0025 ff:00000000 d:9 ud:00007FBA2F01A9A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process header 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2E802400:4096 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2E00A600:4096 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:9, err:0 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:5 9 of 4096 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc response: 000000040000000000, len: 9 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: 00007FBA2D500C30:128 @16 2019/01/24 10:49:53 [debug] 36109#0: *10 add cleanup: 00007FBA2D500C00 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 4, len: 0, f:0, i:0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc send settings ack 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:0, err:0 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-2 fl:0025 ff:00000000 d:146988 ud:00007FBA2F0349A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body 2019/01/24 10:49:53 [debug] 36109#0: *10 tcp_nodelay 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output header 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65535 w:65535:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 00007FBA2E801008, pos 00007FBA2E801008, size: 243 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 00007FBA2E00A6D0, pos 00007FBA2E00A6D0, size: 9 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65535 w:65535:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:243 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:9 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 00007FBA2E00A700 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 252 of 252 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer out: 0000000000000000 2019/01/24 10:49:53 [debug] 36109#0: *10 http body new buf t:1 f:0 00007FBA2D837800, pos 00007FBA2D837800, size: 9 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65535 w:65535:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output in l:0 f:0 00007FBA2D837800, pos 00007FBA2D837800, size: 9 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 00007FBA2E00A6D0, pos 00007FBA2E00A6D0, size: 9 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 0000000000000000, pos 00007FBA2D837800, size: 9 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65526 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:9 2019/01/24 10:49:53 [debug] 
36109#0: *10 chain writer buf fl:1 s:9 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 00007FBA2E00A7C0 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 18 of 18 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer out: 0000000000000000 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 send WINDOW_UPDATE frame sid:1, window:9 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003268 sid:0 bl:0 len:4 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 13 of 13 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003268 sid:0 bl:0 len:4 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer del: 5: 266893810 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 4 2019/01/24 10:49:53 [debug] 36109#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59996, changes: 0 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 2 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-2 fl:0025 ff:00000000 d:146988 ud:00007FBA2F0349A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65526 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65526 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 0000000000000000 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-1 fl:0025 ff:00000000 d:39 ud:00007FBA2F01A9A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process header 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:39, err:0 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:5 39 of 4096 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc response: 0000000401000000000000040800000000000000000900000806000000000002041010090e0707, len: 39 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 4, len: 0, f:1, i:0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc settings ack 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 8, len: 4, f:0, i:0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc window update: 9 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 6, len: 8, f:0, i:0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc ping 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc send ping ack 2019/01/24 10:49:53 [debug] 36109#0: *10 post event 00007FBA2F0349A0 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:0, err:0 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 1 2019/01/24 10:49:53 [debug] 36109#0: posted event 00007FBA2F0349A0 2019/01/24 10:49:53 [debug] 36109#0: *10 delete posted event 00007FBA2F0349A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 00007FBA2E00A7E0, pos 00007FBA2E00A7E0, size: 17 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:17 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 00007FBA2E00A6F0 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 17 of 17 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer out: 0000000000000000 2019/01/24 10:49:53 [debug] 36109#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59995, changes: 0 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-2 fl:0025 ff:00000000 d:146988 ud:00007FBA2F0349A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 w:65526:65535 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 0000000000000000 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 1 2019/01/24 10:49:53 [debug] 36109#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59994, changes: 0 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-1 fl:0025 ff:00000000 d:131 ud:00007FBA2F01A9A0 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process header 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:131, err:0 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:5 131 of 4096 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc response: 00001001040000000120880f108b1d75d0620d263d4c4d6564000061000000000001000000005c125ac6c0faa8f69b8fa404c7c0faa8f69b8fa404c8c0faa8f69b8fa404c9c0faa8f69b8fa404cac0faa8f69b8fa404cbc0faa8f69b8fa404ccc0faa8f69b8fa404cdc0faa8f69b8fa404cec0faa8f69b8fa404cfc0faa8f69b8fa404, len: 131 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 1, len: 16, f:4, i:1 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc parse header: start 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc table size update: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc indexed header: 8 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":status: 200" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header index: 31 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc value: len:11 h:1 last:11, rest:11 2019/01/24 10:49:53 [debug] 
36109#0: *10 grpc header: "content-type: application/grpc" 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header done 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header filter 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: ":status: 200" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: "server: nginx/1.15.8" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: "date: Thu, 24 Jan 2019 02:49:53 GMT" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: "content-type: application/grpc" 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 create HEADERS frame 00007FBA2E00A898: len:49 fin:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http cleanup add: 00007FBA2E00A980 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2E00A898 sid:1 bl:1 len:49 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 58 of 58 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 HEADERS frame 00007FBA2E00A898 was sent 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2E00A898 sid:1 bl:1 len:49 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc filter bytes:106 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 0, len: 97, f:0, i:1 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output buf 00007FBA2E802422 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process non buffered downstream 2019/01/24 10:49:53 [debug] 36109#0: *10 http output filter "/utoProto.idProduce.IdProduce/getUniqueIds?" 2019/01/24 10:49:53 [debug] 36109#0: *10 http copy filter: "/utoProto.idProduce.IdProduce/getUniqueIds?" 2019/01/24 10:49:53 [debug] 36109#0: *10 http postpone filter "/utoProto.idProduce.IdProduce/getUniqueIds?" 
00007FBA2E00A7C0 2019/01/24 10:49:53 [debug] 36109#0: *10 write new buf t:0 f:0 0000000000000000, pos 00007FBA2E802422, size: 97 file: 0, size: 0 2019/01/24 10:49:53 [debug] 36109#0: *10 http write filter: l:0 f:1 s:97 2019/01/24 10:49:53 [debug] 36109#0: *10 http write filter limit 0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 windows: conn:4194304 stream:4194304 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 create DATA frame 00007FBA2E00A898: len:97 flags:0 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2E00A898 sid:1 bl:0 len:97 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 106 of 106 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 DATA frame 00007FBA2E00A898 was sent 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2E00A898 sid:1 bl:0 len:97 2019/01/24 10:49:53 [debug] 36109#0: *10 http write filter 0000000000000000 2019/01/24 10:49:53 [debug] 36109#0: *10 http copy filter: 0 "/utoProto.idProduce.IdProduce/getUniqueIds?" 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 5: 300000:267133817 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 1 2019/01/24 10:49:53 [debug] 36109#0: worker cycle 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59993, changes: 0 2019/01/24 10:50:53 [debug] 36109#0: kevent events: 0 2019/01/24 10:50:53 [debug] 36109#0: timer delta: 59995 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer del: 3: 266893810 2019/01/24 10:50:53 [debug] 36109#0: *10 http run request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:50:53 [debug] 36109#0: *10 http upstream read request handler 2019/01/24 10:50:53 [debug] 36109#0: *10 finalize http upstream request: 408 2019/01/24 10:50:53 [debug] 36109#0: *10 finalize grpc request 2019/01/24 10:50:53 [debug] 36109#0: *10 free rr peer 1 0 2019/01/24 10:50:53 [debug] 36109#0: *10 close http upstream connection: 5 2019/01/24 10:50:53 [debug] 36109#0: *10 run cleanup: 00007FBA2D500C00 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2D500BB0, unused: 24 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2D500C30, unused: 64 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer del: 5: 267133817 2019/01/24 10:50:53 [debug] 36109#0: *10 reusable connection: 0 2019/01/24 10:50:53 [debug] 36109#0: *10 http finalize request: 408, "/utoProto.idProduce.IdProduce/getUniqueIds?" a:1, c:1 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate request count:1 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate cleanup count:1 blk:0 2019/01/24 10:50:53 [debug] 36109#0: *10 http posted request: "/utoProto.idProduce.IdProduce/getUniqueIds?" 
2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate handler count:1 2019/01/24 10:50:53 [debug] 36109#0: *10 http request count:1 blk:0 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 close stream 1, queued 0, processing 1, pushing 0 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 send RST_STREAM frame sid:1, status:1 2019/01/24 10:50:53 [debug] 36109#0: *10 http close request 2019/01/24 10:50:53 [debug] 36109#0: *10 http log handler 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E802400 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2D837800 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E02EA00, unused: 0 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E800400, unused: 0 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E00A600, unused: 2778 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E02E600, unused: 711 2019/01/24 10:50:53 [debug] 36109#0: *10 post event 00007FBA2F01A938 2019/01/24 10:50:53 [debug] 36109#0: posted event 00007FBA2F01A938 2019/01/24 10:50:53 [debug] 36109#0: *10 delete posted event 00007FBA2F01A938 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 handle connection handler 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003268 sid:0 bl:0 len:4 2019/01/24 10:50:53 [debug] 36109#0: *10 writev: 13 of 13 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003268 sid:0 bl:0 len:4 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2F003000, unused: 2760 2019/01/24 10:50:53 [debug] 36109#0: *10 reusable connection: 1 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer add: 3: 180000:267073812 2019/01/24 10:50:53 [debug] 36109#0: worker cycle 2019/01/24 10:50:53 [debug] 36109#0: kevent timer: 180000, changes: 0 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282769,282806#msg-282806 From francis at daoine.org Thu Jan 24 08:58:32 2019 From: francis at daoine.org (Francis Daly) Date: Thu, 24 Jan 2019 08:58:32 +0000 Subject: Use sub-url to identify the 
different server
In-Reply-To: <67435a781771b65c1bf14fb746972267.NginxMailingListEnglish@forum.nginx.org>
References: <20190114140736.wuhys7fkatzco7ed@daoine.org> <67435a781771b65c1bf14fb746972267.NginxMailingListEnglish@forum.nginx.org>
Message-ID: <20190124085832.cntioa53ks3cxycr@daoine.org>

On Sun, Jan 20, 2019 at 10:13:05PM -0500, nevereturn01 wrote:

Hi there,

> Now, we are running a small business. So we don't have any load-balance or
> fail-over deployment. So far, we have only 1 Nginx + 1 serverA + 1 serverB.

So - you have internal serverA, which has its own content at https://serverA/. You have external nginx, and you want to reverse-proxy serverA behind the url https://nginx/A/. You also have internal serverB, which should be reverse-proxied behind https://nginx/B/.

And you can test whether this works by pointing a web browser at https://nginx/A/something, and seeing if you get the same result as when you point a web browser at https://serverA/something.

For reasons of simplicity, it is often easier to test this by using the command-line tool "curl" instead of a "full" web browser. You will need to know what response you expect to get before you can know whether the response that you get is correct.

> As for the rewrite, I did some research and found that it seems that only
> URL rewrite can help in my scenario. Technically, do you think the URL
> rewrite rules are correct? Maybe there are some syntax errors?

As far as I know, the nginx way to do what you want is

    location /A/ {
        proxy_pass https://serverA/;
    }

I do not know if you have tried that and seen that it does or does not work for you.

You have tried a different configuration, something like

    location /A {
        rewrite ^/A/(.*) /$1 break;
        proxy_pass https://serverA;
    }

and have done an unspecified test and got an http 404 response that was not what you expected.

I think that the rewrite config is probably not correct; I would expect it to start with "location /A/".
The extra / characters and the like do matter here.

Only you can test your system. When you have one specific configuration in place, issue one request and look at the response. If things work the way you want, all is good. If something does not work, report the one specific request that does not work; report the response that the request does get; and indicate the response that you want that request to get instead.

Note also that if the http response body (the html content) from https://serverA/something includes a direct link to /other, then the end client will probably make a request for https://nginx/other, which will not do what you want. It is not nginx's job to edit the response body to change a link to /other into a link to /A/other. Instead, if serverA wants to be reverse-proxied at a sub-url, it is the job of serverA and its author to ensure that the response body contains a link to "other" or to "../other", as appropriate.

Good luck with it,

f
--
Francis Daly        francis at daoine.org

From nginx-forum at forum.nginx.org  Thu Jan 24 14:35:15 2019
From: nginx-forum at forum.nginx.org (blason)
Date: Thu, 24 Jan 2019 09:35:15 -0500
Subject: How to implement below config on nginx Reverse Proxy mode
Message-ID:

Hi Team,

My nginx is configured in reverse proxy mode, facing the internet. However, I have been tasked with adding one server with the config below, and I am facing difficulty putting it into production.

My scenario: I have an internal server which, when accessed, gets diverted to another port and a long URL. The internal IP address/URL is mapped to the external URL https://xyz.example.com

Once this URL is accessed, it gets turned into https://xyz.example.com:8443/PortalLgin/action.do?portal=a8550fd2-24bb-11e6-a111

I have tried configuring proxy_pass as https://xyz.example.com:8443/PortalLgin/action.do?portal=a8550fd2-24bb-11e6-a111 but had no luck. I also tried the listen directive with 8443, still no luck.

Can someone please suggest?
Thanks and Regards,
Blason R

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282809,282809#msg-282809

From pluknet at nginx.com  Thu Jan 24 15:41:40 2019
From: pluknet at nginx.com (Sergey Kandaurov)
Date: Thu, 24 Jan 2019 18:41:40 +0300
Subject: grpc keepalive does not effect, nginx will close connection by the minute
In-Reply-To: <34c1b938ba6ce5d2100c757b9fae77a1.NginxMailingListEnglish@forum.nginx.org>
References: <3340EF50-B92A-4859-B464-0A5937D46235@nginx.com> <34c1b938ba6ce5d2100c757b9fae77a1.NginxMailingListEnglish@forum.nginx.org>
Message-ID:

> On 24 Jan 2019, at 06:05, Roar wrote:
>
> I recompiled nginx with the debug module following your instructions and
> checked all keepalive_timeout parameters.
> I tested my application at 2019/01/24 10:49:53 and it responded correctly,
> and the connection between nginx and the grpc server was closed, as expected, at
> 2019/01/24 10:50:53.

Ok, looking at the debug log I see that the keepalive settings don't apply in your case, since the connection is closed during request processing.

> What's sad is that I don't understand the nginx debug log output. Could you
> help me analyze the log?

TL;DR: you need to tune client_body_timeout.

It looks like this is some sort of HTTP long polling. An incomplete client request body is seen in the debug log. While it is being received, an additional timer is installed, which is controlled by the client_body_timeout directive (60s by default). Since the next part of the client request body was not received in time, the connection is closed on timeout.

See comments inline for more details.

> Here is my nginx and application conf and debug output log:
>
> "my nginx.conf"
> #user nobody;
> worker_processes  2;
> error_log  logs/error.log  debug;
> pid  ./nginx.pid;
>
> events {
>     worker_connections  1024;
> }
>
> http {
>     include       mime.types;
>     default_type  application/octet-stream;
>
>     sendfile        on;
>     keepalive_timeout  120;
>
>     server {
>         listen       80;
>         server_name  localhost;
>
>         location / {
>             root   html;
>             index  index.html index.htm;
>         }
>
>         error_page   500 502 503 504  /50x.html;
>         location = /50x.html {
>             root   html;
>         }
>     }
>
>     include /export/apps/middlewares/confd/conf.d/application.conf;
> }
>
> "my application.conf"
>
> upstream ID_PUMPER {
>     server 127.0.0.1:58548;
> }
>
> server {
>     listen 8080 http2;
>     grpc_read_timeout 300s;
>     grpc_send_timeout 300s;
>
>     location /utoProto.idProduce.IdProduce {
>         grpc_pass grpc://ID_PUMPER;
>     }
> }
>
> "Debug output log"
>
> 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1
> 2019/01/24 10:49:53 [debug] 36108#0: kevent events: 1
> 2019/01/24 10:49:53 [debug] 36109#0: kevent: 10: ft:-1 fl:0005 ff:00000000 d:1 ud:00007FBA2F01A868
> 2019/01/24 10:49:53 [debug] 36108#0: kevent: 10: ft:-1 fl:0005 ff:00000000 d:1 ud:00007FBA2E02FC68
> 2019/01/24 10:49:53 [debug] 36109#0: accept on 0.0.0.0:8080, ready: 1
> 2019/01/24 10:49:53 [debug] 36108#0: accept on 0.0.0.0:8080, ready: 1
> 2019/01/24 10:49:53 [debug] 36108#0: accept() not ready (35: Resource temporarily unavailable)
> 2019/01/24 10:49:53 [debug] 36108#0: timer delta: 16721
> 2019/01/24 10:49:53 [debug] 36108#0: worker cycle
> 2019/01/24 10:49:53 [debug] 36109#0: posix_memalign: 00007FBA2D500D90:512 @16
> 2019/01/24 10:49:53 [debug] 36108#0: kevent timer: -1, changes: 0
> 2019/01/24 10:49:53 [debug] 36109#0: *10 accept: 127.0.0.1:50905 fd:3
> 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 3: 60000:266893796

client_header_timeout is installed, to be removed after the entire header is read

> 2019/01/24 10:49:53 [debug] 36109#0: *10 reusable connection: 1
> 2019/01/24 10:49:53 [debug] 36109#0: *10 kevent set event: 3: ft:-1
fl:0025 > 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 16721 > 2019/01/24 10:49:53 [debug] 36109#0: worker cycle > 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 60000, changes: 1 > 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 3: ft:-1 fl:0025 ff:00000000 > d:99 ud:00007FBA2F01A938 > 2019/01/24 10:49:53 [debug] 36109#0: *10 init http2 connection > 2019/01/24 10:49:53 [debug] 36109#0: malloc: 000000010737C000:262144 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2D700180:512 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2F003000:4096 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 add cleanup: 00007FBA2D500E98 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2D700580:512 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 send SETTINGS frame > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 send WINDOW_UPDATE frame > sid:0, window:2147418112 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 read handler > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:99, err:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:3 368 of 262112 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 preface verified > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:4 f:0 l:36 sid:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 SETTINGS frame > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 2:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 3:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 4:4194304 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 5:4194304 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 6:8192 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 setting 65027:1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete > pos:000000010737C045 end:000000010737C170 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:8 f:0 l:4 sid:0 > 2019/01/24 10:49:53 [debug] 
36109#0: *10 http2 WINDOW_UPDATE frame sid:0 > window:4128769 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete > pos:000000010737C052 end:000000010737C170 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:6 f:0 l:8 sid:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 PING frame > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete > pos:000000010737C063 end:000000010737C170 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:1 f:4 l:242 sid:1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 HEADERS frame sid:1 depends > on 0 excl:0 weight:16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2E02E600:1024 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2E02EA00:4096 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2E800400:4096 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 get indexed header: 6 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":scheme: http" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 get indexed header: 3 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":method: POST" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:10 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:14 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: ":authority: > 127.0.0.1:8080" > 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2D6005B0:512 > 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2E801400:4096 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 56 free:4096 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":authority: > 127.0.0.1:8080" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:5 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:42 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: ":path: > /utoProto.idProduce.IdProduce/getUniqueIds" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 79 
free:4040 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http uri: > "/utoProto.idProduce.IdProduce/getUniqueIds" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http args: "" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http exten: "" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: ":path: > /utoProto.idProduce.IdProduce/getUniqueIds" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:2 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:8 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "te: trailers" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 42 free:3961 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "te: trailers" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:12 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "content-type: > application/grpc" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 60 free:3919 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "content-type: > application/grpc" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:10 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:31 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "user-agent: > grpc-c/6.0.0 (osx; chttp2; gao)" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 73 free:3859 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "user-agent: > grpc-c/6.0.0 (osx; chttp2; gao)" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:20 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:21 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: > "grpc-accept-encoding: identity,deflate,gzip" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 73 free:3786 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: > "grpc-accept-encoding: identity,deflate,gzip" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw 
string, len:15 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 raw string, len:13 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table add: "accept-encoding: > identity,gzip" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 table account: 60 free:3713 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header: "accept-encoding: > identity,gzip" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 request line: "POST > /utoProto.idProduce.IdProduce/getUniqueIds HTTP/2.0" > 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 rewrite phase: 1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 test location: > "/utoProto.idProduce.IdProduce" > 2019/01/24 10:49:53 [debug] 36109#0: *10 using configuration > "/utoProto.idProduce.IdProduce" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http cl:-1 max:1048576 > 2019/01/24 10:49:53 [debug] 36109#0: *10 rewrite phase: 3 > 2019/01/24 10:49:53 [debug] 36109#0: *10 post rewrite phase: 4 > 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 5 > 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 6 > 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 7 > 2019/01/24 10:49:53 [debug] 36109#0: *10 access phase: 8 > 2019/01/24 10:49:53 [debug] 36109#0: *10 access phase: 9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 post access phase: 10 > 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 11 > 2019/01/24 10:49:53 [debug] 36109#0: *10 generic phase: 12 > 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2D837800:65536 > 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 3: > 60000:266893810 client_body_timeout is installed since we're awaiting client body > 2019/01/24 10:49:53 [debug] 36109#0: *10 http init upstream, client timer: > 1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":method: POST" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":scheme: http" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":path: > 
/utoProto.idProduce.IdProduce/getUniqueIds" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":authority: > ID_PUMPER" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http script copy: "" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http script copy: "TE" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http script var: "trailers" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "te: trailers" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "content-type: > application/grpc" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "user-agent: > grpc-c/6.0.0 (osx; chttp2; gao)" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "grpc-accept-encoding: > identity,deflate,gzip" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "accept-encoding: > identity,gzip" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: > 505249202a20485454502f322e300d0a0d0a534d0d0a0d0a00001204000000000000010000000000020000000000047fffffff0000040800000000007fff00000000aa0104000000018386449f62d49f5d8749d7349aec3c9690abe4935d8792d215898a9e151bb5a5c9223f4188c97e2d7c346bc1b700027465864d833505b11f008921ea496a4ac9f5597f8b1d75d0620d263d4c4d65640087b505b161cc5a93989acac8b11871702e053fa3a3cfda849d29ac5f6a4c33ff7f008e9acac8b0c842d6958b510f21aa9b903485a9264fafa90b2d03497ea6f66aff008b19085ad2b16a21e435537f8a3485a9264fafa9bd9abf, > len: 243 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http cleanup add: 00007FBA2E801220 > 2019/01/24 10:49:53 [debug] 36109#0: *10 get rr peer, try: 1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 stream socket 5 > 2019/01/24 10:49:53 [debug] 36109#0: *10 connect to 127.0.0.1:58548, fd:5 > #11 > 2019/01/24 10:49:53 [debug] 36109#0: *10 kevent set event: 5: ft:-1 fl:0025 > 2019/01/24 10:49:53 [debug] 36109#0: *10 kevent set event: 5: ft:-2 fl:0025 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream connect: -2 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2D500BB0:128 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 
5: > 60000:266893810 grpc_connect_timeout is installed, to be removed after connection establishment > 2019/01/24 10:49:53 [debug] 36109#0: *10 http finalize request: -4, > "/utoProto.idProduce.IdProduce/getUniqueIds?" a:1, c:2 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http request count:2 blk:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete > pos:000000010737C15E end:000000010737C170 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame type:0 f:0 l:9 sid:1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 DATA frame > 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer: 3, old: 266893810, > new: 266893810 client_body_timeout is renewed after receiving client body part the timer is not removed since this is not a final HTTP/2 DATA frame, f:0 shows HTTP/2 frame flags, and there's no END_STREAM flag (0x1) set. > 2019/01/24 10:49:53 [debug] 36109#0: *10 post event 00007FBA2F003400 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame complete > pos:000000010737C170 end:000000010737C170 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003268 > sid:0 bl:0 len:8 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F0031B8 > sid:0 bl:0 len:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003108 > sid:0 bl:0 len:4 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003050 > sid:0 bl:0 len:18 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 66 of 66 > 2019/01/24 10:49:53 [debug] 36109#0: *10 tcp_nodelay > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003050 > sid:0 bl:0 len:18 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003108 > sid:0 bl:0 len:4 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F0031B8 > sid:0 bl:0 len:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003268 > sid:0 bl:0 len:8 > 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer del: 3: 266893796 
client_header_timeout is removed > 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 14 > 2019/01/24 10:49:53 [debug] 36109#0: posted event 00007FBA2F003400 > 2019/01/24 10:49:53 [debug] 36109#0: *10 delete posted event > 00007FBA2F003400 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http run request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream check client, write > event:0, "/utoProto.idProduce.IdProduce/getUniqueIds" > 2019/01/24 10:49:53 [debug] 36109#0: worker cycle > 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 60000, changes: 2 > 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 2 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-1 fl:0025 ff:00000000 > d:9 ud:00007FBA2F01A9A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process header > 2019/01/24 10:49:53 [debug] 36109#0: *10 malloc: 00007FBA2E802400:4096 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2E00A600:4096 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:9, err:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:5 9 of 4096 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc response: 000000040000000000, > len: 9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 posix_memalign: > 00007FBA2D500C30:128 @16 > 2019/01/24 10:49:53 [debug] 36109#0: *10 add cleanup: 00007FBA2D500C00 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 4, len: 0, f:0, i:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc send settings ack > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:0, err:0 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-2 fl:0025 ff:00000000 > d:146988 ud:00007FBA2F0349A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
> 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body > 2019/01/24 10:49:53 [debug] 36109#0: *10 tcp_nodelay > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output header > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65535 > w:65535:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 > 00007FBA2E801008, pos 00007FBA2E801008, size: 243 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 > 00007FBA2E00A6D0, pos 00007FBA2E00A6D0, size: 9 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65535 > w:65535:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:243 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 00007FBA2E00A700 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 252 of 252 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer out: 0000000000000000 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http body new buf t:1 f:0 > 00007FBA2D837800, pos 00007FBA2D837800, size: 9 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65535 > w:65535:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output in l:0 f:0 > 00007FBA2D837800, pos 00007FBA2D837800, size: 9 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 > 00007FBA2E00A6D0, pos 00007FBA2E00A6D0, size: 9 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 > 0000000000000000, pos 00007FBA2D837800, size: 9 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65526 > 2019/01/24 10:49:53 [debug] 
36109#0: *10 chain writer buf fl:1 s:9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 00007FBA2E00A7C0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 18 of 18 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer out: 0000000000000000 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 send WINDOW_UPDATE frame > sid:1, window:9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003268 > sid:0 bl:0 len:4 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 13 of 13 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003268 > sid:0 bl:0 len:4 > 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer del: 5: 266893810 grpc_connect_timeout is removed > 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 4 > 2019/01/24 10:49:53 [debug] 36109#0: worker cycle > 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59996, changes: 0 > 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 2 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-2 fl:0025 ff:00000000 > d:146988 ud:00007FBA2F0349A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65526 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65526 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 0000000000000000 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-1 fl:0025 ff:00000000 > d:39 ud:00007FBA2F01A9A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
> 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process header > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:39, err:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:5 39 of 4096 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc response: > 0000000401000000000000040800000000000000000900000806000000000002041010090e0707, > len: 39 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 4, len: 0, f:1, i:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc settings ack > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 8, len: 4, f:0, i:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc window update: 9 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 6, len: 8, f:0, i:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc ping > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc send ping ack > 2019/01/24 10:49:53 [debug] 36109#0: *10 post event 00007FBA2F0349A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:0, err:0 > 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 1 > 2019/01/24 10:49:53 [debug] 36109#0: posted event 00007FBA2F0349A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 delete posted event > 00007FBA2F0349A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
> 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output out l:0 f:0 > 00007FBA2E00A7E0, pos 00007FBA2E00A7E0, size: 17 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer buf fl:1 s:17 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 00007FBA2E00A6F0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 17 of 17 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer out: 0000000000000000 > 2019/01/24 10:49:53 [debug] 36109#0: worker cycle > 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59995, changes: 0 > 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-2 fl:0025 ff:00000000 > d:146988 ud:00007FBA2F0349A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
> 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request handler > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream send request body > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output filter > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output limit: 65526 > w:65526:65535 > 2019/01/24 10:49:53 [debug] 36109#0: *10 chain writer in: 0000000000000000 > 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 1 > 2019/01/24 10:49:53 [debug] 36109#0: worker cycle > 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59994, changes: 0 > 2019/01/24 10:49:53 [debug] 36109#0: kevent events: 1 > 2019/01/24 10:49:53 [debug] 36109#0: kevent: 5: ft:-1 fl:0025 ff:00000000 > d:131 ud:00007FBA2F01A9A0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process header > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: eof:0, avail:131, err:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 recv: fd:5 131 of 4096 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc response: > 00001001040000000120880f108b1d75d0620d263d4c4d6564000061000000000001000000005c125ac6c0faa8f69b8fa404c7c0faa8f69b8fa404c8c0faa8f69b8fa404c9c0faa8f69b8fa404cac0faa8f69b8fa404cbc0faa8f69b8fa404ccc0faa8f69b8fa404cdc0faa8f69b8fa404cec0faa8f69b8fa404cfc0faa8f69b8fa404, > len: 131 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 1, len: 16, f:4, i:1 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc parse header: start > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc table size update: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc indexed header: 8 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: ":status: 200" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header index: 31 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc value: 
len:11 h:1 last:11, > rest:11 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header: "content-type: > application/grpc" > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc header done > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 header filter > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: ":status: > 200" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: "server: > nginx/1.15.8" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: "date: Thu, 24 > Jan 2019 02:49:53 GMT" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 output header: "content-type: > application/grpc" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 create HEADERS frame > 00007FBA2E00A898: len:49 fin:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http cleanup add: 00007FBA2E00A980 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2E00A898 > sid:1 bl:1 len:49 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 58 of 58 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 HEADERS frame > 00007FBA2E00A898 was sent > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2E00A898 > sid:1 bl:1 len:49 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc filter bytes:106 > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc frame: 0, len: 97, f:0, i:1 gRPC response part is received, f:0 flag means it is not a final HTTP/2 DATA frame > 2019/01/24 10:49:53 [debug] 36109#0: *10 grpc output buf 00007FBA2E802422 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http upstream process non buffered > downstream > 2019/01/24 10:49:53 [debug] 36109#0: *10 http output filter > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http copy filter: > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 http postpone filter > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
00007FBA2E00A7C0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 write new buf t:0 f:0 > 0000000000000000, pos 00007FBA2E802422, size: 97 file: 0, size: 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http write filter: l:0 f:1 s:97 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http write filter limit 0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 windows: conn:4194304 > stream:4194304 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 create DATA frame > 00007FBA2E00A898: len:97 flags:0 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2E00A898 > sid:1 bl:0 len:97 > 2019/01/24 10:49:53 [debug] 36109#0: *10 writev: 106 of 106 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2:1 DATA frame 00007FBA2E00A898 > was sent > 2019/01/24 10:49:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2E00A898 > sid:1 bl:0 len:97 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http write filter 0000000000000000 > 2019/01/24 10:49:53 [debug] 36109#0: *10 http copy filter: 0 > "/utoProto.idProduce.IdProduce/getUniqueIds?" > 2019/01/24 10:49:53 [debug] 36109#0: *10 event timer add: 5: > 300000:267133817 grpc_read_timeout is installed after response started to be received > 2019/01/24 10:49:53 [debug] 36109#0: timer delta: 1 > 2019/01/24 10:49:53 [debug] 36109#0: worker cycle > 2019/01/24 10:49:53 [debug] 36109#0: kevent timer: 59993, changes: 0 At this time two timers are active: - client_body_timeout, since client body is incomplete - grpc_read_timeout, gRPC response is incomplete. > 2019/01/24 10:50:53 [debug] 36109#0: kevent events: 0 > 2019/01/24 10:50:53 [debug] 36109#0: timer delta: 59995 The lowest of timers is fired, and the connection is closed since no more client request body parts were received for the next 60s. > 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer del: 3: 266893810 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http run request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http upstream read request handler > 2019/01/24 10:50:53 [debug] 36109#0: *10 finalize http upstream request: > 408 > 2019/01/24 10:50:53 [debug] 36109#0: *10 finalize grpc request > 2019/01/24 10:50:53 [debug] 36109#0: *10 free rr peer 1 0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 close http upstream connection: 5 > 2019/01/24 10:50:53 [debug] 36109#0: *10 run cleanup: 00007FBA2D500C00 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2D500BB0, unused: 24 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2D500C30, unused: 64 > 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer del: 5: 267133817 > 2019/01/24 10:50:53 [debug] 36109#0: *10 reusable connection: 0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http finalize request: 408, > "/utoProto.idProduce.IdProduce/getUniqueIds?" a:1, c:1 the request is finalized with "408 Request Time-out" > 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate request count:1 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate cleanup count:1 > blk:0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http posted request: > "/utoProto.idProduce.IdProduce/getUniqueIds?" 
> 2019/01/24 10:50:53 [debug] 36109#0: *10 http terminate handler count:1 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http request count:1 blk:0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 close stream 1, queued 0, > processing 1, pushing 0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 send RST_STREAM frame sid:1, > status:1 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http close request > 2019/01/24 10:50:53 [debug] 36109#0: *10 http log handler > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E802400 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2D837800 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E02EA00, unused: 0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E800400, unused: 0 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E00A600, unused: > 2778 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2E02E600, unused: > 711 > 2019/01/24 10:50:53 [debug] 36109#0: *10 post event 00007FBA2F01A938 > 2019/01/24 10:50:53 [debug] 36109#0: posted event 00007FBA2F01A938 > 2019/01/24 10:50:53 [debug] 36109#0: *10 delete posted event > 00007FBA2F01A938 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 handle connection handler > 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 frame out: 00007FBA2F003268 > sid:0 bl:0 len:4 > 2019/01/24 10:50:53 [debug] 36109#0: *10 writev: 13 of 13 > 2019/01/24 10:50:53 [debug] 36109#0: *10 http2 frame sent: 00007FBA2F003268 > sid:0 bl:0 len:4 > 2019/01/24 10:50:53 [debug] 36109#0: *10 free: 00007FBA2F003000, unused: > 2760 > 2019/01/24 10:50:53 [debug] 36109#0: *10 reusable connection: 1 > 2019/01/24 10:50:53 [debug] 36109#0: *10 event timer add: 3: > 180000:267073812 > 2019/01/24 10:50:53 [debug] 36109#0: worker cycle > 2019/01/24 10:50:53 [debug] 36109#0: kevent timer: 180000, changes: 0 -- Sergey Kandaurov From postmaster at palvelin.fi Thu Jan 24 17:15:04 2019 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Thu, 24 Jan 2019 19:15:04 +0200 Subject: User 
directive error
Message-ID: <5A41E86E-D9AC-4100-9A19-3E8BF7DD1C18@palvelin.fi>

Why does this error occur (Ubuntu 18.04/nginx 1.14.0)?

nginx.conf:21
user www-data;

error.log
2019/01/24 19:07:07 [warn] 3526#3526: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:21

# ps -axu |grep nginx
root 3439 0.0 0.2 360156 9352 ? Ss 19:07 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
www-data 3441 0.1 0.5 364044 20716 ? S 19:07 0:00 nginx: worker process
www-data 3442 0.2 0.5 364560 21248 ? S 19:07 0:00 nginx: worker process
www-data 3443 0.0 0.3 362456 14852 ? S 19:07 0:00 nginx: cache manager process

-- Palvelin.fi Hostmaster postmaster at palvelin.fi

From nginx at netdirect.fr Thu Jan 24 23:02:32 2019
From: nginx at netdirect.fr (Artur)
Date: Fri, 25 Jan 2019 00:02:32 +0100
Subject: Proxy with SSL offload and Nginx redirections
Message-ID: 

Hello,

In my setup I have a proxy with TLS offload and Nginx as a backend server (without TLS). If a client requests a URL such as https://host/dir, the proxy relays the request to Nginx as http://host/dir. Nothing strange here, this is the usual behaviour.

Nginx then sees that the trailing slash is missing and returns a 301 redirect with the following location: http://host/dir/

This behaviour is certainly related to the following directive:

try_files $uri $uri/ =404;

I'd prefer that Nginx return a 301 redirect with TLS enabled: https://host/dir/

I've currently configured the proxy to set the header X-Forwarded-Proto: https, but it does not change anything. Could you please point me to the right directive so Nginx follows what is set in the X-Forwarded-Proto header? Maybe I should use 'absolute_redirect off;'?

-- Best regards, Artur.
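For reference, the directive Artur guesses at does address this: with absolute_redirect off (available since nginx 1.11.8) nginx emits a relative Location header, so the browser resolves the redirect against the https:// URL it originally requested and the backend scheme never leaks out. A minimal sketch (the server block and names are illustrative, not the poster's real config):

```nginx
server {
    listen 80;

    # Emit "Location: /dir/" instead of "Location: http://host/dir/";
    # the client resolves it against the scheme it used to reach the proxy.
    absolute_redirect off;

    location / {
        try_files $uri $uri/ =404;
    }
}
```

Note that stock nginx does not act on X-Forwarded-Proto by itself; the header is only exposed as $http_x_forwarded_proto for explicit use in the configuration.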
From zchao1995 at gmail.com Fri Jan 25 01:50:16 2019
From: zchao1995 at gmail.com (Zhang Chao)
Date: Thu, 24 Jan 2019 20:50:16 -0500
Subject: User directive error
In-Reply-To: <5A41E86E-D9AC-4100-9A19-3E8BF7DD1C18@palvelin.fi>
References: <5A41E86E-D9AC-4100-9A19-3E8BF7DD1C18@palvelin.fi>
Message-ID: 

Hello!

> Why does this error occur (Ubuntu 18.04/nginx 1.14.0)?
>
> nginx.conf:21
> user www-data;
>
> error.log
> 2019/01/24 19:07:07 [warn] 3526#3526: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:21
>
> # ps -axu |grep nginx
> root 3439 0.0 0.2 360156 9352 ? Ss 19:07 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
> www-data 3441 0.1 0.5 364044 20716 ? S 19:07 0:00 nginx: worker process
> www-data 3442 0.2 0.5 364560 21248 ? S 19:07 0:00 nginx: worker process
> www-data 3443 0.0 0.3 362456 14852 ? S 19:07 0:00 nginx: cache manager process

The error message is self-explanatory: the master process must be run with super-user privileges; only then can nginx switch the worker processes to the user given by the "user" directive.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From nginx-forum at forum.nginx.org Fri Jan 25 05:43:53 2019
From: nginx-forum at forum.nginx.org (Roar)
Date: Fri, 25 Jan 2019 00:43:53 -0500
Subject: grpc keepalive does not effect, nginx will close connection by the minute
In-Reply-To: 
References: 
Message-ID: <907239b5005c42bcd527f237b3f1a9b7.NginxMailingListEnglish@forum.nginx.org>

It works! Thanks a ton!
@Sergey Kandaurov Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282769,282826#msg-282826 From postmaster at palvelin.fi Fri Jan 25 10:42:59 2019 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Fri, 25 Jan 2019 12:42:59 +0200 Subject: User directive error In-Reply-To: References: <5A41E86E-D9AC-4100-9A19-3E8BF7DD1C18@palvelin.fi> Message-ID: <7D678ECF-6F4F-4AA1-B3FD-7F1B7341C1BB@palvelin.fi> > On 25 Jan 2019, at 03:50, Zhang Chao wrote: > > Hello! > > > Why does this error occur (Ubuntu 18.04/nginx 1.14.0)? > > > > nginx.conf:21 > > user www-data; > > > > error.log > > 2019/01/24 19:07:07 [warn] 3526#3526: the "user" directive makes sense only if the master process runs with super- > > user privileges, ignored in /etc/nginx/nginx.conf:21 > > > > # ps -axu |grep nginx > > root 3439 0.0 0.2 360156 9352 ? Ss 19:07 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process > on; > > www-data 3441 0.1 0.5 364044 20716 ? S 19:07 0:00 nginx: worker process > > www-data 3442 0.2 0.5 364560 21248 ? S 19:07 0:00 nginx: worker process > > www-data 3443 0.0 0.3 362456 14852 ? S 19:07 0:00 nginx: cache manager process > > The error message is self explaining, your master process should be run with > the superuser privileges then you can specify workers? user. Can you tell from my ps command output above what privileges my master process is running with now? From nginx-forum at forum.nginx.org Fri Jan 25 15:21:55 2019 From: nginx-forum at forum.nginx.org (gchiesa) Date: Fri, 25 Jan 2019 10:21:55 -0500 Subject: proxy_ssl_session_reuse not working with dynamic proxy_pass Message-ID: <03b339f5fd55824f5c7419e2773a7a1b.NginxMailingListEnglish@forum.nginx.org> Hi, I'm trying to enable the proxy_ssl_session_reuse with dynamic proxy_pass as per the following config. 
---
server {
    listen 80;
    server_name localhost;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 4h;

    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1.2;
    proxy_ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    proxy_ssl_server_name on;
    proxy_socket_keepalive on;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    set $upstream_server https://myupstream.com;

    location /test/ {
        # forward the request id received in the headers to the upstream
        proxy_set_header X-Request-Id $http_x_request_id;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host "myupstream.com";
        resolver 8.8.8.8;
        rewrite ^/test/(.*) /$1 break;
        proxy_pass $upstream_server;
        # completely disable proxy cache
        expires off;
        sendfile off;
    }
}
---

but the proxy module does not honor proxy_ssl_session_reuse. With a static (non-dynamic) proxy_pass, by contrast, it works fine. Example:

---
server {
    listen 80;
    server_name localhost;

    ssl_session_cache shared:SSL:20m;
    ssl_session_timeout 4h;

    proxy_ssl_session_reuse on;
    proxy_ssl_protocols TLSv1.2;
    proxy_ssl_ciphers EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH;
    proxy_ssl_server_name on;
    proxy_socket_keepalive on;

    location / {
        root /usr/share/nginx/html;
        index index.html index.htm;
    }

    location /test/ {
        # forward the request id received in the headers to the upstream
        proxy_set_header X-Request-Id $http_x_request_id;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_set_header Host "myupstream.com";
        resolver 8.8.8.8;
        rewrite ^/test/(.*) /$1 break;
        proxy_pass https://myupstream.com;
        # completely disable proxy cache
        expires off;
        sendfile off;
    }
}
---

Does anybody have any idea how (if possible) to make proxy_ssl_session_reuse work with dynamic resolution?
Thanks
Peppe

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282830,282830#msg-282830

From maxozerov at i-free.com Fri Jan 25 17:03:18 2019
From: maxozerov at i-free.com (Maxim Ozerov)
Date: Fri, 25 Jan 2019 17:03:18 +0000
Subject: User directive error
In-Reply-To: <7D678ECF-6F4F-4AA1-B3FD-7F1B7341C1BB@palvelin.fi>
References: <5A41E86E-D9AC-4100-9A19-3E8BF7DD1C18@palvelin.fi> <7D678ECF-6F4F-4AA1-B3FD-7F1B7341C1BB@palvelin.fi>
Message-ID: <23fbe8425d47445c948fed97b02f82fb@srv-exch-mb02.i-free.local>

Hm... it doesn't sound believable, but for example, you can restrict the root user with an SELinux context ;)

>
> Hello!
>
> > Why does this error occur (Ubuntu 18.04/nginx 1.14.0)?
> >
> > nginx.conf:21
> > user www-data;
> >
> > error.log
> > 2019/01/24 19:07:07 [warn] 3526#3526: the "user" directive makes
> > sense only if the master process runs with super-user privileges,
> > ignored in /etc/nginx/nginx.conf:21
> >
> > # ps -axu |grep nginx
> > root 3439 0.0 0.2 360156 9352 ? Ss 19:07 0:00 nginx: master
> > process /usr/sbin/nginx -g daemon on; master_process on;
> > www-data 3441 0.1 0.5 364044 20716 ? S 19:07 0:00 nginx: worker process
> > www-data 3442 0.2 0.5 364560 21248 ? S 19:07 0:00 nginx: worker process
> > www-data 3443 0.0 0.3 362456 14852 ? S 19:07 0:00 nginx: cache manager process
>
> The error message is self-explanatory: your master process should be
> run with the superuser privileges, then you can specify the workers' user.
>
> Can you tell from my ps command output above what privileges my master process is running with now?
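To answer the ps question directly from a shell: `ps` can report the effective user of any pid. A minimal sketch (the pid-file path /run/nginx.pid is a common default on Ubuntu, but may differ on other setups):

```shell
# Print the user a process runs as. Demonstrated on the current shell
# ($$); for nginx, substitute the master pid, e.g.:
#   ps -o user= -p "$(cat /run/nginx.pid)"
pid=$$
ps -o user= -p "$pid"
```

If this prints `root` for the master pid, the "user" directive warning has some other cause than missing privileges.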
From nginx-forum at forum.nginx.org Fri Jan 25 20:17:34 2019
From: nginx-forum at forum.nginx.org (Joachim)
Date: Fri, 25 Jan 2019 15:17:34 -0500
Subject: Multi layer caching
Message-ID: <8e430fc0796e9138a279cc86607f4282.NginxMailingListEnglish@forum.nginx.org>

Good day,

we are trying to optimise our caching strategy on cache servers that are in front of a very large dataset:

200 x 1 MB/s x 500 days = 1 MB/s x 8'640'000'000 s = 8.6 PB

We serve TV stream segments (HLS) for over 200 channels where users can fetch time-shifted content, i.e. segments up to 7 days in the past (replay) or even 18 months in the past (recordings). Segments will be accessed much more rarely the older they get, but the actively accessed data set is still massive. Segments never change and expire 18 months after birth.

Currently we are limited by the write rates of the SSDs in the cache servers. We can use proxy_cache_min_uses to reduce the write rate, but that also lowers our hit rates.

What I would like to set up is a multi-stage cache based on different cache stores:

1) RAM disk (<1 TB, very fast)
2) SSD (<10 TB, fast writes)
3) HD (~100 TB)

Strategy:

a) Store every reply / requested segment in the RAM disk based cache.
b) If a segment is requested >2 times in 10min/1h, store it in the SSD based cache; this may require an upstream re-fetch if it was already purged from the RAM disk cache.
c) If a segment is requested >10 times in 1h, store it to the HD; this may require an upstream re-fetch if it was already purged from the SSD cache.

I obviously don't want to cascade 3 instances of NGINX to pull this off..
Can anyone point me in the right direction?
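One commonly suggested shape for a setup like the one described above, without cascading separate nginx instances, is a single instance whose cache tiers proxy to each other over loopback. This is only a sketch: the zone names, paths, sizes, ports, thresholds, and the origin hostname are all illustrative, and proxy_cache_min_uses only approximates the promotion logic described in steps (b)/(c).

```nginx
# Sketch: three cache tiers chained inside one nginx instance.
proxy_cache_path /mnt/ram levels=1:2 keys_zone=ram:256m
                 max_size=800g inactive=1h;
proxy_cache_path /mnt/ssd levels=1:2 keys_zone=ssd:1g
                 max_size=9t   inactive=7d;
proxy_cache_path /mnt/hd  levels=1:2 keys_zone=hd:4g
                 max_size=90t  inactive=540d;

server {                            # tier 1: RAM disk, caches everything
    listen 80;
    location / {
        proxy_cache ram;
        proxy_pass http://127.0.0.1:8081;
    }
}

server {                            # tier 2: SSD, caches on 3rd request
    listen 127.0.0.1:8081;
    location / {
        proxy_cache ssd;
        proxy_cache_min_uses 3;
        proxy_pass http://127.0.0.1:8082;
    }
}

server {                            # tier 3: HD, caches on 10th request
    listen 127.0.0.1:8082;
    location / {
        proxy_cache hd;
        proxy_cache_min_uses 10;
        proxy_pass https://origin.example.com;   # hypothetical origin
    }
}
```

Note the semantics differ from the stated strategy: a lower tier only sees requests that miss the tier above it (for example after eviction), so its min_uses counter counts misses, not total requests.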
(Config or Code)

Thanks a lot in advance,
Joachim

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282832,282832#msg-282832

From nginx-forum at forum.nginx.org Fri Jan 25 20:28:42 2019
From: nginx-forum at forum.nginx.org (nenaB)
Date: Fri, 25 Jan 2019 15:28:42 -0500
Subject: bypassing and replacing cache if origin changed - or purging cache when origin changed?
Message-ID:

I have nginx configured to cache certain requests mapped by tenant/host/uri/args - these files wouldn't change often, but if they do, I want nginx to fetch them anew. I don't have NGINX Plus - and I'm running the Nginx docker image.

Some things that I have investigated, but am not sure how to configure to make work, are the proxy_cache_bypass and expires settings. But I'm not sure whether something like:

expires $upstream_http_expires 1d;

would work? If it checks upstream every time, it would defeat the purpose of caching.

What exactly do I need to configure if I want to test whether, for a given cache key, there is new content on the upstream server? (I can implement an API that could test whether new content is available.)

Another option is to implement a fastcgi script that would test the upstream API and purge the cache based on that, but I'm hoping that I can leverage the power of Nginx rather than having an additional script to maintain.

Thanks in advance.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282833,282833#msg-282833

From postmaster at palvelin.fi Sat Jan 26 15:35:26 2019
From: postmaster at palvelin.fi (Palvelin Postmaster)
Date: Sat, 26 Jan 2019 17:35:26 +0200
Subject: User directive error
In-Reply-To: <23fbe8425d47445c948fed97b02f82fb@srv-exch-mb02.i-free.local>
References: <5A41E86E-D9AC-4100-9A19-3E8BF7DD1C18@palvelin.fi> <7D678ECF-6F4F-4AA1-B3FD-7F1B7341C1BB@palvelin.fi> <23fbe8425d47445c948fed97b02f82fb@srv-exch-mb02.i-free.local>
Message-ID: <6952F6C3-6BAA-4CCE-BB45-10CE238D2E6B@palvelin.fi>

I'm running a standard Ubuntu 18.04, not SELinux.
I'm under the assumption that my nginx master process IS being run by root.

> On 25 Jan 2019, at 19:03, Maxim Ozerov wrote:
>
> Hm... it doesn't sound believable, but for example, you can restrict the root user with SELinux context ;)
>
>>
>> Hello!
>>
>>> Why does this error occur (Ubuntu 18.04/nginx 1.14.0)?
>>>
>>> nginx.conf:21
>>> user www-data;
>>>
>>> error.log
>>> 2019/01/24 19:07:07 [warn] 3526#3526: the "user" directive makes
>>> sense only if the master process runs with super-user privileges,
>>> ignored in /etc/nginx/nginx.conf:21
>>>
>>> # ps -axu |grep nginx
>>> root 3439 0.0 0.2 360156 9352 ? Ss 19:07 0:00 nginx: master
>>> process /usr/sbin/nginx -g daemon on; master_process on;
>>> www-data 3441 0.1 0.5 364044 20716 ? S 19:07 0:00 nginx: worker process
>>> www-data 3442 0.2 0.5 364560 21248 ? S 19:07 0:00 nginx: worker process
>>> www-data 3443 0.0 0.3 362456 14852 ? S 19:07 0:00 nginx: cache manager process
>>
>> The error message is self-explanatory: your master process should be
>> run with the superuser privileges, then you can specify the workers' user.
>>
>> Can you tell from my ps command output above what privileges my master process is running with now?

--
Palvelin.fi Hostmaster
postmaster at palvelin.fi

From collimarco91 at gmail.com Sat Jan 26 21:05:03 2019
From: collimarco91 at gmail.com (Marco Colli)
Date: Sat, 26 Jan 2019 22:05:03 +0100
Subject: Rate limiting for try_files
Message-ID:

Hello!

I cannot figure out how to apply a rate limit *only to static files* served with try_files (and not for the @app location). Is it possible? Here's my configuration:

limit_req_zone $binary_remote_addr zone=mylimit:10m rate=2r/s;

server {
    listen 80;
    server_name example.com;

    # serve the static file directly if it exists in the public Rails folder...
    try_files $uri @app;

    location @app {
        # ... otherwise send the request to the Rails application
        proxy_pass http://app;
        proxy_redirect off;
    }
}

I know that I can use the following:

limit_req zone=mylimit burst=50 nodelay;

However, where should I put that line? If I put it in the "server" block, then my @app location will also use that setting, which is not what I want... I just want to put a limit specific to files served using try_files. Note that static files can be in any location, even the root folder (e.g. /favicon.ico and many others).

Any help would be greatly appreciated

Thanks
Marco Colli

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From francis at daoine.org Sun Jan 27 21:42:01 2019
From: francis at daoine.org (Francis Daly)
Date: Sun, 27 Jan 2019 21:42:01 +0000
Subject: Rate limiting for try_files
In-Reply-To:
References:
Message-ID: <20190127214201.smjiaskplafa47dk@daoine.org>

On Sat, Jan 26, 2019 at 10:05:03PM +0100, Marco Colli wrote:

Hi there,

> I cannot figure out how to apply a rate limit *only to static files* served
> with try_files (and not for @app location). Is it possible?

The request is handled in a location. Only the config in, or inherited in to, that location matters.

For try_files files, the location is the one that the try_files is in.

> server {
> try_files $uri @app;

Does it break anything to put that inside "location / {}"?

> location @app {

> I know that I can use the following:
> limit_req zone=mylimit burst=50 nodelay;
>
> However where should I put that line?

In the location that you want it to apply.

> If I put it in the "server", then
> also my @app will use that settings, which is not what I want... I just
> want to put a limit specific for files served using try_files. Note that
> static files can be in any location, even the root folder (e.g.
> /favicon.ico and many others).

Put it in each of those location{}s too.
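A variation on the same theme, sketched below (the extension list and zone name are illustrative): derive the limit key from the URI, so that only static-looking requests are ever counted. Requests whose key evaluates to an empty string are not accounted by limit_req_zone. Note this matches on URI shape, not on whether the file actually exists on disk.

```nginx
# Sketch: only URIs that look like static files get a non-empty key;
# requests with an empty key are never rate-limited.
map $uri $static_limit_key {
    ~\.(?:ico|css|js|png|jpe?g|gif|svg|woff2?)$  $binary_remote_addr;
    default                                      "";
}

limit_req_zone $static_limit_key zone=staticlimit:10m rate=2r/s;

server {
    listen 80;
    server_name example.com;

    limit_req zone=staticlimit burst=50 nodelay;

    location / {
        try_files $uri @app;
    }

    location @app {
        proxy_pass http://app;
        proxy_redirect off;
    }
}
```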
> Any help would be greatly appreciated

The other option is to apply the limit_req at server level, so that it will inherit in to all location{}s that do not have their own limit_req directive. And then add a "no limit" limit_req directive in those location{}s.

I think that you can do *that*, by adding something like

limit_req_zone "" zone=off:32k rate=7r/m;

(where "" is a key that will always be empty and so will not limit anything; "off" is a string to use as the zone name; "32k" is the minimum size allowed on my test machine (8x page size); and the "rate" is something that is odd enough that you will hopefully recognise if things go wrong and this *does* ever start limiting anything.)

and then adding "limit_req zone=off;" in the appropriate places.

f
--
Francis Daly francis at daoine.org

From info at digitalkube.com Mon Jan 28 16:25:37 2019
From: info at digitalkube.com (Ritesh Saini)
Date: Mon, 28 Jan 2019 21:55:37 +0530
Subject: Random 404 Errors
Message-ID: <9BB678BE-20BD-4544-9234-10B30FC52ADE@digitalkube.com>

Hello,

My website https://www.digitalkube.com is returning 404 errors randomly. It's hosted on a DigitalOcean droplet with an nginx server. What could be the possible reason?

Please suggest a fix.

Regards,
Ritesh Saini

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lists at lazygranch.com Mon Jan 28 16:47:22 2019
From: lists at lazygranch.com (Gary)
Date: Mon, 28 Jan 2019 08:47:22 -0800
Subject: Random 404 Errors
In-Reply-To: <9BB678BE-20BD-4544-9234-10B30FC52ADE@digitalkube.com>
Message-ID:

An HTML attachment was scrubbed...
URL:

From peter_booth at me.com Mon Jan 28 22:28:21 2019
From: peter_booth at me.com (Peter Booth)
Date: Mon, 28 Jan 2019 17:28:21 -0500
Subject: Random 404 Errors
In-Reply-To:
References:
Message-ID:

Open this and you will see that a request to https://digitalkube.com/ returns a 301 pointing back to itself.
Check your CDN configuration:

https://redbot.org/?uri=https%3A%2F%2Fdigitalkube.com%2F

Sent from my iPhone

> On Jan 28, 2019, at 11:47 AM, Gary wrote:
>
> Log files? Nginx.conf file? You need to provide something to analyze.
>
> Obviously this has to be 404 failures on resources you actually have.
>
> I wouldn't rule out file permission issues.
>
> I run two websites on a DO centos droplet. All my problems are self inflicted. ;-)
>
> From: info at digitalkube.com
> Sent: January 28, 2019 8:25 AM
> To: nginx at nginx.org
> Reply-to: nginx at nginx.org
> Subject: Random 404 Errors
>
> Hello,
>
> My website https://www.digitalkube.com is returning 404 errors randomly. It's hosted on a DigitalOcean droplet with nginx server. What could be the possible reason?
>
> Please suggest me a fix.
>
> Regards,
> Ritesh Saini
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Tue Jan 29 05:37:59 2019
From: nginx-forum at forum.nginx.org (Pete1103)
Date: Tue, 29 Jan 2019 00:37:59 -0500
Subject: http streaming with nginx module
Message-ID:

Hello all,

I tried to make a third-party module in nginx in order to serve a TS stream from shared memory. However, my nginx handler function outputs a FILE_SIZE TS stream all at once. I want to enable "Transfer-Encoding: chunked" in this function, so it can continuously send data to a client over a single HTTP connection that remains open indefinitely. Can the function ngx_http_output_filter(r, &out) achieve this goal, or should I use another function?
nginx version: 1.15.7

Here is my nginx handler code:

static ngx_int_t
ngx_http_ts_handler(ngx_http_request_t *r)
{
    ngx_int_t    rc;
    ngx_buf_t   *b;
    ngx_chain_t  out;
    char         uri[50] = {0};
    static char  ngx_ts_data[FILE_SIZE] = {0};

    if (!(r->method & (NGX_HTTP_GET | NGX_HTTP_PUT))) {
        return NGX_HTTP_NOT_ALLOWED;
    }

    rc = ngx_http_discard_request_body(r);
    if (rc != NGX_OK)
        return rc;

    /* Set the Content-Type header. */
    r->headers_out.content_type.len = sizeof(TS_TYPE) - 1;
    r->headers_out.content_type.data = (u_char *) TS_TYPE;

    /* Allocate a new buffer for sending out the reply. */
    b = ngx_pcalloc(r->pool, sizeof(ngx_buf_t));

    /* Insertion in the buffer chain. */
    out.buf = b;
    out.next = NULL;

    memcpy(&uri, r->uri_start, r->uri_end - (r->uri_start));

    /* make http header */
    if (!strncmp(uri, "/VideoInput/", 12)) {
        int shm_fd;
        uint32_t *pos_hdr;
        int pos;
        int cur_loc;
        char *fifo_start;
        int remain = 0;

        shm_fd = open_output_shm(SHM_NAME);
        g_shm_ptr = (char *) mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE,
                                  MAP_SHARED, shm_fd, 0);
        close(shm_fd);

        pos_hdr = (uint32_t *) g_shm_ptr;
        //chn = (int)(*pos_hdr & CHN_MASK);
        pos = (int)(*pos_hdr & POS_MASK);

        if (pos > (PACKET_SIZE * 100)) {
            cur_loc = pos - (PACKET_SIZE * 100);
        } else
            cur_loc = 0;

        fifo_start = (char *)(pos_hdr + 1);

        if (fifo_start[cur_loc] != 0x47) {
            //json_object_set_new(jret, "TS", json_string("non-sync"));
            memset(ngx_ts_data, 0, strlen(ngx_ts_data));
            memcpy(ngx_ts_data, "TS non-sync", 11);
            b->pos = (u_char *) ngx_ts_data;
            b->last = (u_char *) ngx_ts_data + strlen(ngx_ts_data);
        } else {
            if (pos > FILE_SIZE) {
                b->pos = (u_char *)(g_shm_ptr + 4);
                b->last = (u_char *)(g_shm_ptr + 4) + FILE_SIZE;
            } else {
                remain = FILE_SIZE - pos;
                memset(ngx_ts_data, 0, sizeof(ngx_ts_data) - 1);
                memcpy(ngx_ts_data, g_shm_ptr + (SHM_SIZE - remain), remain);
                memcpy(&ngx_ts_data[remain], g_shm_ptr + 4, pos);
                b->pos = (u_char *) ngx_ts_data;
                b->last = (u_char *) ngx_ts_data + sizeof(ngx_ts_data) - 1;
            }
        }
    } else {
        //json_object_set_new(jret, "TS", json_string("Nothing"));
        memset(ngx_ts_data, 0, strlen(ngx_ts_data));
        memcpy(ngx_ts_data, "No", 2);
        b->pos = (u_char *) ngx_ts_data;
        b->last = (u_char *) ngx_ts_data + strlen(ngx_ts_data);
    }

    b->memory = 1;   /* content is in read-only memory */
    b->last_buf = 1; /* there will be no more buffers in the request */

    /* Sending the headers for the reply. */
    r->headers_out.status = NGX_HTTP_OK;  /* 200 status code */
    rc = ngx_http_send_header(r);         /* Send the headers */
    if (rc == NGX_ERROR || rc > NGX_OK || r->header_only) {
        return rc;
    }

    /* Send the body, and return the status code of the output filter chain. */
    return ngx_http_output_filter(r, &out);
}

Any help would be appreciated,
Pete

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282849,282849#msg-282849

From thresh at nginx.com Wed Jan 30 10:14:51 2019
From: thresh at nginx.com (Konstantin Pavlov)
Date: Wed, 30 Jan 2019 13:14:51 +0300
Subject: Linux packages: Alpine Linux added
Message-ID: <3fd90bbb-1c77-2aca-ab46-3b699cee1235@nginx.com>

Hello,

As a part of our continued effort to bring nginx to new platforms and operating systems, I'd like to inform you that nginx.org prebuilt binary packages are now available for Alpine Linux.

You can read more about setting the repository up and installing the packages on https://nginx.org/en/linux_packages.html#Alpine.

The packaging source code is available under http://hg.nginx.org/pkg-oss/file/default/alpine.
Enjoy,

--
Konstantin Pavlov
https://www.nginx.com/

From postmaster at hubject.net Wed Jan 30 10:58:15 2019
From: postmaster at hubject.net (Postmaster)
Date: Wed, 30 Jan 2019 11:58:15 +0100
Subject: unsuscribe
In-Reply-To: <3fd90bbb-1c77-2aca-ab46-3b699cee1235@nginx.com>
References: <3fd90bbb-1c77-2aca-ab46-3b699cee1235@nginx.com>
Message-ID: <3B3BABC8-F938-400D-B7F6-C47260783DB3@hubject.net>

From nginx-forum at forum.nginx.org Wed Jan 30 21:58:29 2019
From: nginx-forum at forum.nginx.org (brianv0)
Date: Wed, 30 Jan 2019 16:58:29 -0500
Subject: Multiple Set-Cookie headers with auth_request
Message-ID:

Hi,

I've run into an issue where a cookie I'm sending back to a browser is too large (> 4kB), so it's being split into two cookies. This results in two `Set-Cookie` headers. Unfortunately, it seems that `auth_request_set` doesn't work well with this, because you only get the first header. Is there a way around this?

I saw the var `$upstream_cookie_[name]`, which might work (though awkwardly - you'd need to use that once for every possible cookie part), and the other idea I had was maybe using an `access_by_lua` script to try to fix this in some way, but I'm not sure how I'd go about that either.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282868,282868#msg-282868

From alec.muffett at gmail.com Thu Jan 31 21:06:10 2019
From: alec.muffett at gmail.com (Alec Muffett)
Date: Thu, 31 Jan 2019 21:06:10 +0000
Subject: Matching & Acting upon Headers received from Upstream with proxy_pass
Message-ID:

Hello All!

I'm running a reverse proxy and I want to trap when upstream is sending me:

Content-Encoding: gzip

...and on those occasions return (probably) 406 downstream to the client; the reason for this is that I am always using:

proxy_set_header Accept-Encoding "identity";

...so the upstream should *never* send me gzip/etc; but sometimes it does so because of errors with CDN configuration and "Vary:" headers, and that kind of thing.
I would like to make the situation more obvious and easier to detect. I have been trying solutions like:

if ( $upstream_http_content_encoding ~ /gzip/ ) { return 406; }

and:

map $upstream_http_content_encoding $badness {
    br 1;
    compress 1;
    deflate 1;
    gzip 1;
    identity 0;
    default 0;
}
...
server {
    ...
    if ($badness) { return 406; }

...but nothing is working like I had hoped, I suspect because I do not know if/where to place the if-statement such that $upstream_http_content_encoding is both set and valid during an appropriate processing phase.

The most annoying thing is that I can see that the $upstream_http_content_encoding variable is set to "gzip", because if I do:

more_set_headers "Foo: /$upstream_http_content_encoding/";

...then I can see the "Foo: /gzip/" value on the client; but that does not help me do what I want.

Can anyone suggest a route forward, please?

-a
--
http://dropsafe.crypticide.com/aboutalecm

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org Thu Jan 31 22:25:00 2019
From: nginx-forum at forum.nginx.org (c0nw0nk)
Date: Thu, 31 Jan 2019 17:25:00 -0500
Subject: Nginx access log query string params per line.
Message-ID:

So with the following:

log_format qs "$remote_addr $args";

server {
    server_name NAME;
    access_log /path/to/log qs;
    location / {
        root /path/to/root;
    }
}

If I go to the URL /index.php?query1=param1&query2=param2, the access.log file shows

query1=param1&query2=param2

all on the same line. Is it possible to split these up onto different lines? Example:

query1=param1
query2=param2

etc. I just wanted to see individual query params, not all clustered together on one line where it is a bit unreadable.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,282880,282880#msg-282880
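nginx writes $args as a single field, so the usual approach to the last question above is to split the logged line after the fact with standard tools. A minimal sketch (the log path and field position are assumptions based on the format shown):

```shell
# Turn "query1=param1&query2=param2" into one parameter per line.
printf '%s\n' 'query1=param1&query2=param2' | tr '&' '\n'
# -> query1=param1
# -> query2=param2

# For a real log you might pipe the $args field through the same filter,
# e.g.:  awk '{print $2}' /path/to/log | tr '&' '\n'
```

This keeps the log format compact on disk while still letting you read one parameter per line when inspecting it.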