From eylonsa at securingsam.com Sun Sep 2 14:01:19 2018 From: eylonsa at securingsam.com (Eylon Saadon) Date: Sun, 2 Sep 2018 17:01:19 +0300 Subject: mirror delay In-Reply-To: <20180830142801.GG43355@Romans-MacBook-Air.local> References: <20180830135227.GF43355@Romans-MacBook-Air.local> <20180830142801.GG43355@Romans-MacBook-Air.local> Message-ID: HI, when adding keepalive_timeout 0; to the main location it works fine. even if the mirrored location doesn't respond immediately the latency doesn't go up. is this the solution for the issue or just a way to understand the issue? Thanks, Eylon Saadon On Thu, Aug 30, 2018 at 5:28 PM Roman Arutyunyan wrote: > Hi, > > On Thu, Aug 30, 2018 at 05:19:53PM +0300, Eylon Saadon wrote: > > hi, > > thanks for the quick response! > > I'm not using sendfile or tcp_nopush. > > just to make sure. I should disable the keepalive for the mirror > location. > > and do it like so? > > No, for the primary location. This will help us understand the reason > why you have the delay. > > > server { > > > > resolver 8.8.8.8; > > > > listen 80; > > > > location / { > > proxy_set_header Host $host; > > proxy_pass http://server:9000; > > } > > > > location /mirror { > > internal; > > keepalive_timeout 0; > > proxy_pass https://test_server$request_uri; > > } > > } > > > > > > Thanks, > > > > eylon saadon > > > > > > On Thu, Aug 30, 2018 at 4:52 PM Roman Arutyunyan wrote: > > > > > Hi, > > > > > > On Thu, Aug 30, 2018 at 04:34:29PM +0300, Eylon Saadon wrote: > > > > Hi, > > > > I'm using the mirror module in my "production" nginx in order to > mirror > > > > real traffic to a test envrionment. > > > > I don't want this mirroring to affect the latency of the production > > > > environment, but it looks like the nginx is waiting for the response > from > > > > the test environment. > > > > is there a way to avoid this? I just want the request to get to the > test > > > > environment and let it process it. 
but it shouldn't wait for the > > > response > > > > from the test environment in order to respond to the request > > > > > > Usually a mirror subrequest does not affect the main request. However > > > there > > > are two issues with mirroring: > > > > > > - the next request on the same connection will not be processed until > all > > > mirror subrequests finish. Try disabling keepalive and see if it > helps. > > > > > > - if you use sendfile and tcp_nopush, it's possible that the response > is > > > not > > > pushed properly because of a mirror subrequest, which may result in a > > > delay. > > > Turn off sendfile and see if it helps. > > > > > > -- > > > Roman Arutyunyan > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > -- > > Thanks, > > Eylon Saadon > > > _______________________________________________ > > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Thanks, Eylon Saadon -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan at wikiloc.com Mon Sep 3 06:13:11 2018 From: ivan at wikiloc.com (Ivan Bianchi) Date: Mon, 3 Sep 2018 08:13:11 +0200 Subject: Rewrite with number after hyphen Message-ID: Hi, I detected an issue with my rewrite rule in the nginx.conf and I don't understand why it happens and how to fix it. I have tested in two environments with versions 1.10.3 and 1.14.0.
I have the following simple conf, where the regex is intended to catch everything:

> location /foo {
> rewrite /foo/(.*) /web/foo.do?a=$1 last;
> }

OK:

> https://www.test.com/foo/asdf
> https://www.test.com/foo/asdf-asdf
> https://www.test.com/foo/asdf12
> https://www.test.com/foo/asdf12-asdf
> https://www.test.com/foo/12
> https://www.test.com/foo/-12

KO:

> https://www.test.com/foo/asdf-12
> https://www.test.com/foo/asdf-12-asdf

This is a PCRE regex, and it works in all cases on the common online regex testers, but not in nginx. Why does the regex stop working if I put a number after a hyphen?

Many thanks, -- Ivan Bianchi Wikiloc -------------- next part -------------- An HTML attachment was scrubbed... URL: From arut at nginx.com Mon Sep 3 11:03:23 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Mon, 3 Sep 2018 14:03:23 +0300 Subject: mirror delay In-Reply-To: References: <20180830135227.GF43355@Romans-MacBook-Air.local> <20180830142801.GG43355@Romans-MacBook-Air.local> Message-ID: <20180903110323.GL43355@Romans-MacBook-Air.local> Hi, On Sun, Sep 02, 2018 at 05:01:19PM +0300, Eylon Saadon wrote: > Hi, > when adding > keepalive_timeout 0; > to the main location it works fine. > even if the mirrored location doesn't respond immediately the latency > doesn't go up. > > is this the solution for the issue or just a way to understand the issue? If you are ok with no keepalive for your clients, then you can use this configuration. Delayed processing of the next request is a known side-effect of how mirroring is implemented in nginx, and this is unlikely to change. The point was to make sure this was actually the case. > Thanks, > Eylon Saadon > > On Thu, Aug 30, 2018 at 5:28 PM Roman Arutyunyan wrote: > > > Hi, > > > > On Thu, Aug 30, 2018 at 05:19:53PM +0300, Eylon Saadon wrote: > > > hi, > > > thanks for the quick response! > > > I'm not using sendfile or tcp_nopush. > > > just to make sure. I should disable the keepalive for the mirror > > location. > > > and do it like so?
> > > > No, for the primary location. This will help us understand the reason > > why you have the delay. > > > > > server { > > > > > > resolver 8.8.8.8; > > > > > > listen 80; > > > > > > location / { > > > proxy_set_header Host $host; > > > proxy_pass http://server:9000; > > > } > > > > > > location /mirror { > > > internal; > > > keepalive_timeout 0; > > > proxy_pass https://test_server$request_uri; > > > } > > > } > > > > > > > > > Thanks, > > > > > > eylon saadon > > > > > > > > > On Thu, Aug 30, 2018 at 4:52 PM Roman Arutyunyan wrote: > > > > > > > Hi, > > > > > > > > On Thu, Aug 30, 2018 at 04:34:29PM +0300, Eylon Saadon wrote: > > > > > Hi, > > > > > I'm using the mirror module in my "production" nginx in order to > > mirror > > > > > real traffic to a test envrionment. > > > > > I don't want this mirroring to affect the latency of the production > > > > > environment, but it looks like the nginx is waiting for the response > > from > > > > > the test environment. > > > > > is there a way to avoid this? I just want the request to get to the > > test > > > > > environment and let it process it. but it shouldn't wait fo r the > > > > response > > > > > from the test environment in order to respond to the request > > > > > > > > Usually a mirror subrequest does not affect the main request. However > > > > there > > > > are two issues with mirroring: > > > > > > > > - the next request on the same connection will not be processed until > > all > > > > mirror subrequests finish. Try disabling keepalive and see if it > > helps. > > > > > > > > - if you use sendfile and tcp_nopush, it's possible that the response > > is > > > > not > > > > pushed properly because of a mirror subrequest, which may result in a > > > > delay. > > > > Turn off sendfile and see if it helps. 
> > > > > > > > -- > > > > Roman Arutyunyan > > > > _______________________________________________ > > > > nginx mailing list > > > > nginx at nginx.org > > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > > > > > > > > -- > > > Thanks, > > > Eylon Saadon > > > > > _______________________________________________ > > > nginx mailing list > > > nginx at nginx.org > > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > > -- > > Roman Arutyunyan > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > Thanks, > Eylon Saadon > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From francis at daoine.org Mon Sep 3 12:36:30 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 3 Sep 2018 13:36:30 +0100 Subject: Rewrite with number after hyphen In-Reply-To: References: Message-ID: <20180903123630.GH3537@daoine.org> On Mon, Sep 03, 2018 at 08:13:11AM +0200, Ivan Bianchi wrote: Hi there, > > location /foo { > > rewrite /foo/(.*) /web/foo.do?a=$1 last; > > } This seems to work as expected for me, using nginx/1.14.0. > KO: > > > https://www.test.com/foo/asdf-12 Why do you think it does not work? What is the input/output/expected output? For example, if you add the new location location = /web/foo.do { return 200 "$uri$is_args$args\n"; } and repeat the tests, do you see any difference in output? > Why if I put a number after a hyphen the regex stops working? My guesses are: * you have another location{} that you have configured to match those requests, so your shown location{} is not involved or * your /web/foo.do location-handler handles those requests differently. 
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Mon Sep 3 13:26:42 2018 From: nginx-forum at forum.nginx.org (prajos) Date: Mon, 03 Sep 2018 09:26:42 -0400 Subject: add checksum to nginx log entries Message-ID: Hi, I'm wondering if there is a ready way to add a checksum (e.g. CRC) to the end of each log entry before they get written to the "access" or "error" log files? One of the projects I work on wants each log line to have its own checksum for some integrity checks. Any hint on how I could implement this would be of great help. Thanks, Cheers, Prasanna Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281078,281078#msg-281078 From nginx-forum at forum.nginx.org Mon Sep 3 20:07:49 2018 From: nginx-forum at forum.nginx.org (c0mputerking) Date: Mon, 03 Sep 2018 16:07:49 -0400 Subject: reverse proxy multiple subdomains problems Message-ID: <44e2679fbb64c53e98653e5e0bc5611f.NginxMailingListEnglish@forum.nginx.org> I am trying to do a redirect from http and a reverse proxy to my apache web server. I would like to include several subdomains; they all have DNS records and apache virtual hosts set up on the other end. However, no matter which of the 3 subdomains I try, I always end up at https://my-site.com. This is fine for www.my-site.com, but recipes.my-site.com is supposed to be a different website altogether.
I am new with nginx and have a hunch that it may have something to do with $server_name$request_uri not being the right option in my case, but I'm not sure. See config below:

server {
    listen 172.16.0.10:80;
    server_name my-site.com www.my-site.com recipes.my-site.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 172.16.0.10:443 ssl;
    server_name my-site.com www.my-site.com recipes.my-site.com;
    access_log /var/log/nginx/van-ginneken.com-access.log;
    ssl_certificate /root/SYNC-certs/van-ginneken.com/fullchain.pem;
    ssl_certificate_key /root/SYNC-certs/van-ginneken.com/privkey.pem;
    set $upstream 172.16.0.13;

    location / {
        proxy_pass_header Authorization;
        proxy_pass https://$upstream;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
        proxy_buffering off;
        client_max_body_size 0;
        proxy_read_timeout 36000s;
        proxy_redirect off;
        proxy_ssl_session_reuse off;
    }
}

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281086,281086#msg-281086 From nginx-forum at forum.nginx.org Mon Sep 3 20:13:51 2018 From: nginx-forum at forum.nginx.org (petecooper) Date: Mon, 03 Sep 2018 16:13:51 -0400 Subject: Set `expires` by MIME type Message-ID: Hello. I am attempting to use `expires` on Nginx 1.15.3 to define the expiry of files on a per MIME type basis.
I have used [1] as a base, and constructed the following `map` in the `http` section of a `include`-d `server` block (domain sanitised):

map $sent_http_content_type $www_example_com_expires {
    default 1M;
    application/atom+xml 1h;
    application/javascript 1M;
    application/json 0s;
    application/ld+json 0s;
    application/manifest+json 1w;
    application/rdf+xml 1h;
    application/rss+xml 1h;
    application/schema+json 0s;
    application/x-javascript 1M;
    application/xml 0s;
    font/woff 1M;
    image/gif 1M;
    image/jpeg 1M;
    image/png 1M;
    image/svg+xml 1M;
    image/vnd.microsoft.icon 1M;
    image/webp 1M;
    image/x-icon 1M;
    text/cache-manifest 0s;
    text/css 1M;
    text/html 0s;
    text/javascript 1M;
    text/x-cross-domain-policy 1w;
    text/xml 0s;
    video/mp4 1M;
    video/webm 1M;
}

Later on, after the `map` is defined, I call it using `expires` in a `server` block:

server {#IPv4 and IPv6, https, PHP fastcgi, check https://cipherli.st for latest ciphers
    access_log /var/log/nginx/www.example.com.access.log ipscrubbed;
    add_header Access-Control-Allow-Origin "https://*.example.com";
    add_header Content-Security-Policy "default-src 'self'; connect-src 'self' https://api.github.com; font-src 'self'; img-src 'self' data: * https://*; media-src 'self' * https://*; style-src 'self' 'unsafe-in$
    add_header Expect-CT "max-age=0; report-uri=https://example.com/expect-ct-report";
    add_header Feature-Policy "camera 'self'; geolocation 'none'; microphone 'none'; midi 'none'; payment 'none'";
    add_header Referrer-Policy strict-origin;
    add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload";
    add_header X-Content-Type-Options nosniff;
    add_header X-Frame-Options DENY;
    add_header X-XSS-Protection "1; mode=block";
    error_log /var/log/nginx/www.example.com.error.log crit;
    etag off;
    expires $www_example_com_expires;
    index index.html index.php;
    listen [::]:443 http2 ssl;
    listen 443 http2 ssl;
    [...]
}

My config passes the `nginx -t` self-test with no errors, and I can restart Nginx without issue.
In the browser inspector, all MIME types are assigned a 1 month expiry, as if they're inheriting the `default` value from the map. Example headers for a .php file:

Date: Mon, 03 Sep 2018 20:09:30 GMT
Expires: Wed, 03 Oct 2018 20:09:30 GMT

If I remove the `expires` directive, the 'Expires:' header is not shown, so `expires` is doing *something*. I suspect my syntax is wrong, and I would be very grateful for any feedback -- I am particularly interested in a clue or pointer to aid my research into why this is not working. Thank you for your attention and interest. [1] http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281087,281087#msg-281087 From francis at daoine.org Tue Sep 4 07:07:35 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Sep 2018 08:07:35 +0100 Subject: reverse proxy multiple subdomains problems In-Reply-To: <44e2679fbb64c53e98653e5e0bc5611f.NginxMailingListEnglish@forum.nginx.org> References: <44e2679fbb64c53e98653e5e0bc5611f.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180904070735.GI3537@daoine.org> On Mon, Sep 03, 2018 at 04:07:49PM -0400, c0mputerking wrote: Hi there, > I am new with nginx and have a hunch that it may have something to do with > $server_name$request_uri not being the right option in my case but i'm not > sure see config below Correct. You probably want $host instead of $server_name. http://nginx.org/r/$server_name http://nginx.org/r/$host f -- Francis Daly francis at daoine.org From francis at daoine.org Tue Sep 4 07:21:10 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Sep 2018 08:21:10 +0100 Subject: add checksum to nginx log entries In-Reply-To: References: Message-ID: <20180904072110.GJ3537@daoine.org> On Mon, Sep 03, 2018 at 09:26:42AM -0400, prajos wrote: Hi there, > I'm wondering if there is a ready way to add a checksum (e.g.
CRC) to the > end of each log entry before they get written to the "access" or "error" log > files? I believe that stock nginx does not include a way to do that. > One of the project I work on wants each log line to have its own > checksum for some integrity checks. What kind of corruption is the checksum intended to protect against? The answer to that might help determine a suitable design for a solution for you. > Any hint on how I can implement these would be of great help. If it is acceptable to be outside nginx, you could have something that post-processed the logs to add whatever marks you want. Or maybe you could write nginx logs to a named pipe, and have another process do whatever you want before writing to "real" disk. Good luck with it, f -- Francis Daly francis at daoine.org From r1ch+nginx at teamliquid.net Tue Sep 4 11:08:15 2018 From: r1ch+nginx at teamliquid.net (Richard Stanway) Date: Tue, 4 Sep 2018 13:08:15 +0200 Subject: Set `expires` by MIME type In-Reply-To: References: Message-ID: I recently implemented something similar, and one issue I ran into was that $sent_http_content_type doesn't always map to a mime type. For example, "Content-Type: text/html" would match mime type text/html, but "Content-Type: text/html; charset=utf-8" would match only the default. You need to add all Content-Type header variants to your map for it to work as expected, or use a regular expression.
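A regex-based version of such a map might look like the sketch below (the expiry values are illustrative, reusing the variable name from earlier in the thread; in nginx a map key starting with `~` is treated as a case-sensitive regular expression, so matching on the media-type prefix also covers the parameterised variants):

```nginx
map $sent_http_content_type $www_example_com_expires {
    default                    1M;
    # "~" makes the key a regex; "^text/html" matches both
    # "text/html" and "text/html; charset=utf-8"
    ~^text/html                0s;
    ~^text/css                 1M;
    ~^application/json         0s;
    ~^application/javascript   1M;
}
```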
> > I have used [1] as a base, and constructed the following `map` in the > `http` > section of a `include`-d `server` block (domain sanitised): > > map $sent_http_content_type $www_example_com_expires { > default 1M; > application/atom+xml 1h; > application/javascript 1M; > application/json 0s; > application/ld+json 0s; > application/manifest+json 1w; > application/rdf+xml 1h; > application/rss+xml 1h; > application/schema+json 0s; > application/x-javascript 1M; > application/xml 0s; > font/woff 1M; > image/gif 1M; > image/jpeg 1M; > image/png 1M; > image/svg+xml 1M; > image/vnd.microsoft.icon 1M; > image/webp 1M; > image/x-icon 1M; > text/cache-manifest 0s; > text/css 1M; > text/html 0s; > text/javascript 1M; > text/x-cross-domain-policy 1w; > text/xml 0s; > video/mp4 1M; > video/webm 1M; > } > > Later on, after the `map` is defined, I call it using `expires` in a > `server` block: > > server {#IPv4 and IPv6, https, PHP fastcgi, check https://cipherli.st > for latest ciphers > access_log /var/log/nginx/www.example.com.access.log ipscrubbed; > add_header Access-Control-Allow-Origin "https://*.example.com"; > add_header Content-Security-Policy "default-src 'self'; connect-src > 'self' https://api.github.com; font-src 'self'; img-src 'self' data: * > https://*; media-src 'self' * https://*; style-src 'self' 'unsafe-in$ > add_header Expect-CT "max-age=0; > report-uri=https://example.com/expect-ct-report"; > add_header Feature-Policy "camera 'self'; geolocation 'none'; > microphone 'none'; midi 'none'; payment 'none'"; > add_header Referrer-Policy strict-origin; > add_header Strict-Transport-Security "max-age=15768000; > includeSubDomains; preload"; > add_header X-Content-Type-Options nosniff; > add_header X-Frame-Options DENY; > add_header X-XSS-Protection "1; mode=block"; > error_log /var/log/nginx/www.example.com.error.log crit; > etag off; > expires $www_example_com_expires; > index index.html index.php; > listen [::]:443 http2 ssl; > listen 443 http2 ssl; > [...] 
> } > > My config passes the `nginx -t` self-test with no errors, and I can restart > Nginx without issue. > > In the browser inspector, all MIME types are assigned a 1 month expiry, as > if they're inheriting the `default` value from the map. Example headers for > a .php file: > > Date: Mon, 03 Sep 2018 20:09:30 GMT > Expires: Wed, 03 Oct 2018 20:09:30 GMT > > If I remove the `expires` directive, the 'Expires:' header is not shown, so > `expires` is doing *something*. > > I suspect my syntax is wrong, and I would be very grateful for any feedback > -- I am particularly interested a clue or pointer to aid my research into > why this is not working. > > Thank you for your attention and interest. > > [1] http://nginx.org/en/docs/http/ngx_http_headers_module.html#expires > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,281087,281087#msg-281087 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From francis at daoine.org Tue Sep 4 12:20:00 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Sep 2018 13:20:00 +0100 Subject: Set `expires` by MIME type In-Reply-To: References: Message-ID: <20180904122000.GK3537@daoine.org> On Mon, Sep 03, 2018 at 04:13:51PM -0400, petecooper wrote: Hi there, > I am attempting to use `expires` on Nginx 1.15.3 to define the expiry of > files on a per MIME type basis. It seems to work for me: "xml" should have 0s, so now. "rss" should have 1h. "png" should have 1M. 
$ curl -s -i http://127.0.0.1/a.xml | grep '^Content-Type\|^Expires' Content-Type: text/xml Expires: Tue, 04 Sep 2018 12:16:40 GMT $ curl -s -i http://127.0.0.1/a.rss | grep '^Content-Type\|^Expires' Content-Type: application/rss+xml Expires: Tue, 04 Sep 2018 13:16:41 GMT $ curl -s -i http://127.0.0.1/a.png | grep '^Content-Type\|^Expires' Content-Type: image/png Expires: Thu, 04 Oct 2018 12:16:42 GMT > In the browser inspector, all MIME types are assigned a 1 month expiry, as > if they're inheriting the `default` value from the map. Example headers for > a .php file: > > Date: Mon, 03 Sep 2018 20:09:30 GMT > Expires: Wed, 03 Oct 2018 20:09:30 GMT Can you do a test like the above, and show the Content-Type that is received as well? "A .php file" could be anything. Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 4 13:41:44 2018 From: nginx-forum at forum.nginx.org (petecooper) Date: Tue, 04 Sep 2018 09:41:44 -0400 Subject: Set `expires` by MIME type In-Reply-To: <20180904122000.GK3537@daoine.org> References: <20180904122000.GK3537@daoine.org> Message-ID: Francis Daly Wrote: ------------------------------------------------------- > It seems to work for me: > > "xml" should have 0s, so now. > "rss" should have 1h. > "png" should have 1M. > > $ curl -s -i http://127.0.0.1/a.xml | grep '^Content-Type\|^Expires' > Content-Type: text/xml > Expires: Tue, 04 Sep 2018 12:16:40 GMT > > $ curl -s -i http://127.0.0.1/a.rss | grep '^Content-Type\|^Expires' > Content-Type: application/rss+xml > Expires: Tue, 04 Sep 2018 13:16:41 GMT > > $ curl -s -i http://127.0.0.1/a.png | grep '^Content-Type\|^Expires' > Content-Type: image/png > Expires: Thu, 04 Oct 2018 12:16:42 GMT Hello Francis - thank you very much for your sanity check, greatly appreciated. I stripped down and rebuilt the `map` this morning and now I'm seeing successful caching values. 
I am not certain what changed, and a before-and-after diff shows nothing of value, so I must put it down to user error. > Can you do a test like the above, and show the Content-Type that is > received as well? I confirm it's working as expected now, Content-Type and Expires are received correctly. Thank you for your time and attention. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281087,281094#msg-281094 From francis at daoine.org Tue Sep 4 15:26:40 2018 From: francis at daoine.org (Francis Daly) Date: Tue, 4 Sep 2018 16:26:40 +0100 Subject: Set `expires` by MIME type In-Reply-To: References: <20180904122000.GK3537@daoine.org> Message-ID: <20180904152640.GL3537@daoine.org> On Tue, Sep 04, 2018 at 09:41:44AM -0400, petecooper wrote: Hi there, > I confirm it's working as expected now, Content-Type and Expires is received > correctly. Great that you have it working for you now. Cheers, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Tue Sep 4 16:44:31 2018 From: nginx-forum at forum.nginx.org (systrex) Date: Tue, 04 Sep 2018 12:44:31 -0400 Subject: NGINX Logs balancerWorkerName but NOT balancerName Message-ID: <627a5773cc300b58eaa8defbecfafefd.NginxMailingListEnglish@forum.nginx.org> Hello All, I have a situation where NGINX appears to be logging the balancerWorkerName but NOT the balancerName... The requests are a 404, and the balancerName exists. Any idea why this would be happening? How can you serve a request out of a balancerWorker WITHOUT the balancerName cluster? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281097,281097#msg-281097 From jsbilgi at yahoo.com Tue Sep 4 18:09:34 2018 From: jsbilgi at yahoo.com (Jagannath Bilgi) Date: Tue, 4 Sep 2018 18:09:34 +0000 (UTC) Subject: Reverse proxy References: <1092622044.966485.1536084574274.ref@mail.yahoo.com> Message-ID: <1092622044.966485.1536084574274@mail.yahoo.com> Hi All, New to nginx and reverse proxies. Trying to set up a reverse proxy using nginx and docker.
Have defined upstream in nginx config and created dependency in docker-compose as well. However getting timeout error. Attached docker-compose and nginx.conf file for reference. Below is the error message.

Note: However, I am able to get the pages directly using the links below: http://178.128.159.51:81, http://178.128.159.51:82, http://178.128.159.51:83, http://178.128.159.51:100/login

2018/09/02 13:29:20 [error] 10#10: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 103.252.27.57, server: , request: "GET / HTTP/1.1", upstream: "http://178.128.159.51:81/", host: "178.128.159.51"
2018/09/02 13:30:21 [error] 10#10: *1 upstream timed out (110: Connection timed out) while connecting to upstream, client: 103.252.27.57, server: , request: "GET /favicon.ico HTTP/1.1", upstream: "http://178.128.159.51:81/favicon.ico", host: "178.128.159.51", referrer: "http://178.128.159.51/"
2018/09/02 13:30:50 [error] 12#12: *6 upstream timed out (110: Connection timed out) while connecting to upstream, client: 103.252.27.57, server: , request: "GET /topic HTTP/1.1", upstream: "http://178.128.159.51:82/", host: "178.128.159.51"
2018/09/02 13:32:07 [error] 7#7: *9 upstream timed out (110: Connection timed out) while connecting to upstream, client: 103.252.27.57, server: , request: "GET /wb HTTP/1.1", upstream: "http://178.128.159.51:100/login", host: "178.128.159.51"
2018/09/02 13:35:21 [error] 8#8: *14 upstream timed out (110: Connection timed out) while connecting to upstream, client: 103.252.27.57, server: , request: "GET /admin HTTP/1.1", upstream: "http://178.128.159.51:83/", host: "178.128.159.51"

Please advise what I am missing. Thanks and regards Jagannath S Bilgi -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: nginx.conf Type: application/octet-stream Size: 1774 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: docker-compose.yml Type: application/octet-stream Size: 1042 bytes Desc: not available URL: From nginx-forum at forum.nginx.org Tue Sep 4 18:30:18 2018 From: nginx-forum at forum.nginx.org (Frank_Mascarell) Date: Tue, 04 Sep 2018 14:30:18 -0400 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate Message-ID: <1eeee126f25389d8beb15e92e7ffc3e6.NginxMailingListEnglish@forum.nginx.org> I have a VPS on Digital Ocean with Ubuntu 18.04, Nginx, Gunicorn, Django, and a test web application, all configured (ufw) to work with http: 80. Everything works perfectly. Tutorial: https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04#configure-nginx-to-proxy-pass-to-gunicorn Now I modify the file /sites-available/LibrosWeb to allow SSL traffic with a self-signed certificate, since I do not have a domain. Tutorial: https://www.digitalocean.com/community/tutorials/how-to-create-a-self-signed-ssl-certificate-for-nginx-in-ubuntu-18-04 Result "Error 502 Bad Gateway". 
This is the initial code that works well with http on port 80:

server{ #Configuracion http
    listen 80;
    listen [::]:80;
    server_name 15.15.15.15;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /robots.txt {
        alias /var/www/LibrosWeb/robots.txt;
    }

    location /static/ {
        root /home/gela/LibrosWeb;
    }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}

And this is the code to allow SSL (error 502):

server{ #Configuracion SSL
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 15.15.15.15;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /robots.txt {
        alias /var/www/LibrosWeb/robots.txt;
    }

    location /static/ {
        root /home/gela/LibrosWeb;
    }

    location / {
        include proxy_params;
        proxy_pass https://unix:/run/gunicorn.sock;
    }
}

server{ #Configuracion http
    listen 80;
    listen [::]:80;
    server_name 15.15.15.15;
    return 302 https://15.15.15.15$request_uri;
}

UFW configured as:

80,443/tcp (Nginx Full)      ALLOW IN  Anywhere
80,443/tcp (Nginx Full (v6)) ALLOW IN  Anywhere (v6)

The files /etc/nginx/snippets/self-signed.conf and /etc/nginx/snippets/ssl-params.conf are the same as those in the tutorial.
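One detail worth checking in the SSL configuration above: `proxy_pass https://unix:/run/gunicorn.sock;` makes nginx attempt a TLS handshake against the gunicorn socket. In the referenced tutorial gunicorn serves plain HTTP on that socket, and speaking TLS to a plain-HTTP upstream is a classic cause of a 502. Since TLS is already terminated by nginx on `listen 443 ssl`, the upstream line can stay `http://` — a sketch, assuming the tutorial's gunicorn setup:

```nginx
server{ #Configuracion SSL
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 15.15.15.15;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    location / {
        include proxy_params;
        # nginx terminates TLS; the gunicorn socket speaks plain HTTP
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```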
I've been testing configurations for two days and the most I could get is that it works halfway, that is, I can show the default page of django but not the one of my application, if I put the code like this:

server{ #Configuracion http
    listen 80;
    listen [::]:80;
    server_name 15.15.15.15;
    return 302 https://15.15.15.15$request_uri;

    location = /favicon.ico { access_log off; log_not_found off; }

    location /robots.txt {
        alias /var/www/LibrosWeb/robots.txt;
    }

    location /static/ {
        root /home/gela/LibrosWeb;
    }
}

server{ #Configuracion SSL
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 15.15.15.15;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    location / {
        include proxy_params;
        proxy_pass https://unix:/run/gunicorn.sock;
    }
}

What is wrong, or what is missing?

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281099,281099#msg-281099 From zchao1995 at gmail.com Wed Sep 5 02:27:14 2018 From: zchao1995 at gmail.com (tokers) Date: Tue, 4 Sep 2018 19:27:14 -0700 Subject: Reverse proxy In-Reply-To: <1092622044.966485.1536084574274@mail.yahoo.com> References: <1092622044.966485.1536084574274.ref@mail.yahoo.com> <1092622044.966485.1536084574274@mail.yahoo.com> Message-ID: Hello! Have you tried to check the network connectivity between these containers? The default proxy connect timeout is 60s, which is large enough; this problem should not be caused by Nginx itself. Best Regards Alex Zhang https://github.com/tokers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Wed Sep 5 06:39:26 2018 From: nginx-forum at forum.nginx.org (hhtjim) Date: Wed, 05 Sep 2018 02:39:26 -0400 Subject: if( variable exists ) In-Reply-To: <201204032056.30563.ne@vbart.ru> References: <201204032056.30563.ne@vbart.ru> Message-ID: $arg_proxy ?aaa.com?proxy ``` if ($args ~ '(&|^)proxy([&=]|$)' ) { #exists set $port '8080'; } ``` Posted at Nginx Forum: https://forum.nginx.org/read.php?2,224860,281102#msg-281102 From peter.volkov at gmail.com Wed Sep 5 06:58:54 2018 From: peter.volkov at gmail.com (Peter Volkov) Date: Wed, 5 Sep 2018 09:58:54 +0300 Subject: nginx sends 301 redirect for alias in location Message-ID: Hi. Could you, please, explain. Why nginx sends 301 redirect for the following vhost: server { listen 80; server_name test.domain.tv ; access_log off; location = /test/README.txt { alias /var/www/; } } Here is redirect: $ http http://test.domain.tv/test/README.txt HTTP/1.1 301 Moved Permanently Connection: keep-alive Content-Length: 178 Content-Type: text/html Date: Wed, 05 Sep 2018 06:55:27 GMT Keep-Alive: timeout=20 Location: http://test.domain.tv/test/README.txt/ Server: nginx 301 Moved Permanently

<html>
<head><title>301 Moved Permanently</title></head>
<body bgcolor="white">
<center><h1>301 Moved Permanently</h1></center>
<hr><center>nginx</center>
</body>
</html>
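For reference, the redirect above disappears if the alias target is the file itself rather than a directory. A hedged sketch, assuming the file actually lives at /var/www/README.txt (the original config only implies where it is):

```nginx
server {
    listen 80;
    server_name test.domain.tv;

    access_log off;

    # alias now names the file itself, so nginx serves it directly
    # instead of issuing a directory-style trailing-slash redirect
    location = /test/README.txt {
        alias /var/www/README.txt;
    }
}
```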
-- Peter. -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 5 12:25:37 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 5 Sep 2018 15:25:37 +0300 Subject: nginx sends 301 redirect for alias in location In-Reply-To: References: Message-ID: <20180905122537.GR56558@mdounin.ru> Hello! On Wed, Sep 05, 2018 at 09:58:54AM +0300, Peter Volkov wrote: > Hi. Could you, please, explain. Why nginx sends 301 redirect for the > following vhost: > > server { > listen 80; > server_name test.domain.tv ; > > access_log off; > > location = /test/README.txt { > alias /var/www/; > } > } > > Here is redirect: > > $ http http://test.domain.tv/test/README.txt > HTTP/1.1 301 Moved Permanently > Connection: keep-alive > Content-Length: 178 > Content-Type: text/html > Date: Wed, 05 Sep 2018 06:55:27 GMT > Keep-Alive: timeout=20 > Location: http://test.domain.tv/test/README.txt/ > Server: nginx > > > 301 Moved Permanently > >

> <html>
> <head><title>301 Moved Permanently</title></head>
> <body bgcolor="white">
> <center><h1>301 Moved Permanently</h1></center>
> <hr><center>nginx</center>
> </body>
> </html>
> > You've aliased "/test/README.txt" into a directory "/var/www/". Since the URI "/test/README.txt" does not have a trailing slash, nginx returns a redirect with a trailing slash added, much like it does when requesting a directory without a trailing shash. -- Maxim Dounin http://mdounin.ru/ From ivan at wikiloc.com Wed Sep 5 14:24:42 2018 From: ivan at wikiloc.com (Ivan Bianchi) Date: Wed, 5 Sep 2018 16:24:42 +0200 Subject: Rewrite with number after hyphen In-Reply-To: <20180903123630.GH3537@daoine.org> References: <20180903123630.GH3537@daoine.org> Message-ID: Hi Francis, many thanks for your response and guidelines. Indeed you were right that there was another location capturing the request. Best regards, On Mon, Sep 3, 2018 at 2:36 PM Francis Daly wrote: > On Mon, Sep 03, 2018 at 08:13:11AM +0200, Ivan Bianchi wrote: > > Hi there, > > > > location /foo { > > > rewrite /foo/(.*) /web/foo.do?a=$1 last; > > > } > > This seems to work as expected for me, using nginx/1.14.0. > > > KO: > > > > > https://www.test.com/foo/asdf-12 > > Why do you think it does not work? What is the input/output/expected > output? > > For example, if you add the new location > > location = /web/foo.do { > return 200 "$uri$is_args$args\n"; > } > > and repeat the tests, do you see any difference in output? > > > Why if I put a number after a hyphen the regex stops working? > > My guesses are: > > * you have another location{} that you have configured to match those > requests, so your shown location{} is not involved > > or > > * your /web/foo.do location-handler handles those requests differently. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Ivan Bianchi Wikiloc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Wed Sep 5 15:51:22 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Sep 2018 16:51:22 +0100 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: <1eeee126f25389d8beb15e92e7ffc3e6.NginxMailingListEnglish@forum.nginx.org> References: <1eeee126f25389d8beb15e92e7ffc3e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180905155122.GN3537@daoine.org> On Tue, Sep 04, 2018 at 02:30:18PM -0400, Frank_Mascarell wrote: Hi there, > I have a VPS on Digital Ocean with Ubuntu 18.04, Nginx, Gunicorn, Django, > and a test web application, all configured (ufw) to work with http: 80. > Everything works perfectly. > Now I modify the file /sites-available/LibrosWeb to allow SSL traffic with a > self-signed certificate, since I do not have a domain. > Result "Error 502 Bad Gateway". > This is the initial code that works well with http: 80: > location / { > include proxy_params; > proxy_pass http://unix:/run/gunicorn.sock; > } > And this is the code to allow SSL (error 502): > location / { > include proxy_params; > proxy_pass https://unix:/run/gunicorn.sock; > } Unless you changed something on the gunicorn side, you almost certainly want to use http, not https, to the socket. So change the proxy_pass back to what it was. The first tutorial you linked to does include some troubleshooting tips. If you still have a problem, including the output from the nginx parts of those will probably help the next person. 
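As a minimal sketch of that suggestion, with all names taken from the earlier posts in this thread: TLS terminates in nginx, and the hop to the gunicorn unix socket stays plain HTTP.

```nginx
server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 15.15.15.15;

    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;

    location / {
        include proxy_params;
        # gunicorn speaks plain HTTP on the unix socket; "https://"
        # here makes nginx attempt a TLS handshake against gunicorn,
        # which fails and surfaces as a 502
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```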
Good luck with it, f -- Francis Daly francis at daoine.org From Jason.Whittington at equifax.com Wed Sep 5 18:29:28 2018 From: Jason.Whittington at equifax.com (Jason Whittington) Date: Wed, 5 Sep 2018 18:29:28 +0000 Subject: [IE] Re: Rewrite with number after hyphen In-Reply-To: References: <20180903123630.GH3537@daoine.org> Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A434150FD9@STLEISEXCMBX3.eis.equifax.com> FWIW when I debug this sort of thing I like to emit a response header identifying which rule is routing the request, like this: location /a/ { add_header X-nginx-debug /a/ proxy_pass http://whatever/; } That way you can use F12 tools or some other inspection on the result and see exactly what is triggering. This has saved my bacon more than once ?. Jason > > location /foo { > > rewrite /foo/(.*) /web/foo.do?a=$1 last; > > } From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Ivan Bianchi Sent: Wednesday, September 5, 2018 9:25 AM To: nginx at nginx.org Subject: [IE] Re: Rewrite with number after hyphen Hi Francis, many thanks for your response and guidelines. Indeed you were right that there was another location capturing the request. Best regards, On Mon, Sep 3, 2018 at 2:36 PM Francis Daly > wrote: On Mon, Sep 03, 2018 at 08:13:11AM +0200, Ivan Bianchi wrote: Hi there, > > location /foo { > > rewrite /foo/(.*) /web/foo.do?a=$1 last; > > } This seems to work as expected for me, using nginx/1.14.0. > KO: > > > https://www.test.com/foo/asdf-12 Why do you think it does not work? What is the input/output/expected output? For example, if you add the new location location = /web/foo.do { return 200 "$uri$is_args$args\n"; } and repeat the tests, do you see any difference in output? > Why if I put a number after a hyphen the regex stops working? My guesses are: * you have another location{} that you have configured to match those requests, so your shown location{} is not involved or * your /web/foo.do location-handler handles those requests differently. 
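Jason's debug-header trick from earlier in this message, written out as a complete hedged fragment (note the semicolon after add_header, which the quoted snippet omits; "always" is optional and additionally emits the header on error responses):

```nginx
location /a/ {
    # name the matching location in a response header, visible in
    # the browser's F12 tools; "always" also adds it to error responses
    add_header X-nginx-debug "/a/" always;
    proxy_pass http://whatever/;
}
```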
Good luck with it, f -- Francis Daly francis at daoine.org _______________________________________________ nginx mailing list nginx at nginx.org http://mailman.nginx.org/mailman/listinfo/nginx -- Ivan Bianchi Wikiloc This message contains proprietary information from Equifax which may be confidential. If you are not an intended recipient, please refrain from any disclosure, copying, distribution or use of this information and note that such actions are prohibited. If you have received this transmission in error, please notify by e-mail postmaster at equifax.com. Equifax? is a registered trademark of Equifax Inc. All rights reserved. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 5 20:49:19 2018 From: nginx-forum at forum.nginx.org (Frank_Mascarell) Date: Wed, 05 Sep 2018 16:49:19 -0400 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: <20180905155122.GN3537@daoine.org> References: <20180905155122.GN3537@daoine.org> Message-ID: <84d4023fa9a7b6c8b2f2f55bd17aa690.NginxMailingListEnglish@forum.nginx.org> It has also tried the proxy_passs an http, with the same error. This is like finding a needle in a pocket: stressful and disappointing. root at BaseVPS-ubuntu1804-django20:~# systemctl status gunicorn ? 
gunicorn.service - gunicorn daemon Loaded: loaded (/etc/systemd/system/gunicorn.service; disabled; vendor preset: enabled) Active: failed (Result: exit-code) since Wed 2018-09-05 20:34:38 UTC; 12min ago Process: 8842 ExecStart=/home/gela/.virtualenvs/django20/bin/gunicorn --access-logfile - --workers 3 --bind unix:/run/gunicorn.sock LibrosWeb.wsgi:application (code=exited, status=1/FAILURE) Main PID: 8842 (code=exited, status=1/FAILURE) sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: self.stop() sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: File "/home/gela/.virtualenvs/django20/lib/python3.6/site-packages/gunicorn/arbiter.py", line 393, in stop sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: time.sleep(0.1) sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: File "/home/gela/.virtualenvs/django20/lib/python3.6/site-packages/gunicorn/arbiter.py", line 245, in handle_chld sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: self.reap_workers() sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: File "/home/gela/.virtualenvs/django20/lib/python3.6/site-packages/gunicorn/arbiter.py", line 525, in reap_workers sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: raise HaltServer(reason, self.WORKER_BOOT_ERROR) sep 05 20:34:38 BaseVPS-ubuntu1804-django20 gunicorn[8842]: gunicorn.errors.HaltServer: sep 05 20:34:38 BaseVPS-ubuntu1804-django20 systemd[1]: gunicorn.service: Main process exited, code=exited, status=1/FAILURE sep 05 20:34:38 BaseVPS-ubuntu1804-django20 systemd[1]: gunicorn.service: Failed with result 'exit-code'. root at BaseVPS-ubuntu1804-django20:~# systemctl status gunicorn.socket Failed to dump process list, ignoring: No such file or directory ? 
gunicorn.socket - gunicorn socket Loaded: loaded (/etc/systemd/system/gunicorn.socket; enabled; vendor preset: enabled) Active: active (listening) since Wed 2018-09-05 20:34:37 UTC; 13min ago Listen: /run/gunicorn.sock (Stream) CGroup: /system.slice/gunicorn.socket sep 05 20:34:37 BaseVPS-ubuntu1804-django20 systemd[1]: Listening on gunicorn socket. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281099,281110#msg-281110 From francis at daoine.org Wed Sep 5 21:37:24 2018 From: francis at daoine.org (Francis Daly) Date: Wed, 5 Sep 2018 22:37:24 +0100 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: <84d4023fa9a7b6c8b2f2f55bd17aa690.NginxMailingListEnglish@forum.nginx.org> References: <20180905155122.GN3537@daoine.org> <84d4023fa9a7b6c8b2f2f55bd17aa690.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180905213724.GO3537@daoine.org> On Wed, Sep 05, 2018 at 04:49:19PM -0400, Frank_Mascarell wrote: Hi there, > It has also tried the proxy_passs an http, with the same error. Can you run a command like "curl -v https://15.15.15.15/test", and show the output that you get? And if it is curl reporting that it does not like the certificate, try again with curl -k -v https://15.15.15.15/test And if that shows that things are working, try the same with whatever url you were using originally, until the problem shows. > This is like finding a needle in a pocket: stressful and disappointing. I suspect that the best way through it is to test one thing at a time, and change one thing between tests. https://www.digitalocean.com/community/tutorials/how-to-set-up-django-with-postgres-nginx-and-gunicorn-on-ubuntu-18-04 does have a section called "Nginx Is Displaying a 502 Bad Gateway Error Instead of the Django Application", which sounds like what you are reporting. Its first question seems to be "what does the nginx log say?". 
Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Wed Sep 5 22:51:38 2018 From: nginx-forum at forum.nginx.org (Frank_Mascarell) Date: Wed, 05 Sep 2018 18:51:38 -0400 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: <20180905213724.GO3537@daoine.org> References: <20180905213724.GO3537@daoine.org> Message-ID: <35379857fc343edb8ad69c56a5f5c055.NginxMailingListEnglish@forum.nginx.org> root at BaseVPS-ubuntu1804-django20:~# curl -v https://15.15.15.15/test * Trying 15.15.15.15... * TCP_NODELAY set * Connected to 15.15.15.15 (15.15.15.15) port 443 (#0) * ALPN, offering h2 * ALPN, offering http/1.1 * successfully set certificate verify locations: * CAfile: /etc/ssl/certs/ca-certificates.crt CApath: /etc/ssl/certs * TLSv1.2 (OUT), TLS handshake, Client hello (1): * TLSv1.2 (IN), TLS handshake, Server hello (2): * TLSv1.2 (IN), TLS handshake, Certificate (11): * TLSv1.2 (OUT), TLS alert, Server hello (2): * SSL certificate problem: self signed certificate * Closing connection 0 curl: (60) SSL certificate problem: self signed certificate More details here: https://curl.haxx.se/docs/sslcerts.html curl failed to verify the legitimacy of the server and therefore could not establish a secure connection to it. To learn more about this situation and how to fix it, please visit the web page mentioned above. 
root at BaseVPS-ubuntu1804-django20:~# tail -F /var/log/nginx/error.log 2018/09/05 13:41:02 [crit] 3975#3975: *38 SSL_do_handshake() failed (SSL: error:1417D102:SSL routines:tls_process_client_hello:unsupported protocol) while SSL handshaking, client: 221.212.99.106, server: 0.0.0.0:443 2018/09/05 13:41:03 [crit] 3975#3975: *39 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 221.212.99.106, server: 0.0.0.0:443 2018/09/05 16:19:31 [crit] 3975#3975: *48 SSL_do_handshake() failed (SSL: error:1417D18C:SSL routines:tls_process_client_hello:version too low) while SSL handshaking, client: 198.108.66.16, server: 0.0.0.0:443 2018/09/05 18:20:12 [error] 3975#3975: *52 connect() to unix:/run/gunicorn.sock failed (111: Connection refused) while connecting to upstream, client: 139.162.116.133, server: 15.15.15.15, request: "GET / HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/", host: "15.15.15.15" 2018/09/05 19:45:39 [crit] 3975#3975: *56 SSL_do_handshake() failed (SSL: error:1417D102:SSL routines:tls_process_client_hello:unsupported protocol) while SSL handshaking, client: 80.82.70.118, server: 0.0.0.0:443 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281099,281112#msg-281112 From peter.volkov at gmail.com Thu Sep 6 04:06:31 2018 From: peter.volkov at gmail.com (Peter Volkov) Date: Thu, 6 Sep 2018 07:06:31 +0300 Subject: nginx sends 301 redirect for alias in location In-Reply-To: <20180905122537.GR56558@mdounin.ru> References: <20180905122537.GR56558@mdounin.ru> Message-ID: On Wed, Sep 5, 2018 at 3:25 PM, Maxim Dounin wrote: > On Wed, Sep 05, 2018 at 09:58:54AM +0300, Peter Volkov wrote: > > > Hi. Could you, please, explain. 
Why nginx sends 301 redirect for the > > following vhost: > > > > server { > > listen 80; > > server_name test.domain.tv ; > > > > access_log off; > > > > location = /test/README.txt { > > alias /var/www/; > > } > > } > > > > Here is redirect: > > > > $ http http://test.domain.tv/test/README.txt > > HTTP/1.1 301 Moved Permanently > > Connection: keep-alive > > Content-Length: 178 > > Content-Type: text/html > > Date: Wed, 05 Sep 2018 06:55:27 GMT > > Keep-Alive: timeout=20 > > Location: http://test.domain.tv/test/README.txt/ > > Server: nginx > > > > > > 301 Moved Permanently > > > >

> > <html>
> > <head><title>301 Moved Permanently</title></head>
> > <body bgcolor="white">
> > <center><h1>301 Moved Permanently</h1></center>
> > <hr><center>nginx</center>
> > </body>
> > </html>
> > > > > > You've aliased "/test/README.txt" into a directory "/var/www/". > Since the URI "/test/README.txt" does not have a trailing slash, > nginx returns a redirect with a trailing slash added, much like it > does when requesting a directory without a trailing shash. > Thank you, Maxim. That was the problem! -- Peter. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ivan at wikiloc.com Thu Sep 6 06:35:51 2018 From: ivan at wikiloc.com (Ivan Bianchi) Date: Thu, 6 Sep 2018 08:35:51 +0200 Subject: [IE] Re: Rewrite with number after hyphen In-Reply-To: <995C5C9AD54A3C419AF1C20A8B6AB9A434150FD9@STLEISEXCMBX3.eis.equifax.com> References: <20180903123630.GH3537@daoine.org> <995C5C9AD54A3C419AF1C20A8B6AB9A434150FD9@STLEISEXCMBX3.eis.equifax.com> Message-ID: Hi Jason, that's a very nice tip! I finally get it enabling *rewrite_log* and *error_log* at *notice*. but this definitely seems a great alternative. Many thanks, On Wed, Sep 5, 2018 at 8:29 PM Jason Whittington < Jason.Whittington at equifax.com> wrote: > FWIW when I debug this sort of thing I like to emit a response header > identifying which rule is routing the request, like this: > > > > location /a/ { > > *add_header X-nginx-debug /a/* > > proxy_pass http://whatever/; > > } > > > > That way you can use F12 tools or some other inspection on the result and > see exactly what is triggering. This has saved my bacon more than once J. > > > > Jason > > > > > > > > location /foo { > > > rewrite /foo/(.*) /web/foo.do?a=$1 last; > > > } > > *From:* nginx [mailto:nginx-bounces at nginx.org] *On Behalf Of *Ivan Bianchi > *Sent:* Wednesday, September 5, 2018 9:25 AM > *To:* nginx at nginx.org > *Subject:* [IE] Re: Rewrite with number after hyphen > > > > Hi Francis, > > > > many thanks for your response and guidelines. Indeed you were right that > there was another location capturing the request. 
> > > > Best regards, > > > > On Mon, Sep 3, 2018 at 2:36 PM Francis Daly wrote: > > On Mon, Sep 03, 2018 at 08:13:11AM +0200, Ivan Bianchi wrote: > > Hi there, > > > > location /foo { > > > rewrite /foo/(.*) /web/foo.do?a=$1 last; > > > } > > This seems to work as expected for me, using nginx/1.14.0. > > > KO: > > > > > https://www.test.com/foo/asdf-12 > > Why do you think it does not work? What is the input/output/expected > output? > > For example, if you add the new location > > location = /web/foo.do { > return 200 "$uri$is_args$args\n"; > } > > and repeat the tests, do you see any difference in output? > > > Why if I put a number after a hyphen the regex stops working? > > My guesses are: > > * you have another location{} that you have configured to match those > requests, so your shown location{} is not involved > > or > > * your /web/foo.do location-handler handles those requests differently. > > Good luck with it, > > f > -- > Francis Daly francis at daoine.org > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > > -- > > Ivan Bianchi > > Wikiloc > This message contains proprietary information from Equifax which may be > confidential. If you are not an intended recipient, please refrain from any > disclosure, copying, distribution or use of this information and note that > such actions are prohibited. If you have received this transmission in > error, please notify by e-mail postmaster at equifax.com. Equifax? is a > registered trademark of Equifax Inc. All rights reserved. > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Ivan Bianchi Wikiloc -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From francis at daoine.org Thu Sep 6 07:00:14 2018 From: francis at daoine.org (Francis Daly) Date: Thu, 6 Sep 2018 08:00:14 +0100 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: <35379857fc343edb8ad69c56a5f5c055.NginxMailingListEnglish@forum.nginx.org> References: <20180905213724.GO3537@daoine.org> <35379857fc343edb8ad69c56a5f5c055.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180906070014.GP3537@daoine.org> On Wed, Sep 05, 2018 at 06:51:38PM -0400, Frank_Mascarell wrote: Hi there, > root at BaseVPS-ubuntu1804-django20:~# curl -v https://15.15.15.15/test > * SSL certificate problem: self signed certificate > * Closing connection 0 > curl: (60) SSL certificate problem: self signed certificate Ok, that's useful. It is not the 502 error, but it is something. This is "the client (curl) does not like the fact that the server is presenting a self-signed certificate". One way to tell the client to accept the certificate is to add " -k" to the command line. However, the older nginx log entry... > 2018/09/05 18:20:12 [error] 3975#3975: *52 connect() to > unix:/run/gunicorn.sock failed (111: Connection refused) while connecting to suggests that at that time, nginx was unable to connect to gunicorn. That is usually a configuration (access control) problem outside of nginx's control; if that problem persists, you may want to check the gunicorn config or logs to see what it thinks is happening. 
> upstream, client: 139.162.116.133, server: 15.15.15.15, request: "GET / > HTTP/1.1", upstream: "http://unix:/run/gunicorn.sock:/", host: > "15.15.15.15" Good luck with it, f -- Francis Daly francis at daoine.org From nginx-forum at forum.nginx.org Thu Sep 6 21:28:01 2018 From: nginx-forum at forum.nginx.org (jeffin.joy) Date: Thu, 06 Sep 2018 17:28:01 -0400 Subject: Nginx openssl Async mode support Message-ID: <886d109cb7dc0da5eb11c7612316da59.NginxMailingListEnglish@forum.nginx.org> Hi Team, I am new to Nginx and I am developing a new OpenSSL Dynamic engine which supports OpenSSL async mode. I have verified the async mode function via speed command provided by OpenSSL. Now I need to integrate the OpenSSL with Nginx. In all the reference it showing that It requires Intel QAT engine support. In my case, I am using my own engine. What all the configuration requires to test this.? Is there any dependency for QAT framework. ? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281121,281121#msg-281121 From nginx-forum at forum.nginx.org Fri Sep 7 10:44:58 2018 From: nginx-forum at forum.nginx.org (Fumitaka Yanase) Date: Fri, 07 Sep 2018 06:44:58 -0400 Subject: Will nginx return 502 without any log in certain case? Message-ID: <39a6a4725eca8fc217ef231ff26de883.NginxMailingListEnglish@forum.nginx.org> I have my nginx running on EC2(Amazon Linux) behind ALB (Load Balancer of AWS). Usually it works just fine but very rarely ALB receive 502 bad gateway form the EC2 instance. I checked both access.log and error.log of nginx but there is no log for 502 bad gateway. We asked AWS about the reason of 502, but they told us it should be problem of web server running on EC2 instance. After some googling, I found an article that when nginx sends TCP RST or TCP FIN, it will return 502 without any log output. So I'm suggesting I am facing this case, but it there any way to figure out whether it is so or not? 
And if so, is there any way to get some information about why TCP RST or TCP FIN is caused? Thanks in advance. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281122,281122#msg-281122 From nginx-forum at forum.nginx.org Fri Sep 7 10:54:51 2018 From: nginx-forum at forum.nginx.org (littlevk) Date: Fri, 07 Sep 2018 06:54:51 -0400 Subject: Django proxy_pass redirect issues Message-ID: Hello, I faced an issue with nginx proxy_pass to a Django app. I configured nginx server to this django: ####### server { listen 443 ssl; server_name mydjango.com; ssl on; ssl_certificate /opt/ssl/nginx/mydjango.crt; ssl_certificate_key /opt/ssl/nginx/mydjango.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers HIGH:!aNULL:!MD5; client_max_body_size 120M; #charset koi8-r; access_log /var/log/nginx/backend.mydjango.app.log main; error_log /var/log/nginx/backend.mydjango.app.error.log error; location / { proxy_set_header X-Forwarded-Proto $scheme; proxy_set_header X-Real-IP $remote_addr; proxy_set_header Host $host; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://backend.mydjango.app:3080/; proxy_redirect off; } } ####### But connecting to NginX reverse proxy (https://mydjango.com) django starts redirecting and finish with a bad request changing in my browser to: http://127.0.0.1:5002 It seems I forgot some proxy header but I tried some combinations and I dont find the good one. 
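One hedged guess at the nginx layer (the redirect to 127.0.0.1:5002 may equally come from the Django settings, e.g. the site URL the app builds redirects from): instead of turning proxy_redirect off, map the backend's absolute Location headers back onto the public host. The 127.0.0.1:5002 address below is copied from the reported symptom, not known to be correct:

```nginx
location / {
    proxy_set_header Host $host;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass http://backend.mydjango.app:3080/;
    # instead of "proxy_redirect off;": rewrite Location headers the
    # app builds against its own address back onto the public host
    proxy_redirect http://127.0.0.1:5002/ https://$host/;
}
```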
Thanks in advance, Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281123,281123#msg-281123 From nginx-forum at forum.nginx.org Sun Sep 9 03:25:32 2018 From: nginx-forum at forum.nginx.org (Frank_Mascarell) Date: Sat, 08 Sep 2018 23:25:32 -0400 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: <1eeee126f25389d8beb15e92e7ffc3e6.NginxMailingListEnglish@forum.nginx.org> References: <1eeee126f25389d8beb15e92e7ffc3e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: Greetings, I think my days of suffering are over. After reading hundreds of logs, I found the problem. An update of Whitenoise to 4.0 where you must change the shape of the configuration, caused that with my old configuration the gunicorn service will throw errors. The rest is all right. http://whitenoise.evans.io/en/stable/django.html#django-middleware Francis Daly thanks for the help. Good day. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281099,281128#msg-281128 From mdounin at mdounin.ru Mon Sep 10 11:56:59 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Sep 2018 14:56:59 +0300 Subject: Will nginx return 502 without any log in certain case? In-Reply-To: <39a6a4725eca8fc217ef231ff26de883.NginxMailingListEnglish@forum.nginx.org> References: <39a6a4725eca8fc217ef231ff26de883.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180910115659.GB56558@mdounin.ru> Hello! On Fri, Sep 07, 2018 at 06:44:58AM -0400, Fumitaka Yanase wrote: > I have my nginx running on EC2(Amazon Linux) behind ALB (Load Balancer of > AWS). > Usually it works just fine but very rarely ALB receive 502 bad gateway form > the EC2 instance. > > I checked both access.log and error.log of nginx but there is no log for 502 > bad gateway. > We asked AWS about the reason of 502, but they told us it should be problem > of web server running on EC2 instance. 
> > After some googling, I found an article that when nginx sends TCP RST or TCP > FIN, it will return 502 without any log output. > So I'm suggesting I am facing this case, but it there any way to figure out > whether it is so or not? > And if so, is there any way to get some information about why TCP RST or TCP > FIN is caused? If a HTTP response with status code 502 is returned by nginx, it a) logs a message explaining the problem to the error log at the "error" level and b) logs the request and the response status to the access log. If, however, a 502 error is generated by AWS load balancer, the exact reason for the error is only known on the AWS load balancer side. Depending on what exactly happened, there may be something in nginx logs (not necessary an error) or nothing at all (for example, if the load balancer wasn't able to connect to nginx). If you want to further debug this, first of all you may want to find out where the errors you are seeing are generated. -- Maxim Dounin http://mdounin.ru/ From francis at daoine.org Mon Sep 10 11:58:49 2018 From: francis at daoine.org (Francis Daly) Date: Mon, 10 Sep 2018 12:58:49 +0100 Subject: Problem when reconfiguring Nginx for SSL with self-signed certificate In-Reply-To: References: <1eeee126f25389d8beb15e92e7ffc3e6.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180910115849.GS3537@daoine.org> On Sat, Sep 08, 2018 at 11:25:32PM -0400, Frank_Mascarell wrote: Hi there, > I found the problem. An update of Whitenoise to 4.0 where you must > change the shape of the configuration, caused that with my old configuration > the gunicorn service will throw errors. The rest is all right. > > http://whitenoise.evans.io/en/stable/django.html#django-middleware Good that you found and fixed the problem. And thanks for sharing the answer with the mailing list -- the next person with the same problem will be very happy to take advantage of your response. 
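Returning to the 502 question above: a hedged way to tell nginx-generated 502s apart from load-balancer ones is to log the upstream address and status explicitly. The log_format name here is made up for illustration; the variables are standard nginx variables:

```nginx
log_format upstream_debug '$remote_addr [$time_local] "$request" '
                          '$status upstream=$upstream_addr '
                          'upstream_status=$upstream_status '
                          'request_time=$request_time';

access_log /var/log/nginx/access.log upstream_debug;
```

A 502 generated by nginx will appear here with an upstream_status (or a "-" if the connection failed); a 502 the load balancer produced on its own never reaches this log.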
Cheers, f -- Francis Daly francis at daoine.org From postmaster at palvelin.fi Mon Sep 10 12:06:35 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Mon, 10 Sep 2018 15:06:35 +0300 Subject: Config problems with Amplify Message-ID: <47257196-AB5A-4581-B429-4492DFA4B807@palvelin.fi> Hi, my nginx config hierarchy is: /etc/nginx/nginx.conf (commented out except for a single include directive of /etc/nginx/conf.d/*.conf) /etc/nginx/conf.d/default.conf (server-wider config directives and an include of /etc/nginx/sites-enabled/*.conf) /etc/nginx/sites-enabled/domainX.conf (multiple vhost conf files each named accordingly) With default configuration, the amplify service doesn?t receive any data and I have two Nginx entries in my Amplify Graphs page. One with /etc/nginx/nginx.conf and one with /etc/nginx/conf.d/default.conf If I add: configfile = /etc/nginx/conf.d/default.conf to my /etc/amplify-agent/agent.conf (and restart it), I get data to the /etc/nginx/conf.d/default.conf but not the /etc/nginx/nginx.conf entry. How should I setup Amplify to get a single entry (or should each vhost actually have it?s own entry)? -- Palvelin.fi Hostmaster postmaster at palvelin.fi From maxim at nginx.com Mon Sep 10 12:45:02 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 10 Sep 2018 15:45:02 +0300 Subject: Config problems with Amplify In-Reply-To: <47257196-AB5A-4581-B429-4492DFA4B807@palvelin.fi> References: <47257196-AB5A-4581-B429-4492DFA4B807@palvelin.fi> Message-ID: <7a9f5c5b-ace1-3103-4a02-5aa7a6c452ca@nginx.com> Hi Palvelin, it makes sense to open a support case -- click on a chat icon in the bottom-right corner. 
On 10/09/2018 15:06, Palvelin Postmaster via nginx wrote: > Hi, > > my nginx config hierarchy is: > > /etc/nginx/nginx.conf (commented out except for a single include directive of /etc/nginx/conf.d/*.conf) > /etc/nginx/conf.d/default.conf (server-wider config directives and an include of /etc/nginx/sites-enabled/*.conf) > /etc/nginx/sites-enabled/domainX.conf (multiple vhost conf files each named accordingly) > > With default configuration, the amplify service doesn?t receive any data and I have two Nginx entries in my Amplify Graphs page. One with /etc/nginx/nginx.conf and one with /etc/nginx/conf.d/default.conf > > If I add: > > configfile = /etc/nginx/conf.d/default.conf > > to my /etc/amplify-agent/agent.conf (and restart it), I get data to the /etc/nginx/conf.d/default.conf but not the /etc/nginx/nginx.conf entry. > > How should I setup Amplify to get a single entry (or should each vhost actually have it?s own entry)? > > -- > Palvelin.fi Hostmaster > postmaster at palvelin.fi > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- Maxim Konovalov From postmaster at palvelin.fi Mon Sep 10 13:19:55 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Mon, 10 Sep 2018 16:19:55 +0300 Subject: Config problems with Amplify In-Reply-To: <7a9f5c5b-ace1-3103-4a02-5aa7a6c452ca@nginx.com> References: <47257196-AB5A-4581-B429-4492DFA4B807@palvelin.fi> <7a9f5c5b-ace1-3103-4a02-5aa7a6c452ca@nginx.com> Message-ID: <578FFB3B-D438-4FCD-B196-D13735DABD20@palvelin.fi> What chat icon? Where? > On 10 Sep 2018, at 15:45, Maxim Konovalov wrote: > > Hi Palvelin, > > it makes sense to open a support case -- click on a chat icon in the > bottom-right corner. 
> > On 10/09/2018 15:06, Palvelin Postmaster via nginx wrote: >> Hi, >> >> my nginx config hierarchy is: >> >> /etc/nginx/nginx.conf (commented out except for a single include directive of /etc/nginx/conf.d/*.conf) >> /etc/nginx/conf.d/default.conf (server-wider config directives and an include of /etc/nginx/sites-enabled/*.conf) >> /etc/nginx/sites-enabled/domainX.conf (multiple vhost conf files each named accordingly) >> >> With default configuration, the amplify service doesn?t receive any data and I have two Nginx entries in my Amplify Graphs page. One with /etc/nginx/nginx.conf and one with /etc/nginx/conf.d/default.conf >> >> If I add: >> >> configfile = /etc/nginx/conf.d/default.conf >> >> to my /etc/amplify-agent/agent.conf (and restart it), I get data to the /etc/nginx/conf.d/default.conf but not the /etc/nginx/nginx.conf entry. >> >> How should I setup Amplify to get a single entry (or should each vhost actually have it?s own entry)? >> >> -- >> Palvelin.fi Hostmaster >> postmaster at palvelin.fi >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx >> > > > -- > Maxim Konovalov -- Palvelin.fi Hostmaster postmaster at palvelin.fi From mdounin at mdounin.ru Mon Sep 10 13:21:11 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 10 Sep 2018 16:21:11 +0300 Subject: Nginx openssl Async mode support In-Reply-To: <886d109cb7dc0da5eb11c7612316da59.NginxMailingListEnglish@forum.nginx.org> References: <886d109cb7dc0da5eb11c7612316da59.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180910132110.GD56558@mdounin.ru> Hello! On Thu, Sep 06, 2018 at 05:28:01PM -0400, jeffin.joy wrote: > I am new to Nginx and I am developing a new OpenSSL Dynamic engine > which supports OpenSSL async mode. I have verified the async mode function > via speed command provided by OpenSSL. > > Now I need to integrate the OpenSSL with Nginx. 
All the references > indicate that it requires Intel QAT engine support. In my case, I am using > my own engine. > > What configuration is required to test this? > Is there any dependency on the QAT framework? There is no support for OpenSSL async operations in nginx. There were some patches developed by Intel as part of the QAT work, though these were never submitted for inclusion into nginx (and probably not ready for it). To use nginx with OpenSSL async operations you'll need these patches or an equivalent. -- Maxim Dounin http://mdounin.ru/ From maxim at nginx.com Mon Sep 10 13:27:47 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 10 Sep 2018 16:27:47 +0300 Subject: Config problems with Amplify In-Reply-To: <578FFB3B-D438-4FCD-B196-D13735DABD20@palvelin.fi> References: <47257196-AB5A-4581-B429-4492DFA4B807@palvelin.fi> <7a9f5c5b-ace1-3103-4a02-5aa7a6c452ca@nginx.com> <578FFB3B-D438-4FCD-B196-D13735DABD20@palvelin.fi> Message-ID: <695315c1-e6a9-c94e-f567-84d6d566aca6@nginx.com> On 10/09/2018 16:19, Palvelin Postmaster via nginx wrote: > What chat icon? Where? Please see the screenshot attached. -- Maxim Konovalov -------------- next part -------------- A non-text attachment was scrubbed... Name: amplify-analyzer.png Type: image/png Size: 372591 bytes Desc: not available URL: From postmaster at palvelin.fi Mon Sep 10 13:29:21 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Mon, 10 Sep 2018 16:29:21 +0300 Subject: Config problems with Amplify In-Reply-To: <695315c1-e6a9-c94e-f567-84d6d566aca6@nginx.com> References: <47257196-AB5A-4581-B429-4492DFA4B807@palvelin.fi> <7a9f5c5b-ace1-3103-4a02-5aa7a6c452ca@nginx.com> <578FFB3B-D438-4FCD-B196-D13735DABD20@palvelin.fi> <695315c1-e6a9-c94e-f567-84d6d566aca6@nginx.com> Message-ID: > On 10 Sep 2018, at 16:27, Maxim Konovalov wrote: > > On 10/09/2018 16:19, Palvelin Postmaster via nginx wrote: >> What chat icon? Where? > > Please see the screenshot attached.
Facepalm myself :D -- Palvelin.fi Hostmaster postmaster at palvelin.fi From roger at netskrt.io Mon Sep 10 16:18:32 2018 From: roger at netskrt.io (Roger Fischer) Date: Mon, 10 Sep 2018 09:18:32 -0700 Subject: Ignore Certificate Errors In-Reply-To: <20180830181344.GD56558@mdounin.ru> References: <20180830181344.GD56558@mdounin.ru> Message-ID: <993398DD-D9F6-4F65-AB1B-8AF170D329B1@netskrt.io> Hello, I eventually found out that the problem was a missing "proxy_ssl_server_name on;". Without the Server Name Indication (SNI) in the TLS handshake, the server returns a certificate that causes this problem. I am also wondering if these days the default should be on. It seems that SNI is in widespread use. Roger > On Aug 30, 2018, at 11:13 AM, Maxim Dounin wrote: > > Hello! > > On Thu, Aug 30, 2018 at 09:09:44AM -0700, Roger Fischer wrote: > >> Hello, >> >> is there a way to make NGINX more forgiving on TLS certificate errors? Or would that have to be done in OpenSSL instead? >> >> When I use openssl s_client, I get the following errors from the upstream server: >> >> 140226185430680:error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01:rsa_pk1.c:103: >> 140226185430680:error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed:rsa_eay.c:705: >> 140226185430680:error:1408D07B:SSL routines:ssl3_get_key_exchange:bad signature:s3_clnt.c:2010: >> >> This causes NGINX (reverse proxy) to return 502 Bad Gateway to the browser. >> >> The NGINX error log shows: >> >> 2018/08/29 09:09:59 [crit] 11633#11633: *28 SSL_do_handshake() failed (SSL: error:0407006A:rsa routines:RSA_padding_check_PKCS1_type_1:block type is not 01 error:04067072:rsa routines:RSA_EAY_PUBLIC_DECRYPT:padding check failed error:1408D07B:SSL routines:ssl3_get_key_exchange:bad signature) while SSL handshaking to upstream, client: 192.168.1.66, server: s5.example.com, request: "GET /xyz >> >> I have added "proxy_ssl_verify off;", but that did not make any difference.
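The fix Roger describes at the top of this message can be sketched as config; the upstream hostname below is a placeholder, not taken from the thread:

```nginx
location / {
    proxy_pass https://upstream.example.com;
    # Send the server name (SNI) in the upstream TLS handshake,
    # so the upstream can pick the matching certificate:
    proxy_ssl_server_name on;
    proxy_ssl_name upstream.example.com;  # defaults to the proxy_pass host
}
```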
>> >> Surprisingly, the browser (directly to the upstream server) does not complain about the TLS error. >> >> Is there anything else I can do either in NGINX or openssl to suppress the 502 Bad Gateway? >> >> Thanks... >> >> Roger >> >> PS: I don't have control over the upstream server, so I can't fix the root cause (faulty certificate). > > As per the error message, the problem seems to be not with the > certificate, but with the key exchange during the SSL handshake. > For some reason signature verification after the key exchange > fails due to wrong padding. > > Most likely the problem is specific to some ciphers, so forcing a > different cipher with proxy_ssl_ciphers could help, see > http://nginx.org/r/proxy_ssl_ciphers. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From roger at netskrt.io Mon Sep 10 18:06:00 2018 From: roger at netskrt.io (Roger Fischer) Date: Mon, 10 Sep 2018 11:06:00 -0700 Subject: Add Header to cached response? Message-ID: <61E0CD98-DCD3-4E26-982D-253A43588455@netskrt.io> Hello, is there a way to add a header to the cached response? I have used ngx_http_headers_module's add_header directive, which adds the header to the response at the time the response is generated. What I would like to do is to add a response header at the time when the upstream request is made (reflecting the state of the caching request, not the used-the-cache request). Thanks... Roger From mdounin at mdounin.ru Tue Sep 11 01:04:46 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 11 Sep 2018 04:04:46 +0300 Subject: Add Header to cached response? In-Reply-To: <61E0CD98-DCD3-4E26-982D-253A43588455@netskrt.io> References: <61E0CD98-DCD3-4E26-982D-253A43588455@netskrt.io> Message-ID: <20180911010446.GI56558@mdounin.ru> Hello!
On Mon, Sep 10, 2018 at 11:06:00AM -0700, Roger Fischer wrote: > is there a way to add a header to the cached response? > > I have used ngx_http_headers_module's add_header directive, > which adds the header to the response at the time the response > is generated. > > What I would like to do is to add a response header at the time > when the upstream request is made (reflecting the state of the > caching request, not the used-the-cache request). The cache saves the response as received from the upstream server, without any modifications. If you want to modify it somehow before the response is saved into the cache, you can do this either on the upstream server itself, or by using additional proxying. -- Maxim Dounin http://mdounin.ru/ From quintinpar at gmail.com Tue Sep 11 23:45:42 2018 From: quintinpar at gmail.com (Quintin Par) Date: Tue, 11 Sep 2018 16:45:42 -0700 Subject: Avoiding Nginx restart when rsyncing cache across machines Message-ID: I run a mini CDN for a static site by having Nginx cache machines (in different locations) in front of the origin and load balanced by Cloudflare. Periodically I run rsync pull to update the cache on each of these machines. Works well, except that I realized I need to restart Nginx and reload isn't updating the cache in memory. Really want to avoid the restart. Is this possible? Or maybe I am doing something wrong here. - Quintin -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 12 14:45:58 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 12 Sep 2018 17:45:58 +0300 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: Message-ID: <20180912144558.GR56558@mdounin.ru> Hello! On Tue, Sep 11, 2018 at 04:45:42PM -0700, Quintin Par wrote: > I run a mini CDN for a static site by having Nginx cache machines (in > different locations) in front of the origin and load balanced by Cloudflare.
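Maxim's "additional proxying" answer to the add_header question above could be sketched as two chained server blocks; the ports, zone name, and header name are invented for illustration, and the "edge" zone would be declared by a proxy_cache_path directive elsewhere:

```nginx
# Inner proxy: talks to the upstream and stamps the header, so the
# header becomes part of the response the outer proxy receives.
server {
    listen 127.0.0.1:8080;
    location / {
        proxy_pass http://backend.example.com:9000;
        add_header X-Fetched-At $time_iso8601 always;
    }
}

# Outer proxy: caches whatever the inner proxy produced, header included.
server {
    listen 80;
    location / {
        proxy_cache edge;
        proxy_pass http://127.0.0.1:8080;
    }
}
```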
> > Periodically I run rsync pull to update the cache on each of these > machines. Works well, except that I realized I need to restart Nginx and > reload isn't updating the cache in memory. > > Really want to avoid the restart. Is this possible? Or maybe I am doing > something wrong here. You are not expected to modify cache contents yourself. Doing so will likely cause various troubles - including not using the new files placed into the cache after it was loaded from the disk, not maintaining configured cache max_size and so on. If you want to control cache contents yourself by syncing data across machines, you may have better luck by using proxy_store and normal files instead. -- Maxim Dounin http://mdounin.ru/ From quintinpar at gmail.com Wed Sep 12 19:41:15 2018 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 12 Sep 2018 12:41:15 -0700 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: <20180912144558.GR56558@mdounin.ru> References: <20180912144558.GR56558@mdounin.ru> Message-ID: Hi Maxim, Thank you for this. Opened my eyes. Not to sound demanding, but do you have any examples (code) of proxy_store being used as a CDN? What's most important to me is the initial cache warming. I should be able to start a new machine with 30 GB of cache vs. a cold start. Thanks once again. - Quintin On Wed, Sep 12, 2018 at 7:46 AM Maxim Dounin wrote: > Hello! > > On Tue, Sep 11, 2018 at 04:45:42PM -0700, Quintin Par wrote: > > > I run a mini CDN for a static site by having Nginx cache machines (in > > different locations) in front of the origin and load balanced by > Cloudflare. > > > > Periodically I run rsync pull to update the cache on each of these > > machines. Works well, except that I realized I need to restart Nginx and > > reload isn't updating the cache in memory. > > > > Really want to avoid the restart. Is this possible? Or maybe I am doing > > something wrong here. > > You are not expected to modify cache contents yourself.
Doing so > will likely cause various troubles - including not using the new > files placed into the cache after it was loaded from the disk, not > maintaining configured cache max_size and so on. > > If you want to control cache contents yourself by syncing data > across machines, you may have better luck by using proxy_store > and normal files instead. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Wed Sep 12 20:05:57 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Wed, 12 Sep 2018 20:05:57 +0000 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: <20180912144558.GR56558@mdounin.ru> Message-ID: <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> Can I ask, why do you need to start with a warm cache directly? Sure it will lower the requests to the origin, but you could implement a secondary caching layer if you wanted to (using nginx), so you'd have your primary cache in let's say 10 locations, let's say spread across 3 continents (US, EU, Asia), then you could have a second layer that consist of a smaller amount of locations (1 instance in each continent) - this way you'll warm up faster when you add new servers, and it won't really affect your origin server. It's a lot more clean also because you're able to use proxy_cache which is really what (in my opinion) you should use when you're building caching proxies. Generally I'd just slowly warm up new servers prior to putting them into production, get a list of top X files accessed, and loop over them to pull them in as a normal http request. There's plenty of decent solutions (some more complex than others), but there should really never be a reason to having to sync your cache across machines - even for new servers.
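Maxim's proxy_store suggestion from earlier in this thread might look like the following sketch; paths and the origin host are placeholders:

```nginx
location / {
    root /data/mirror;                 # serve the stored copy if present
    error_page 404 = @fetch;           # otherwise fetch and store it
}

location @fetch {
    internal;
    proxy_pass http://origin.example.com;
    proxy_store /data/mirror$uri;      # plain files, safe to rsync between edges
    proxy_store_access user:rw group:rw all:r;
}
```

Because proxy_store leaves ordinary files on disk rather than entries in a cache index, rsyncing them between machines does not conflict with a loaded cache the way modifying proxy_cache contents does.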
From quintinpar at gmail.com Wed Sep 12 23:23:44 2018 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 12 Sep 2018 16:23:44 -0700 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> Message-ID: Hi Lucas, The cache is pretty big and I want to limit unnecessary requests if I can. Cloudflare is in front of my machines and I pay for load balancing, firewall, Argo among others. So there is a cost per request. Admittedly I have a not so complex cache architecture. i.e. all cache machines in front of the origin and it has worked so far. This is also because I am not that great a programmer/admin :-) My optimization is not primarily around hits to the origin, but rather bandwidth and number of requests. - Quintin On Wed, Sep 12, 2018 at 1:06 PM Lucas Rolff wrote: > Can I ask, why do you need to start with a warm cache directly? Sure it > will lower the requests to the origin, but you could implement a secondary > caching layer if you wanted to (using nginx), so you'd have your primary > cache in let's say 10 locations, let's say spread across 3 continents (US, > EU, Asia), then you could have a second layer that consist of a smaller > amount of locations (1 instance in each continent) - this way you'll warm > up faster when you add new servers, and it won't really affect your origin > server. > > It's a lot more clean also because you're able to use proxy_cache which is > really what (in my opinion) you should use when you're building caching > proxies. > > Generally I'd just slowly warm up new servers prior to putting them into > production, get a list of top X files accessed, and loop over them to pull > them in as a normal http request.
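The "get a list of top X files accessed, and loop over them" warm-up Lucas describes could be scripted along these lines; the function and hostname are illustrative, and the fetch callable is injected so the loop works with curl, urllib, or anything else:

```python
def warm_cache(paths, fetch):
    """Request each path once so the edge's proxy_cache gets populated.

    `fetch` is any callable that takes a URL and returns a status code
    (for example a thin wrapper around curl or urllib.request); passing
    it in keeps the loop testable and lets failures be recorded without
    aborting the whole warm-up.
    """
    results = {}
    for path in paths:
        # Hypothetical edge hostname - substitute your own.
        url = "https://cdn.example.com" + path
        try:
            results[path] = fetch(url)
        except Exception as exc:
            results[path] = exc  # note the failure, keep warming the rest
    return results
```

Combined with curl's --resolve trick mentioned later in the thread, the same loop can warm one specific edge without touching DNS.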
> > There's plenty of decent solutions (some more complex than others), but > there should really never be a reason to having to sync your cache across > machines - even for new servers. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From peter_booth at me.com Wed Sep 12 23:33:37 2018 From: peter_booth at me.com (Peter Booth) Date: Wed, 12 Sep 2018 19:33:37 -0400 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> Message-ID: <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> Quintin, Are most of your requests for dynamic or static content? Are the requests clustered such that there are a lot of requests for a few (between 5 and 200, say) URLs? If three different people make the same request, do they get personalized or identical content returned? How long are the cached resources valid for? I have seen layered caches deliver enormous benefit both in terms of performance and ensuring availability - which is usually synonymous with "protecting the backend." That protection was most useful when, for example, I was working on a site that would get mentioned in a TV show at a known time of the day every week. nginx proxy_cache was invaluable at helping the site stay up and responsive when hit with enormous spikes of requests. This is nuanced, subtle stuff though. Is your site something that you can disclose publicly? Peter > On 12 Sep 2018, at 7:23 PM, Quintin Par wrote: > > Hi Lucas, > > The cache is pretty big and I want to limit unnecessary requests if I can. Cloudflare is in front of my machines and I pay for load balancing, firewall, Argo among others. So there is a cost per request. > > Admittedly I have a not so complex cache architecture. i.e.
all cache machines in front of the origin and it has worked so far. This is also because I am not that great a programmer/admin :-) > > My optimization is not primarily around hits to the origin, but rather bandwidth and number of requests. > > > - Quintin > > > On Wed, Sep 12, 2018 at 1:06 PM Lucas Rolff > wrote: > Can I ask, why do you need to start with a warm cache directly? Sure it will lower the requests to the origin, but you could implement a secondary caching layer if you wanted to (using nginx), so you'd have your primary cache in let's say 10 locations, let's say spread across 3 continents (US, EU, Asia), then you could have a second layer that consist of a smaller amount of locations (1 instance in each continent) - this way you'll warm up faster when you add new servers, and it won't really affect your origin server. > > It's a lot more clean also because you're able to use proxy_cache which is really what (in my opinion) you should use when you're building caching proxies. > > Generally I'd just slowly warm up new servers prior to putting them into production, get a list of top X files accessed, and loop over them to pull them in as a normal http request. > > There's plenty of decent solutions (some more complex than others), but > there should really never be a reason to having to sync your cache across > machines - even for new servers. > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed...
URL: From quintinpar at gmail.com Thu Sep 13 06:14:25 2018 From: quintinpar at gmail.com (Quintin Par) Date: Wed, 12 Sep 2018 23:14:25 -0700 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> Message-ID: Hi Peter, Here are my stats for this week: https://imgur.com/a/JloZ37h . The Bypass is only because I was experimenting with some cache warmer scripts. This is primarily a static website. Here's my URL hit distribution: https://imgur.com/a/DRJUjPc If three people are making the same request, they get identical content. No personalization. The pages are cached for 200 days and inactive in proxy_cache_path set to 60 days. This is embarrassing but my CDNs are primarily $5 DigitalOcean machines across the web with this Nginx cache setup. The server response time averages at 0.29 seconds. Prior to doing my ghetto CDNing this was at 0.98 seconds. I am pretty proud that I have survived several Slashdot effects on the $5 machines serving cached content peaking at 2500 requests/second without any issues. Since this is working well, I don't want to do any layered caching, unless there is a compelling reason. - Quintin On Wed, Sep 12, 2018 at 4:32 PM Peter Booth via nginx wrote: > Quintin, > > Are most of your requests for dynamic or static content? > Are the requests clustered such that there are a lot of requests for a few > (between 5 and 200, say) URLs? > If three different people make the same request, do they get personalized or > identical content returned? > How long are the cached resources valid for? > > I have seen layered caches deliver enormous benefit both in terms of > performance and ensuring availability - which is usually > synonymous with "protecting the backend."
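A minimal proxy_cache "protection" setup of the kind discussed here might look like this sketch; the zone name and origin host are invented, while the 30 GB size, 200-day validity, and 60-day inactive window echo the numbers from Quintin's messages:

```nginx
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=edge:100m
                 max_size=30g inactive=60d use_temp_path=off;

server {
    listen 80;
    location / {
        proxy_cache edge;
        proxy_cache_valid 200 200d;
        # Keep serving stale content while a background refresh runs:
        proxy_cache_use_stale error timeout updating;
        proxy_cache_background_update on;
        proxy_pass http://origin.example.com;
    }
}
```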
That protection was most useful > when, for example, > I was working on a site that would get mentioned in a TV show at a known > time of the day every week. > nginx proxy_cache was invaluable at helping the site stay up and > responsive when hit with enormous spikes of requests. > > This is nuanced, subtle stuff though. > > Is your site something that you can disclose publicly? > > > Peter > > > > On 12 Sep 2018, at 7:23 PM, Quintin Par wrote: > > Hi Lucas, > > The cache is pretty big and I want to limit unnecessary requests if I can. > Cloudflare is in front of my machines and I pay for load balancing, > firewall, Argo among others. So there is a cost per request. > > Admittedly I have a not so complex cache architecture. i.e. all cache > machines in front of the origin and it has worked so far. This is also > because I am not that great a programmer/admin :-) > > My optimization is not primarily around hits to the origin, but rather > bandwidth and number of requests. > > > > - Quintin > > > On Wed, Sep 12, 2018 at 1:06 PM Lucas Rolff wrote: > >> Can I ask, why do you need to start with a warm cache directly? Sure it >> will lower the requests to the origin, but you could implement a secondary >> caching layer if you wanted to (using nginx), so you'd have your primary >> cache in let's say 10 locations, let's say spread across 3 continents (US, >> EU, Asia), then you could have a second layer that consist of a smaller >> amount of locations (1 instance in each continent) - this way you'll warm >> up faster when you add new servers, and it won't really affect your origin >> server. >> >> It's a lot more clean also because you're able to use proxy_cache which >> is really what (in my opinion) you should use when you're building caching >> proxies. >> >> Generally I'd just slowly warm up new servers prior to putting them into >> production, get a list of top X files accessed, and loop over them to pull >> them in as a normal http request.
>> >> There's plenty of decent solutions (some more complex than others), but >> there should really never be a reason to having to sync your cache across >> machines - even for new servers. >> >> _______________________________________________ >> nginx mailing list >> nginx at nginx.org >> http://mailman.nginx.org/mailman/listinfo/nginx > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Thu Sep 13 06:39:32 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 13 Sep 2018 06:39:32 +0000 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> Message-ID: <7CE38F03-2E2C-4B57-B122-BE915D6A6A7F@lucasrolff.com> > The cache is pretty big and I want to limit unnecessary requests if I can. 30 GB of cache and ~ 400k hits isn't a lot. > Cloudflare is in front of my machines and I pay for load balancing, firewall, Argo among others. So there is a cost per request. Doesn't matter if you pay for load balancing, firewall, Argo etc. -- implementing a secondary caching layer won't increase your costs on the CloudFlare side of things, because you're not communicating via CloudFlare but rather between machines -- you'd connect your X amount of locations to a smaller amount of locations, doing direct traffic between your DigitalOcean instances -- so no CloudFlare costs involved. Communication between your CDN servers and your origin server also (IMO) shouldn't go via any CloudFlare related products, so additional hits on the origin will be "free"
at the expense of a bit higher load -- however since it would be only a subset of locations that would request via the origin, and they then serve as the origin for your other servers -- you're effectively decreasing the origin traffic. You should easily be able to get a 97-99% offload of your origin (in my own setup, it's at 99.95% at this point), even without using a secondary layer, and performance can get improved by using stuff such as: http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_background_update http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale_updating Nginx is smart enough to do a sub-request in the background to check if the origin request updated (using modified or etags, e.g.) -- this way the origin communication would be little anyway. The only Load Balancer / Argo / Firewall costs you should have is the "CDN Server -> end user" traffic, and that won't increase or decrease by doing a normal proxy_cache setup or a setup with a secondary cache layer. You also won't increase costs by doing a warmup of your CDN servers -- you could do something as simple as: curl -o /dev/null -k -I --resolve cdn.yourdomain.com:443:127.0.0.1 https://cdn.yourdomain.com/img/logo.png You could do the same with python or another language if you're feeling more comfortable there. However, using a method like the above will result in your warmup being kept "local": since you're resolving cdn.yourdomain.com to localhost, requests that are not yet cached will use whatever is configured in your proxy_pass in the nginx config. > Admittedly I have a not so complex cache architecture. i.e. all cache machines in front of the origin and it has worked so far I would say it's complex if you have to sync your content --
many pull based CDNs simply do a normal proxy_cache + proxy_pass setup, not syncing content, and then using some of the nifty features (such as proxy_cache_background_update and proxy_cache_use_stale_updating) to decrease the origin traffic, or possibly implementing a secondary layer if they're still doing a lot of origin traffic (e.g. because of having a lot of "edge servers") -- if you're like 10 servers, I wouldn't even consider a secondary layer unless your origin is under heavy load and can't handle 10 possible clients (CDN Servers). Best Regards, Lucas Rolff From mdounin at mdounin.ru Thu Sep 13 10:47:53 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Sep 2018 13:47:53 +0300 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: <20180912144558.GR56558@mdounin.ru> Message-ID: <20180913104753.GT56558@mdounin.ru> Hello! On Wed, Sep 12, 2018 at 12:41:15PM -0700, Quintin Par wrote: > Not to sound demanding, but do you have any examples (code) of proxy_store > being used as a CDN? What's most important to me is the initial cache > warming. I should be able to start a new machine with 30 GB of cache vs. a > cold start. Simple examples of using proxy_store can be found in the documentation, see here: http://nginx.org/r/proxy_store It usually works well when you need to mirror static files which are never changed. Note though that if you need to implement cache expiration, or need to preserve custom response headers, this might be a challenge. -- Maxim Dounin http://mdounin.ru/ From ciapnz at gmail.com Thu Sep 13 18:26:31 2018 From: ciapnz at gmail.com (Danila Vershinin) Date: Thu, 13 Sep 2018 21:26:31 +0300 Subject: SSL stream to HTTP2 server Message-ID: Hello, I'm trying to basically use nginx as a replacement for hitch (for Varnish). Request goes like this: browser -> nginx (stream SSL) -> varnish (HTTP2 on) ->
backend HTTP stream { server { listen 443 ssl; ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; proxy_pass 127.0.0.1:6081; proxy_protocol on; } } With the above, I'm getting HTTP/1.1 in the browser. When I replace nginx with hitch, I get HTTP/2. From Hitch docs: "Hitch will transmit the selected protocol as part of its PROXY header" Does nginx have the same capability? In general, is nginx capable of being an SSL terminator for HTTP/2 backends using TCP streams? (while delivering HTTP/2 to supporting clients). I'm interested in using TCP streams since only those will allow use of the PROXY protocol to upstream. Best Regards, Danila -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mdounin at mdounin.ru Thu Sep 13 18:42:33 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 13 Sep 2018 21:42:33 +0300 Subject: SSL stream to HTTP2 server In-Reply-To: References: Message-ID: <20180913184233.GY56558@mdounin.ru> Hello! On Thu, Sep 13, 2018 at 09:26:31PM +0300, Danila Vershinin wrote: > Hello, > > I'm trying to basically use nginx as a replacement for hitch (for Varnish). > > Request goes like this: browser -> nginx (stream SSL) -> varnish (HTTP2 on) -> backend HTTP > > stream { > server { > listen 443 ssl; > ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; > ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; > proxy_pass 127.0.0.1:6081; > proxy_protocol on; > } > } > > With the above, I'm getting HTTP/1.1 in the browser. > When I replace nginx with hitch, I get HTTP/2. > > From Hitch docs: "Hitch will transmit the selected protocol as part of its PROXY header" Does nginx have the same capability?
> In general, is nginx capable of being an SSL terminator for HTTP/2 backends using TCP streams? (while delivering HTTP/2 to supporting clients). I'm interested in using TCP streams since only those will allow use of the PROXY protocol to upstream. Currently no, as the stream module in nginx cannot be configured to choose a particular ALPN protocol when terminating SSL. -- Maxim Dounin http://mdounin.ru/ From ciapnz at gmail.com Thu Sep 13 18:44:35 2018 From: ciapnz at gmail.com (Danila Vershinin) Date: Thu, 13 Sep 2018 21:44:35 +0300 Subject: SSL stream to HTTP2 server In-Reply-To: <20180913184233.GY56558@mdounin.ru> References: <20180913184233.GY56558@mdounin.ru> Message-ID: <4F6BBD1D-569D-4C39-9A16-9AB46C968A50@gmail.com> Hi, Are there any plans to add this feature? If one can run less software, and hitch can be avoided in some use cases, I think that would be a plus. Thanks for your answer. Best Regards, Danila > On 13 Sep 2018, at 21:42, Maxim Dounin wrote: > > Hello! > > On Thu, Sep 13, 2018 at 09:26:31PM +0300, Danila Vershinin wrote: > >> Hello, >> >> I'm trying to basically use nginx as a replacement for hitch (for Varnish). >> >> Request goes like this: browser -> nginx (stream SSL) -> varnish (HTTP2 on) -> backend HTTP >> >> stream { >> server { >> listen 443 ssl; >> ssl_certificate /etc/letsencrypt/live/example.com/fullchain.pem; >> ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem; >> proxy_pass 127.0.0.1:6081; >> proxy_protocol on; >> } >> } >> >> With the above, I'm getting HTTP/1.1 in the browser. >> When I replace nginx with hitch, I get HTTP/2. >> >> From Hitch docs: "Hitch will transmit the selected protocol as part of its PROXY header" Does nginx have the same capability? >> >> In general, is nginx capable of being an SSL terminator for HTTP/2 backends using TCP streams? (while delivering HTTP/2 to supporting clients). I'm interested in using TCP streams since only those will allow use of the PROXY protocol to upstream.
> > Currently no, as the stream module in nginx cannot be configured to > choose a particular ALPN protocol when terminating SSL. > > -- > Maxim Dounin > http://mdounin.ru/ > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From quintinpar at gmail.com Thu Sep 13 18:45:43 2018 From: quintinpar at gmail.com (Quintin Par) Date: Thu, 13 Sep 2018 11:45:43 -0700 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: <7CE38F03-2E2C-4B57-B122-BE915D6A6A7F@lucasrolff.com> References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> <7CE38F03-2E2C-4B57-B122-BE915D6A6A7F@lucasrolff.com> Message-ID: Hi Lucas, Thank you for this. GEM all over. I didn't know curl had --resolve. This is more of a generic question: How does one ensure cache consistency on all edges? Do people resort to a combination of expiry + background update + stale responding? What if one edge and the origin were updated to the latest version and I now want all the other 1000 edges updated within a minute, but the content expiry is 100 days? - Quintin On Wed, Sep 12, 2018 at 11:39 PM Lucas Rolff wrote: > > The cache is pretty big and I want to limit unnecessary requests if I > can. > > 30 GB of cache and ~ 400k hits isn't a lot. > > > Cloudflare is in front of my machines and I pay for load balancing, > firewall, Argo among others. So there is a cost per request. > > Doesn't matter if you pay for load balancing, firewall, Argo etc. --
> implementing a secondary caching layer won't increase your costs on the > CloudFlare side of things, because you're not communicating via CloudFlare > but rather between machines - you'd connect your X amount of locations to a > smaller amount of locations, doing direct traffic between your DigitalOcean > instances - so no CloudFlare costs involved. > > Communication between your CDN servers and your origin server also (IMO) > shouldn't go via any CloudFlare related products, so additional hits on the > origin will be "free" at the expense of a bit higher load - however since > it would be only a subset of locations that would request via the origin, > and they then serve as the origin for your other servers - you're > effectively decreasing the origin traffic. > > You should easily be able to get a 97-99% offload of your origin (in my > own setup, it's at 99.95% at this point), even without using a secondary > layer, and performance can get improved by using stuff such as: > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_background_update > > http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_use_stale_updating > > Nginx is smart enough to do a sub-request in the background to check if > the origin content updated (using Last-Modified or ETags, e.g.) - this way the > origin communication would be little anyway. > > The only Load Balancer / Argo / Firewall costs you should have is the "CDN > Server -> end user" traffic, and that won't increase or decrease by doing a > normal proxy_cache setup or a setup with a secondary cache layer. > > You also won't increase costs by doing a warmup of your CDN servers - you > could do something as simple as: > > curl -o /dev/null -k -I --resolve cdn.yourdomain.com:443:127.0.0.1 > https://cdn.yourdomain.com/img/logo.png > > You could do the same with python or another language if you're feeling > more comfortable there. 
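Lucas's curl warmup can be scripted in Python as he suggests. A minimal sketch, using the same placeholder hostname and a couple of made-up paths, that pins each request to the local nginx instance by connecting to 127.0.0.1 while still sending the CDN Host header (plain HTTP here, to keep the example short):

```python
import http.client

CDN_HOST = "cdn.yourdomain.com"                 # placeholder, as in the curl example
HOT_PATHS = ["/img/logo.png", "/css/main.css"]  # hypothetical "hot" URLs to prime

def warm(paths, host=CDN_HOST, addr="127.0.0.1", port=80):
    """HEAD each path against the nginx instance at addr:port, sending the
    CDN Host header so the right server block answers; a cache miss simply
    falls through to whatever proxy_pass is configured.
    Returns a list of (path, status) pairs."""
    results = []
    for path in paths:
        conn = http.client.HTTPConnection(addr, port, timeout=5)
        conn.request("HEAD", path, headers={"Host": host})
        results.append((path, conn.getresponse().status))
        conn.close()
    return results

# Example (assumes an nginx is listening locally):
#     for path, status in warm(HOT_PATHS):
#         print(path, status)
```

As with the curl one-liner, the warmup stays local to each edge server, so it adds no CloudFlare-side traffic.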
> > However, using a method like the above will result in your warmup being kept > "local", since you're resolving the cdn.yourdomain.com to localhost, > requests that are not yet cached will use whatever is configured in your > proxy_pass in the nginx config. > > > Admittedly I have a not so complex cache architecture. i.e. all cache > machines in front of the origin and it has worked so far > > I would say it's complex if you have to sync your content - many pull > based CDNs simply do a normal proxy_cache + proxy_pass setup, not syncing > content, and then using some of the nifty features (such as > proxy_cache_background_update and proxy_cache_use_stale_updating) to > decrease the origin traffic, or possibly implementing a secondary layer if > they're still doing a lot of origin traffic (e.g. because of having a lot > of "edge servers") - if you're like 10 servers, I wouldn't even consider a > secondary layer unless your origin is under heavy load and can't handle 10 > possible clients (CDN Servers). > > Best Regards, > Lucas Rolff > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -------------- next part -------------- An HTML attachment was scrubbed... URL: From lucas at lucasrolff.com Thu Sep 13 20:03:31 2018 From: lucas at lucasrolff.com (Lucas Rolff) Date: Thu, 13 Sep 2018 20:03:31 +0000 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> <7CE38F03-2E2C-4B57-B122-BE915D6A6A7F@lucasrolff.com> Message-ID: <7B059D67-8321-4410-9C7B-814A8839D552@lucasrolff.com> > How does one ensure cache consistency on all edges? 
I wouldn't - you can never really rely on anything being consistently cached, there will always be stuff that doesn't follow the standards and thus can give an inconsistent state for one or more users. What I'd do would simply be to purge the files whenever needed (and possibly warm them up if you want them to be "hot" when visitors arrive), sure the first 1-2 visitors in each location might have a bit slower request, but that's about it. Alternatively you could just put a super low cache-control; when you're using proxy_cache_background_update and proxy_cache_use_stale_updating, nginx will ask the origin server if the file has changed - so if it hasn't you'll simply get a 304 from the origin (if the origin supports it) - so you'll do more requests to the origin, but traffic will be minimal because it just returns 304 not modified (plus some more headers). Best Regards, Lucas Rolff From peter_booth at me.com Fri Sep 14 01:22:16 2018 From: peter_booth at me.com (Peter Booth) Date: Thu, 13 Sep 2018 21:22:16 -0400 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: <7B059D67-8321-4410-9C7B-814A8839D552@lucasrolff.com> References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> <7CE38F03-2E2C-4B57-B122-BE915D6A6A7F@lucasrolff.com> <7B059D67-8321-4410-9C7B-814A8839D552@lucasrolff.com> Message-ID: One more approach is to not change the contents of resources without also changing their name. One example would be the cache_key feature in Rails, where resources have a path based on some ID and their updated_at value. Whenever you modify a resource it automatically expires. Sent from my iPhone On Sep 13, 2018, at 4:03 PM, Lucas Rolff wrote: >> How does one ensure cache consistency on all edges? 
> I wouldn't - you can never really rely on anything being consistently cached, there will always be stuff that doesn't follow the standards and thus can give an inconsistent state for one or more users. > > What I'd do would simply be to purge the files whenever needed (and possibly warm them up if you want them to be "hot" when visitors arrive), sure the first 1-2 visitors in each location might have a bit slower request, but that's about it. > > Alternatively you could just put a super low cache-control; when you're using proxy_cache_background_update and proxy_cache_use_stale_updating, nginx will ask the origin server if the file has changed - so if it hasn't you'll simply get a 304 from the origin (if the origin supports it) - so you'll do more requests to the origin, but traffic will be minimal because it just returns 304 not modified (plus some more headers). > > Best Regards, > Lucas Rolff > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From nginx-forum at forum.nginx.org Fri Sep 14 06:20:35 2018 From: nginx-forum at forum.nginx.org (itpp2012) Date: Fri, 14 Sep 2018 02:20:35 -0400 Subject: Avoiding Nginx restart when rsyncing cache across machines In-Reply-To: References: Message-ID: <0912cee0416e53ba4bec1fc66d39c7e7.NginxMailingListEnglish@forum.nginx.org> It is fairly simple to hack nginx and use Lua to reload the cache, timed or via a request. The code is already there, it's just a matter of calling it again. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281179,281225#msg-281225 From nginx-forum at forum.nginx.org Fri Sep 14 07:52:03 2018 From: nginx-forum at forum.nginx.org (orsolya.magos) Date: Fri, 14 Sep 2018 03:52:03 -0400 Subject: nginx as nonroot - setsockopt not permitted Message-ID: <225c2834306a8b301399ccacc0095a09.NginxMailingListEnglish@forum.nginx.org> Hi, we use nginx which load-balances toward our snmptrapd. 
Everything is working fine if we start nginx with root. We would like to change it so nginx (workers) would start with nginx user. I couldn't make it work, do you have any idea what additional things I can set or check? nginx -V nginx version: nginx/1.12.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --prefix=/usr/share/nginx --sbin-path=/usr/sbin/nginx --modules-path=/usr/lib64/nginx/modules --conf-path=/etc/nginx/nginx.conf --error-log-path=/var/log/nginx/error.log --http-log-path=/var/log/nginx/access.log --http-client-body-temp-path=/var/lib/nginx/tmp/client_body --http-proxy-temp-path=/var/lib/nginx/tmp/proxy --http-fastcgi-temp-path=/var/lib/nginx/tmp/fastcgi --http-uwsgi-temp-path=/var/lib/nginx/tmp/uwsgi --http-scgi-temp-path=/var/lib/nginx/tmp/scgi --pid-path=/run/nginx.pid --lock-path=/run/lock/subsys/nginx --user=nginx --group=nginx --with-file-aio --with-ipv6 --with-http_auth_request_module --with-http_ssl_module --with-http_v2_module --with-http_realip_module --with-http_addition_module --with-http_xslt_module=dynamic --with-http_image_filter_module=dynamic --with-http_geoip_module=dynamic --with-http_sub_module --with-http_dav_module --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_degradation_module --with-http_slice_module --with-http_stub_status_module --with-http_perl_module=dynamic --with-mail=dynamic --with-mail_ssl_module --with-pcre --with-pcre-jit --with-stream=dynamic --with-stream_ssl_module --with-google_perftools_module --with-debug --with-cc-opt='-O2 -g -pipe -Wall -Wp,-D_FORTIFY_SOURCE=2 -fexceptions -fstack-protector-strong --param=ssp-buffer-size=4 -grecord-gcc-switches -specs=/usr/lib/rpm/redhat/redhat-hardened-cc1 -m64 -mtune=generic' --with-ld-opt='-Wl,-z,relro -specs=/usr/lib/rpm/redhat/redhat-hardened-ld -Wl,-E' 
uname -a Linux c-1 4.14.62-1.el7.centos.ncir.1.x86_64 #1 SMP Wed Aug 15 04:24:17 EEST 2018 x86_64 x86_64 x86_64 GNU/Linux -------------------------------------------------------------------------------------------------- observation 0) with root user (master+workers) everything works fine, snmptrapd gets the traps -------------------------------------------------------------------------------------------------- observation 1) idea: playing with setcap config: * with nginx user (master is root, workers are started with nginx user, so in /etc/nginx/nginx.conf 'user nginx;' line is included) root 2703077 0.0 0.0 59028 2280 ? Ss 11:34 0:00 nginx: master process /usr/sbin/nginx nginx 2703078 0.0 0.0 59476 4160 ? S 11:34 0:00 nginx: worker process nginx 2703079 0.0 0.0 59476 4840 ? S 11:34 0:00 nginx: worker process nginx 2703080 0.0 0.0 59476 4840 ? S 11:34 0:00 nginx: worker process ... etc. * upstream port is 162, snmptrapd is listening there * I've tried both capabilities: setcap cap_net_bind_service=+ep /sbin/nginx setcap cap_net_admin+ep /sbin/nginx * /etc/nginx/conf.d/stream/snmptrap.conf upstream snmptrap_upstream { #server x.y.z.226:162; #commented out for easier testing #server x.y.z.227:162; #commented out for easier testing server x.y.z.228:162; } server { listen x.y.z.225:162 udp; proxy_pass snmptrap_upstream; proxy_timeout 1s; proxy_responses 0; proxy_bind $remote_addr transparent; error_log /var/log/nginx/snmptrap.log; } * also tried out switching off iptables netstat -ulpn | grep 162 udp 0 0 x.y.z.228:162 0.0.0.0:* 2748327/snmptrapd udp 0 0 x.y.z.225:162 0.0.0.0:* 2743096/nginx: mast /var/log/nginx/snmptrap.log: 2018/09/12 11:55:04 [alert] 2739785#0: *23 setsockopt(IP_TRANSPARENT) failed (1: Operation not permitted) while connecting to upstream, udp client: x.y.z.225, server: x.y.z.225:162, upstream: "x.y.z.228:162", bytes from/to client:5/0, bytes from/to upstream:0/0 /var/log/nginx/stream.log: error 500 is coming 2018-09-12T11:55:04+03:00 x.y.z.225 UDP 500 
0 5 0.000 "0" "0" "0.000" -------------------------------------------------------------------------------------------------- observation 2) idea: trying another upstream port (>1024), but still the same: config: * with nginx user (master is root, workers are started with nginx user, so in /etc/nginx/nginx.conf 'user nginx;' line is included) * upstream port is 4162 * /etc/nginx/conf.d/stream/snmptrap.conf upstream snmptrap_upstream { #server x.y.z.226:162; #commented out for easier testing #server x.y.z.227:162; #commented out for easier testing server x.y.z.228:4162; } server { listen x.y.z.225:162 udp; proxy_pass snmptrap_upstream; proxy_timeout 1s; proxy_responses 0; proxy_bind $remote_addr transparent; error_log /var/log/nginx/snmptrap.log; } * also tried out switching off iptables netstat -ulpn | grep 162 udp 0 0 x.y.z.228:4162 0.0.0.0:* 2748327/snmptrapd udp 0 0 x.y.z.225:162 0.0.0.0:* 2743096/nginx: mast /var/log/nginx/snmptrap.log: 2018/09/12 11:08:03 [alert] 121472#0: *112642 setsockopt(IP_TRANSPARENT) failed (1: Operation not permitted) while connecting to upstream, udp client: x.y.z.225, server: x.y.z.225:162, upstream: "x.y.z.228:4162", bytes from/to client:5/0, bytes from/to upstream:0/0 /var/log/nginx/stream.log: error 500 is coming 2018-09-12T11:08:03+03:00 x.y.z.225 UDP 500 0 5 0.000 "0" "0" "0.000" Thanks in advance: Orsi Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281226,281226#msg-281226 From mdounin at mdounin.ru Fri Sep 14 11:58:06 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Fri, 14 Sep 2018 14:58:06 +0300 Subject: nginx as nonroot - setsockopt not permitted In-Reply-To: <225c2834306a8b301399ccacc0095a09.NginxMailingListEnglish@forum.nginx.org> References: <225c2834306a8b301399ccacc0095a09.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180914115806.GA56558@mdounin.ru> Hello! On Fri, Sep 14, 2018 at 03:52:03AM -0400, orsolya.magos wrote: > we use nginx which load-balances toward our snmptrapd. 
Everything is working > fine if we start nginx with root. We would like to change it so nginx > (workers) would start with nginx user. I couldn't make it work, do you have > any idea what additional things I can set or check? > > nginx -V > nginx version: nginx/1.12.2 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) Update to nginx 1.13.8+, it should be able to use transparent proxying on Linux without workers being run as root: *) Feature: now nginx automatically preserves the CAP_NET_RAW capability in worker processes when using the "transparent" parameter of the "proxy_bind", "fastcgi_bind", "memcached_bind", "scgi_bind", and "uwsgi_bind" directives. Alternatively, consider not using "proxy_bind ... transparent". See docs here for additional details: http://nginx.org/en/docs/stream/ngx_stream_proxy_module.html#proxy_bind -- Maxim Dounin http://mdounin.ru/ From venefax at gmail.com Fri Sep 14 12:05:57 2018 From: venefax at gmail.com (Saint Michael) Date: Fri, 14 Sep 2018 08:05:57 -0400 Subject: Question In-Reply-To: References: <20180912144558.GR56558@mdounin.ru> <6A9596AC-6977-49AC-B1E8-A10EC0AF2452@lucasrolff.com> <2A29C8E7-5D5F-401F-87C4-4DDFC8A42766@me.com> <7CE38F03-2E2C-4B57-B122-BE915D6A6A7F@lucasrolff.com> <7B059D67-8321-4410-9C7B-814A8839D552@lucasrolff.com> Message-ID: > > I am a new developer and need to publish several database tables with > relationships one to many, etc. What web framework is fastest to learn? I > am looking at Mojolicious or Catalyst, but don't know if they are necessary > or not. For a new project, what parts would you choose? I have read that > Nginx is the best application server, but don't know if I need anything else. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Fri Sep 14 13:13:48 2018 From: nginx-forum at forum.nginx.org (orsolya.magos) Date: Fri, 14 Sep 2018 09:13:48 -0400 Subject: nginx as nonroot - setsockopt not permitted In-Reply-To: <20180914115806.GA56558@mdounin.ru> References: <20180914115806.GA56558@mdounin.ru> Message-ID: <6dbc0cb17b799d94adc2dc1d39c19cc6.NginxMailingListEnglish@forum.nginx.org> Wow great, thanks Maxim for the super fast answer! We are using the EPEL version, still investigating the possibilities of a version change. Br, Orsi Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281226,281230#msg-281230 From gheorghe.nica at baml.com Fri Sep 14 20:59:16 2018 From: gheorghe.nica at baml.com (Nica, George) Date: Fri, 14 Sep 2018 20:59:16 +0000 Subject: WWW-Authenticate in 200 OK response Message-ID: <6116053d258a4daca656eda87456ea81@baml.com> I am currently working on a multi-tier application, trying to use nginx as a load balancer. The issue is that nginx seems to be adding WWW-Authenticate in the 200 OK response after the Kerberos authentication has taken place, which confuses the client. (The client could potentially ignore it, but that's possibly another issue.) Not sure this is expected... Any suggestion on how to avoid or work around this? 
[2018-09-14 14:46:14.471] root INFO: @@@@@@ Connecting to: 'http://host1:39609/url1' send: 'GET /url1 HTTP/1.1\r\nX-Client-User-Name: uname1\r\nAccept-Encoding: gzip\r\nConnection: close\r\nAccept: application/json\r\nUser-Agent: qz.qzdev.run\r\nHost: host1:39609\r\nX-Client-Host-Name: host2\r\nContent-Type: application/json\r\n\r\n' reply: 'HTTP/1.1 401 Unauthorized\r\n' header: Server: nginx/1.14.0 header: Date: Fri, 14 Sep 2018 18:46:14 GMT header: Content-Type: text/html header: Content-Length: 195 header: Connection: close header: WWW-Authenticate: Negotiate header: WWW-Authenticate: Basic realm="" header: Access-Control-Allow-Credentials: true send: 'GET /url1 HTTP/1.1\r\nX-Client-User-Name: uname1\r\nAccept-Encoding: gzip\r\nConnection: close\r\nAccept: application/json\r\nUser-Agent: qz.qzdev.run\r\nHost: host1:39609\r\nX-Client-Host-Name: host2\r\nContent-Type: application/json\r\nAuthorization: Negotiate YII........................ AghEw==\r\n\r\n' reply: 'HTTP/1.1 200 OK\r\n' header: Server: nginx/1.14.0 header: Date: Fri, 14 Sep 2018 18:46:14 GMT header: Content-Type: application/json header: Content-Length: 430908 header: Connection: close header: WWW-Authenticate: Negotiate YI .....gA== header: WWW-Authenticate: Basic realm="" header: Set-Cookie: session=ey...ZW4; HttpOnly; Path=/ header: Access-Control-Allow-Credentials: true [2018-09-14 14:46:14.779] client_http_auth CRITICAL: GSSAPI failed! Best regards, George ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From mdounin at mdounin.ru Fri Sep 14 23:19:16 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 15 Sep 2018 02:19:16 +0300 Subject: WWW-Authenticate in 200 OK response In-Reply-To: <6116053d258a4daca656eda87456ea81@baml.com> References: <6116053d258a4daca656eda87456ea81@baml.com> Message-ID: <20180914231916.GC56558@mdounin.ru> Hello! On Fri, Sep 14, 2018 at 08:59:16PM +0000, Nica, George via nginx wrote: > I am currently working on a multi-tier application, trying to use nginx as load balancer. > The issue is that nginx seems to be adding WWW-Authenticate in the 200 OK response after the Kerberos authentication has taken place, which confuses the client. (The client could potentially ignore it, but that's possibly another issue.) > Not sure this is expected... Any suggestion on how to avoid or work around this? > > [2018-09-14 14:46:14.471] root INFO: @@@@@@ Connecting to: 'http://host1:39609/url1' > send: 'GET /url1 HTTP/1.1\r\nX-Client-User-Name: uname1\r\nAccept-Encoding: gzip\r\nConnection: close\r\nAccept: application/json\r\nUser-Agent: qz.qzdev.run\r\nHost: host1:39609\r\nX-Client-Host-Name: host2\r\nContent-Type: application/json\r\n\r\n' > reply: 'HTTP/1.1 401 Unauthorized\r\n' > header: Server: nginx/1.14.0 > header: Date: Fri, 14 Sep 2018 18:46:14 GMT > header: Content-Type: text/html > header: Content-Length: 195 > header: Connection: close > header: WWW-Authenticate: Negotiate > header: WWW-Authenticate: Basic realm="" > header: Access-Control-Allow-Credentials: true > send: 'GET /url1 HTTP/1.1\r\nX-Client-User-Name: uname1\r\nAccept-Encoding: gzip\r\nConnection: close\r\nAccept: application/json\r\nUser-Agent: qz.qzdev.run\r\nHost: host1:39609\r\nX-Client-Host-Name: host2\r\nContent-Type: application/json\r\nAuthorization: Negotiate YII........................ 
AghEw==\r\n\r\n' > reply: 'HTTP/1.1 200 OK\r\n' > header: Server: nginx/1.14.0 > header: Date: Fri, 14 Sep 2018 18:46:14 GMT > header: Content-Type: application/json > header: Content-Length: 430908 > header: Connection: close > header: WWW-Authenticate: Negotiate YI .....gA== > header: WWW-Authenticate: Basic realm="" > header: Set-Cookie: session=ey...ZW4; HttpOnly; Path=/ > header: Access-Control-Allow-Credentials: true > [2018-09-14 14:46:14.779] client_http_auth CRITICAL: GSSAPI failed! It looks like you are trying to use "WWW-Authenticate: Negotiate" AKA Integrated Windows Authentication, AKA NTLM authentication. Unfortunately, this authentication scheme was designed without following HTTP basic concepts, and authenticates a connection instead of requests. As such, this authentication scheme cannot work through a generic HTTP proxy. For NTLM authentication to work through a proxy, it needs to keep connections to the backend server alive and bound to corresponding client connections. The best solution would be to avoid using NTLM authentication for anything more complex than directly connected servers in intranets. If you can't do this for some reason, consider using the "ntlm" directive, which is available as part of our commercial version, see http://nginx.org/r/ntlm. -- Maxim Dounin http://mdounin.ru/ From marwan.bayadsi at amdocs.com Mon Sep 17 12:18:28 2018 From: marwan.bayadsi at amdocs.com (Marwan Bayadsi) Date: Mon, 17 Sep 2018 12:18:28 +0000 Subject: How to use dynamic IP in resolver directive when NGINX installed on Multi Nodes Openshift cluster Message-ID: <8C57D97BF3785A4A8066B7EB3E6CEB1801F1F8306E@ILRAADAGBE3.corp.amdocs.com> Hi, We're from Amdocs and trying to install an NGINX reverse proxy server on an openshift cluster (6 nodes), and as part of its configuration, we must specify the DNS IP in the 'resolver' directive (because proxy_pass has some parameters). Nginx.conf: location ~ { resolver DNS_IP valid=5s; proxy_pass ..... 
} But - since it's an OCP cluster (multiple machines), we don't know which IP to give - as it depends on which node the nginx pod was started on (in /etc/resolv.conf - it has the relevant node IP). Also - we cannot put all IPs in parameters. We ask for advice on: 1. How we can force nginx to use /etc/resolv.conf even if proxy_pass has parameters. This will solve our issues. 2. If #1 is not possible, which DNS name we can put for resolver within an open-shift cluster? For example - in K8S, we solved the issue by using kube-dns: resolver kube-dns.kube-system.svc.cluster.local valid=5s; Which string is relevant for DNS inside an open-shift cluster? Please assist. Thanks, Marwan This message and the information contained herein is proprietary and confidential and subject to the Amdocs policy statement, you may review at https://www.amdocs.com/about/email-disclaimer -------------- next part -------------- An HTML attachment was scrubbed... URL: From gheorghe.nica at baml.com Mon Sep 17 21:18:51 2018 From: gheorghe.nica at baml.com (Nica, George) Date: Mon, 17 Sep 2018 21:18:51 +0000 Subject: WWW-Authenticate in 200 OK response In-Reply-To: <20180914231916.GC56558@mdounin.ru> References: <6116053d258a4daca656eda87456ea81@baml.com> <20180914231916.GC56558@mdounin.ru> Message-ID: <924609531a934ca08dfb4f72abacb437@baml.com> Thank you Maxim. We are using Kerberos on Linux, with per-request authentication; we are not trying to use session-level authentication. Would the ntlm module help here? We are already using spnego-http-auth-nginx-module to help with SPNego/GSSAPI. So our issue/incompatibility seems to be between backend / nginx with spnego-http-auth-nginx-module / client. The first two send/pass the extra headers on the response, and the client gets confused by it. As you say, nginx is a generic HTTP proxy here, so we will have to figure things out with our server / client / spnego-http-auth-nginx-module. 
Are there any other suggested approaches regarding using nginx and Kerberos? FYI, this is the output of "nginx -V": nginx version: nginx/1.14.0 built by gcc 7.3.0 (GCC) built with OpenSSL 1.1.0h 27 Mar 2018 TLS SNI support enabled configure arguments: --without-http_rewrite_module --without-http_gzip_module --with-http_stub_status_module --with-ld-opt='-L /efs/dist/kerberos/mit/1.14.6/exec/lib' --add-module=spnego-http-auth-nginx-module --with-http_ssl_module --with-openssl=/home/gnica/nginx/1.14.0_2/common/usr/lib/openssl --prefix=/home/gnica/nginx/1.14.0_2/common/usr -----Original Message----- From: Maxim Dounin [mailto:mdounin at mdounin.ru] Sent: Friday, September 14, 2018 7:19 PM To: Nica, George via nginx Cc: Nica, George Subject: Re: WWW-Authenticate in 200 OK response Hello! On Fri, Sep 14, 2018 at 08:59:16PM +0000, Nica, George via nginx wrote: > I am currently working on a multi-tier application, trying to use nginx as load balancer. > The issue is that nginx seems to be adding WWW-Authenticate in the 200 OK response after the Kerberos authentication has taken place, which confuses the client. (The client could potentially ignore it, but that's possibly another issue.) > Not sure this is expected... Any suggestion on how to avoid or work around this? 
> > [2018-09-14 14:46:14.471] root INFO: @@@@@@ Connecting to: 'http://host1:39609/url1' > send: 'GET /url1 HTTP/1.1\r\nX-Client-User-Name: uname1\r\nAccept-Encoding: gzip\r\nConnection: close\r\nAccept: application/json\r\nUser-Agent: qz.qzdev.run\r\nHost: host1:39609\r\nX-Client-Host-Name: host2\r\nContent-Type: application/json\r\n\r\n' > reply: 'HTTP/1.1 401 Unauthorized\r\n' > header: Server: nginx/1.14.0 > header: Date: Fri, 14 Sep 2018 18:46:14 GMT > header: Content-Type: text/html > header: Content-Length: 195 > header: Connection: close > header: WWW-Authenticate: Negotiate > header: WWW-Authenticate: Basic realm="" > header: Access-Control-Allow-Credentials: true > send: 'GET /url1 HTTP/1.1\r\nX-Client-User-Name: uname1\r\nAccept-Encoding: gzip\r\nConnection: close\r\nAccept: application/json\r\nUser-Agent: qz.qzdev.run\r\nHost: host1:39609\r\nX-Client-Host-Name: host2\r\nContent-Type: application/json\r\nAuthorization: Negotiate YII........................ 
As such, this authentication scheme cannot work though a generic HTTP proxy. For NTLM authentication to work though a proxy, it needs to keep connections to the backend server alive and bound to corresponding client connections. The best solution would be to avoid using NTLM authentication for anything more complex than directly connected servers in intranets. If you can't do this for some reason, consider using the "ntlm" directive, which is available as part of our commercial version, see https://urldefense.proofpoint.com/v2/url?u=http-3A__nginx.org_r_ntlm&d=DwIBAg&c=SFszdw3oxIkTvaP4xmzq_apLU3uL-3SxdAPNkldf__Q&r=bLrGf3qOPfa7FwixFKSI5EuEAlEuxbglrK8414lC4wY&m=kmzwidjXfoCyejfnDKRo7J4AmvdWhFwVMc3SrQ5G24k&s=V7CbzPjbpkSiNhNIaDR5P5la1fLfM5-MC6MO-KmhKj8&e=. -- Maxim Dounin https://urldefense.proofpoint.com/v2/url?u=http-3A__mdounin.ru_&d=DwIBAg&c=SFszdw3oxIkTvaP4xmzq_apLU3uL-3SxdAPNkldf__Q&r=bLrGf3qOPfa7FwixFKSI5EuEAlEuxbglrK8414lC4wY&m=kmzwidjXfoCyejfnDKRo7J4AmvdWhFwVMc3SrQ5G24k&s=56j1udQaqKDK12PhW-jGwz89_8ZMgUhTZ2tCfJDAaSc&e= ---------------------------------------------------------------------- This message, and any attachments, is for the intended recipient(s) only, may contain information that is privileged, confidential and/or proprietary and subject to important terms and conditions available at http://www.bankofamerica.com/emaildisclaimer. If you are not the intended recipient, please delete this message. From pierre at couderc.eu Mon Sep 17 22:10:22 2018 From: pierre at couderc.eu (Pierre Couderc) Date: Tue, 18 Sep 2018 00:10:22 +0200 Subject: A fatal 301 redirect... Message-ID: <8da4e019-a4a5-16c8-5efd-45f645428e95@couderc.eu> I did use wrongly a 301 redirect.... I have corrected now, but the redirect remains. I use wget : nous at pcouderc:~$ wget https://www.ppp.fr --2018-09-17 23:52:44--? https://www.ppp.fr/ Resolving www.ppp.fr (www.ppp.fr)... 
2a01:e34:eeaf:c5f0::fee6:854e, 78.234.252.95 Connecting to www.ppp.fr (www.ppp.fr)|2a01:e34:eeaf:c5f0::fee6:854e|:443... connected. HTTP request sent, awaiting response... 301 Moved Permanently Location: https://test.ppp.fr/ [following] --2018-09-17 23:52:44--? https://test.ppp.fr/ Resolving test.ppp.fr (test.ppp.fr)... 2a01:e34:eeaf:c5f0::fee6:854e, 78.234.252.95 Connecting to test.ppp.fr (test.ppp.fr)|2a01:e34:eeaf:c5f0::fee6:854e|:443... connected. HTTP request sent, awaiting response... 200 OK Length: unspecified [text/html] Saving to: ?index.html.3?.... In access.log : 2a01:e34:eeaf:c5f0::feb1:b1c9 - - [18/Sep/2018:00:04:34 +0200] "GET / HTTP/2.0" 200 21511 "-" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81 Safari/537.36" 2a01:e34:eeaf:c5f0::feb1:b1c9 - - [18/Sep/2018:00:04:34 +0200] "GET /fuveau.png HTTP/2.0" 404 271 "https://test.ppp.fr/" "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81 Safari/537.36" I am pretty sure that I have removed the fatal redirect, and I have checked that ? the only 301 remaining are on port 80 : ??? location / { ??????? return 301 https://$server_name$request_uri; ??? } So I suppose there is a cache somewhere where nginx keeps its secret and fatal 301 ? How can I remove it ? Thanks PC From jeff.dyke at gmail.com Mon Sep 17 22:20:38 2018 From: jeff.dyke at gmail.com (Jeff Dyke) Date: Mon, 17 Sep 2018 18:20:38 -0400 Subject: A fatal 301 redirect... In-Reply-To: <8da4e019-a4a5-16c8-5efd-45f645428e95@couderc.eu> References: <8da4e019-a4a5-16c8-5efd-45f645428e95@couderc.eu> Message-ID: I think this problem is better solved allowing 80 to be open and a separate server block. Since i terminate from haproxy, from memory something like this, in the same vhost file. Obviously you can listen here on H/2 if you want to as well. 
server { listen 80 default_server; server_name test.ppp.fr; return 301 https://$server_name$request_uri; } Best, jeff On Mon, Sep 17, 2018 at 6:10 PM Pierre Couderc wrote: > I did use wrongly a 301 redirect.... > > I have corrected now, but the redirect remains. > > I use wget : > > nous at pcouderc:~$ wget https://www.ppp.fr > --2018-09-17 23:52:44-- https://www.ppp.fr/ > Resolving www.ppp.fr (www.ppp.fr)... 2a01:e34:eeaf:c5f0::fee6:854e, > 78.234.252.95 > Connecting to www.ppp.fr > (www.ppp.fr)|2a01:e34:eeaf:c5f0::fee6:854e|:443... connected. > HTTP request sent, awaiting response... 301 Moved Permanently > Location: https://test.ppp.fr/ [following] > --2018-09-17 23:52:44-- https://test.ppp.fr/ > Resolving test.ppp.fr (test.ppp.fr)... 2a01:e34:eeaf:c5f0::fee6:854e, > 78.234.252.95 > Connecting to test.ppp.fr > (test.ppp.fr)|2a01:e34:eeaf:c5f0::fee6:854e|:443... connected. > HTTP request sent, awaiting response... 200 OK > Length: unspecified [text/html] > Saving to: ?index.html.3?.... > > In access.log : > > 2a01:e34:eeaf:c5f0::feb1:b1c9 - - [18/Sep/2018:00:04:34 +0200] "GET / > HTTP/2.0" 200 21511 "-" "Mozilla/5.0 (X11; Linux x86_64) > AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81 Safari/537.36" > 2a01:e34:eeaf:c5f0::feb1:b1c9 - - [18/Sep/2018:00:04:34 +0200] "GET > /fuveau.png HTTP/2.0" 404 271 "https://test.ppp.fr/" "Mozilla/5.0 (X11; > Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81 > Safari/537.36" > > > I am pretty sure that I have removed the fatal redirect, and I have > checked that the only 301 remaining are on port 80 : > location / { > return 301 https://$server_name$request_uri; > } > > So I suppose there is a cache somewhere where nginx keeps its secret and > fatal 301 ? How can I remove it ? 
>
> Thanks
>
> PC
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Tue Sep 18 00:55:32 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Sep 2018 03:55:32 +0300 Subject: A fatal 301 redirect... In-Reply-To: <8da4e019-a4a5-16c8-5efd-45f645428e95@couderc.eu> References: <8da4e019-a4a5-16c8-5efd-45f645428e95@couderc.eu> Message-ID: <20180918005532.GL56558@mdounin.ru> Hello! On Tue, Sep 18, 2018 at 12:10:22AM +0200, Pierre Couderc wrote:
> I did use wrongly a 301 redirect....
>
> I have corrected now, but the redirect remains.
>
> I use wget :
>
> nous at pcouderc:~$ wget https://www.ppp.fr
> --2018-09-17 23:52:44-- https://www.ppp.fr/
> Resolving www.ppp.fr (www.ppp.fr)... 2a01:e34:eeaf:c5f0::fee6:854e,
> 78.234.252.95
> Connecting to www.ppp.fr
> (www.ppp.fr)|2a01:e34:eeaf:c5f0::fee6:854e|:443... connected.
> HTTP request sent, awaiting response... 301 Moved Permanently
> Location: https://test.ppp.fr/ [following]
> --2018-09-17 23:52:44-- https://test.ppp.fr/
> Resolving test.ppp.fr (test.ppp.fr)... 2a01:e34:eeaf:c5f0::fee6:854e,
> 78.234.252.95
> Connecting to test.ppp.fr
> (test.ppp.fr)|2a01:e34:eeaf:c5f0::fee6:854e|:443... connected.
> HTTP request sent, awaiting response... 200 OK
> Length: unspecified [text/html]
> Saving to: 'index.html.3'....
>
> In access.log :
>
> 2a01:e34:eeaf:c5f0::feb1:b1c9 - - [18/Sep/2018:00:04:34 +0200] "GET /
> HTTP/2.0" 200 21511 "-" "Mozilla/5.0 (X11; Linux x86_64)
> AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81 Safari/537.36"
> 2a01:e34:eeaf:c5f0::feb1:b1c9 - - [18/Sep/2018:00:04:34 +0200] "GET
> /fuveau.png HTTP/2.0" 404 271 "https://test.ppp.fr/" "Mozilla/5.0 (X11;
> Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/69.0.3497.81
> Safari/537.36"
>
>
> I am pretty sure that I have removed the fatal redirect, and I have
> checked that the only 301 remaining are on port 80 :
> location / {
> return 301 https://$server_name$request_uri;
> }
>
> So I suppose there is a cache somewhere where nginx keeps its secret and
> fatal 301 ? How can I remove it ?

In no particular order:

- There are no log lines in the access log corresponding to the
requests you've made with wget. This means that either you are
connecting to the wrong server (check the IP address) or logging
is not properly configured (check your logging configuration).

- There are no "secret caches" in nginx. The only caches are ones
explicitly configured using corresponding configuration
directives - usually proxy_cache for HTTP proxying.

- A common mistake is to change configuration without actually
reloading it. Make sure to reload the configuration after
changes, and make sure to look into the error log to find out if it
was actually reloaded or the configuration reload failed.

- If in doubt, looking into the full configuration as shown with "nginx
-T" might help. If still in doubt, the most advanced (yet
low-level) instrument is the debug log
(http://nginx.org/en/docs/debugging_log.html).
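The "nginx -T" suggestion is worth spelling out: it dumps the complete effective configuration, including all included files, so stray redirects can be found with an ordinary grep. The sketch below runs the same grep against a throwaway config file, since the real command needs a live nginx install:

```shell
# On a real server you would run:  nginx -T | grep -n 'return 301'
# The same idea, demonstrated on a throwaway config file instead:
cat > /tmp/demo.conf <<'EOF'
server {
    listen 80;
    location / { return 301 https://$server_name$request_uri; }
}
EOF
grep -n 'return 301' /tmp/demo.conf
```

Any "return 301" that survives outside the port-80 block would show up here with its line number.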
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Tue Sep 18 01:58:42 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Sep 2018 04:58:42 +0300 Subject: WWW-Authenticate in 200 OK response In-Reply-To: <924609531a934ca08dfb4f72abacb437@baml.com> References: <6116053d258a4daca656eda87456ea81@baml.com> <20180914231916.GC56558@mdounin.ru> <924609531a934ca08dfb4f72abacb437@baml.com> Message-ID: <20180918015842.GM56558@mdounin.ru> Hello! On Mon, Sep 17, 2018 at 09:18:51PM +0000, Nica, George via nginx wrote:
> Thank you Maxim.
> We are using Kerberos, on Linux. And per-request authentication, we are not trying to use session-level authentication.
> Would the ntlm module help here?

The problem is with "WWW-Authenticate: Negotiate". It is specified by rfc4559, and it's screwed up: it authenticates a connection, and hence cannot properly work through proxies. See further details in the RFC, and the errata notice for it:

https://tools.ietf.org/html/rfc4559#section-6
https://www.rfc-editor.org/errata_search.php?rfc=4559

If you are trying to proxy to a backend which uses "WWW-Authenticate: Negotiate", this is likely the problem you are facing: it cannot work through a proxy. If it's the case, the "ntlm" directive will help. But if you are instead trying to authenticate clients on the nginx side, proxying is probably not relevant, see below.

> We are already using spnego-http-auth-nginx-module to help with SPNego/GSSAPI.
> So our issue/incompatibility seems to be between backend / nginx with spnego-http-auth-nginx-module / client. The first two sending/passing the extra headers on the response and the client getting confused by it.
> As you say, nginx is a generic HTTP proxy here, so we will have to figure things out with our server / client / spnego-http-auth-nginx-module.
> Are there any other suggested approaches regarding using nginx and Kerberos?
If the authentication is expected to happen on the nginx side, this can work - assuming the spnego-http-auth-nginx-module does the right thing. Looking at the spnego-http-auth-nginx-module I suspect that both "WWW-Authenticate" headers are generated by the module. Try looking into the module documentation and sources to find out how to use it properly. In particular, the docs suggest that

auth_gss_allow_basic_fallback off;

will disable "WWW-Authenticate: Basic", and it might make your client happy. -- Maxim Dounin http://mdounin.ru/ From pierre at couderc.eu Tue Sep 18 05:10:17 2018 From: pierre at couderc.eu (Pierre Couderc) Date: Tue, 18 Sep 2018 07:10:17 +0200 Subject: A fatal 301 redirect... In-Reply-To: <20180918005532.GL56558@mdounin.ru> References: <8da4e019-a4a5-16c8-5efd-45f645428e95@couderc.eu> <20180918005532.GL56558@mdounin.ru> Message-ID: On 09/18/2018 02:55 AM, Maxim Dounin wrote:
> Hello!
>
> On Tue, Sep 18, 2018 at 12:10:22AM +0200, Pierre Couderc wrote:
>
>> I did use wrongly a 301 redirect....
>>
>> I have corrected now, but the redirect remains.
>>
>> I
>> In no particular order:
>>
>> - There are no log lines in the access log corresponding to the
>> requests you've made with wget. This means that either you are
>> connecting to the wrong server (check the IP address) or logging
>> is not properly configured (check your logging configuration).
Mmm... I double check
>>
>> - There are no "secret caches" in nginx. The only caches are ones
>> explicitly configured using corresponding configuration
>> directives - us
Thank you, I needed this answer.
>> ually proxy_cache for HTTP proxying.
>>
>> - A common mistake is to change configuration without actually
>> reloading it. Make sure to reload the configuration after
>> changes, and make sure to look into error log to find out if it
>> was actually reloaded or the configuration reload failed.
Oh, I have indeed reloaded or restarted nginx 10 or 100 times !!
>> - If in doubt, looking into full configuration as shown with "nginx
>> -T" might help.
Oh yes, after painful experience (all sites stopped!!), I never change a line in a config file without testing it this way!!
>> If still in doubt, the most advanced (yet
>> low-level) instrument is debug log
>> (http://nginx.org/en/docs/debugging_log.html).
Thank you, this is the only way... Thank you for helping me to think. PC From anoopalias01 at gmail.com Tue Sep 18 07:55:56 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Tue, 18 Sep 2018 13:25:56 +0530 Subject: Nginx compile with OpenSSL 1.1.1 and DESTDIR= Message-ID: Hi, I am trying to compile nginx 1.15.3 (mainline) with OpenSSL 1.1.1

# ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-openssl=./openssl-1.1.1
# make DESTDIR=/opt/test install

But this errors out with
---------------
cc: error: ./openssl-1.1.1/.openssl/lib/libssl.a: No such file or directory
cc: error: ./openssl-1.1.1/.openssl/lib/libcrypto.a: No such file or directory
make[1]: *** [objs/nginx] Error 1
------------------

I could find that the openssl-1.1.1/.openssl directory is not created but instead /opt/test/$nginxsrcpath/openssl-1.1.1/.openssl

That is, if the nginx src is in /root/nginx-1.15.3/ the directory .openssl will be /opt/test/root/nginx-1.15.3/openssl-1.1.1/.openssl/

The make DESTDIR=/opt/test install works fine in nginx 1.13.x with OpenSSL 1.0.2p. I am not sure whether the change is caused by nginx 1.15.3 or openssl-1.1.1 to be honest -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nginx-forum at forum.nginx.org Tue Sep 18 10:02:46 2018 From: nginx-forum at forum.nginx.org (domleb) Date: Tue, 18 Sep 2018 06:02:46 -0400 Subject: No live upstreams with a single upstream Message-ID: <4373ed44485568f2a4a9ca1932d1d19c.NginxMailingListEnglish@forum.nginx.org> While running a load test that injects 10k TPS across 3 Nginx instances, we are seeing spikes of errors where Nginx returns HTTP 502 and logs the message 'no live upstreams while connecting to upstream'. There are no other errors logged, e.g. connection errors. Also, we have a single upstream virtual IP (we use iptables to balance load across the backend) and according to the docs the upstream should never be marked as down in this case: 'If there is only a single server in a group, max_fails, fail_timeout and slow_start parameters are ignored, and such a server will never be considered unavailable' Testing locally with our config confirms this and I cannot reproduce the 'no live upstreams while connecting to upstream' message when simulating connection and read errors with a single upstream. To debug I tried enabling debug logs but under load that degraded performance too much. I also traced the worker process with strace and didn't find any socket or other errors during the 502 spike. I was able to reproduce this issue on Nginx 1.12.2 and 1.15.3. So given that we don't see any source error and we have a single upstream, I'm interested to know what other scenarios could result in a 502 with the log message 'no live upstreams while connecting to upstream'? 
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281255,281255#msg-281255 From mdounin at mdounin.ru Tue Sep 18 15:46:07 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Tue, 18 Sep 2018 18:46:07 +0300 Subject: No live upstreams with a single upstream In-Reply-To: <4373ed44485568f2a4a9ca1932d1d19c.NginxMailingListEnglish@forum.nginx.org> References: <4373ed44485568f2a4a9ca1932d1d19c.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180918154607.GP56558@mdounin.ru> Hello! On Tue, Sep 18, 2018 at 06:02:46AM -0400, domleb wrote: > While running a load test that injects 10k TPS across 3 Nginx instances, we > are seeing spikes of errors where Nginx returns HTTP 502 and logs the > message 'no live upstreams while connecting to upstream'. There are no > other errors logged e.g. connection errors. > > Also, we have a single upstream virtual IP (we use iptables to balance load > across the backend) and according to the docs the upstream should never be > marked as down in this case: > > 'If there is only a single server in a group, max_fails, fail_timeout and > slow_start parameters are ignored, and such a server will never be > considered unavailable' > > Testing locally with our config confirms this and I cannot reproduce the 'no > live upstreams while connecting to upstream' message when simulating > connection and read errors with a single upstream. > > To debug I tried enabling debug logs but under load that degraded > performance too much. I also traced the worker process with strace and > didn't find any socket or other other errors during the 502 spike. > > I was able to create this issue on Nginx 1.12.2 and 1.15.3. > > So given that we don't see any source error and we have a single upstream, > I'm interested to know what other scenarios could result in a 502 with the > log message 'no live upstreams while connecting to upstream'? Could you please show the upstream configuration you are using? 
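For concreteness, here is the single-server upstream shape in question, marked up with the settings that can still yield "no live upstreams" even for a lone server — the address and limit are invented for illustration:

```nginx
upstream backend {
    # With a single server, max_fails/fail_timeout are ignored, but:
    server 10.0.0.1:9000 max_conns=100;  # hitting max_conns still counts
    # server 10.0.0.1:9000 down;         # ...and "down" still disables it
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;
    }
}
```

A hostname that resolves to several IP addresses also defines several servers at once, in which case the single-server guarantee no longer applies.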
With a single server in the upstream block, the "no live upstreams" error may happen if:

- the server is marked "down" in the configuration, or
- the server reached the max_conns limit.

Also note that "a single server" does not apply to cases when there is a single hostname which resolves to multiple IP addresses (this defines multiple servers at once). -- Maxim Dounin http://mdounin.ru/ From pluknet at nginx.com Tue Sep 18 16:17:45 2018 From: pluknet at nginx.com (Sergey Kandaurov) Date: Tue, 18 Sep 2018 19:17:45 +0300 Subject: Nginx compile with OpenSSL 1.1.1 and DESTDIR= In-Reply-To: References: Message-ID: <4FE7831A-F2F6-421A-8F0E-AC84B6F525E4@nginx.com>
> On 18 Sep 2018, at 10:55, Anoop Alias wrote:
>
> Hi,
>
> I am trying to compile nginx 1.15.3 (mainline) with OpenSSL 1.1.1
>
> # ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-openssl=./openssl-1.1.1
>
> # make DESTDIR=/opt/test install
>
> But this errors out with
> ---------------
> cc: error: ./openssl-1.1.1/.openssl/lib/libssl.a: No such file or directory
> cc: error: ./openssl-1.1.1/.openssl/lib/libcrypto.a: No such file or directory
> make[1]: *** [objs/nginx] Error 1
> ------------------
>
> I could find that the openssl-1.1.1/.openssl directory is not created but instead
>
> /opt/test/$nginxsrcpath/openssl-1.1.1/.openssl
>
> That is if the nginx src is in /root/nginx-1.15.3/
>
> The directory .openssl will be /opt/test/root/nginx-1.15.3/openssl-1.1.1/.openssl/
>
> The make DESTDIR=/opt/test install works fine in nginx 1.13.x with OpenSSL 1.0.2p
> I am not sure the change is caused by nginx 1.15.3 or openssl-1.1.1 to be honest

What effect do you expect from DESTDIR? Starting from OpenSSL 1.1.0, it is used there as install prefix. -- Sergey Kandaurov From rjtogy1966 at gmail.com Tue Sep 18 18:24:20 2018 From: rjtogy1966 at gmail.com (Labs Ocozzi) Date: Tue, 18 Sep 2018 15:24:20 -0300 Subject: Osticket With nginx proxy_pass. 
Message-ID: <23b844e7-a889-76cc-19e8-0a9a24ff2a91@gmail.com> Dears, I need help. My Osticket (/_support ticket system_/) does not work with the nginx front-end. The system reports the error _Valid CSRF Token Required_ when logging in, but when I access the environment directly via the URL (httpd): http://192.168.1.51:8015 the system works fine. Below is my nginx config file.

# ####TICKET ### #
server {
listen 80;
server_name ticket.oduvaldocozzi.intranet;
location / {
proxy_pass http://192.168.1.51:8015;
root /opt/www/ticket.oduvaldocozzi.intranet/public_html/upload;
}
location ~ \ {
}
}

-- Att, BR-RJ. Togy Silva Ocozzy e-mail: rjtogy1966 at gmail.com LABS OCOZZI PE. --- This email has been scanned by Avast antivirus. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From fletch at fletchowns.net Wed Sep 19 01:35:56 2018 From: fletch at fletchowns.net (Greg Barker) Date: Tue, 18 Sep 2018 18:35:56 -0700 Subject: 404 error on yum nginx repository Message-ID: Seems like maybe something is broken on the nginx packages download. For example, if I curl this endpoint I get a set of results from 31-Jul-2018 $ curl https://nginx.org/packages/rhel/7/x86_64/repodata/ Index of /packages/rhel/7/x86_64/repodata/

../
431ec3b5d8c38dcb75ae6aad1e2044707c26b097cdd032e..>  31-Jul-2018 12:16   8164
4a825688a214584c3c5b655627ca3ef7340a67848b7ab8e..>  31-Jul-2018 12:16  35734
5dd555d317e17e2612f394576dc02326d7c06d18a15c102..>  31-Jul-2018 12:16  47533
76f331ccb392cffc4fa30e0f60985c38fa69c8d953bca55..>  31-Jul-2018 12:16  12976
8f5762215a64dd640a999e379e0fbffddf11c88e631c03b..>  31-Jul-2018 12:16  19018
c84d99af13732489a23f8b73fe9382b2a921e72644d66bd..>  31-Jul-2018 12:16  33396
repomd.xml                                          31-Jul-2018 12:16   2986
repomd.xml.asc                                      31-Jul-2018 12:16    473

If I curl again I get a different set of results from 18-Sep-2018: $ curl https://nginx.org/packages/rhel/7/x86_64/repodata/ Index of /packages/rhel/7/x86_64/repodata/

../
0d104237ac1bdb053e7b601b4acb01cf249d0eb38842dd4..>  18-Sep-2018 16:27  34060
20335881f90eb6dee86c4e5dde27efe4b280a5ce155f32d..>  18-Sep-2018 16:27  36166
a698a3a644ea96503bace90f338282ae9ff4404820579a8..>  18-Sep-2018 16:27  13207
ce1b99cc8d85853931a96a8b6c3f0270b4330f3034c22fe..>  18-Sep-2018 16:27  48828
db9cdcedd470676185618a343302143eb4a7e5e9da3b996..>  18-Sep-2018 16:27  19211
fa18e6d14529bfa9446c346447ac7ae7135b175d93b5ab4..>  18-Sep-2018 16:27   8282
repomd.xml                                          18-Sep-2018 16:27   2986
repomd.xml.asc                                      18-Sep-2018 16:27    473

The result is that you download the repomd.xml from whatever is returning the first set of results, but then you try to download 5dd555d317e17e2612f394576dc02326d7c06d18a15c102b24070efe69a09d06-filelists.sqlite.bz2, it gets handled by whatever is returning the second set of results, and you get a 404. -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Wed Sep 19 02:41:00 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 19 Sep 2018 08:11:00 +0530 Subject: Nginx compile with OpenSSL 1.1.1 and DESTDIR= In-Reply-To: <4FE7831A-F2F6-421A-8F0E-AC84B6F525E4@nginx.com> References: <4FE7831A-F2F6-421A-8F0E-AC84B6F525E4@nginx.com> Message-ID: Hi,

./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-openssl=./openssl-1.1.1
make DESTDIR=/opt/test install

did not create the .openssl directory inside the openssl source, but instead created the .openssl directory in the DESTDIR. I found out that if we use an explicit make command

./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-openssl=./openssl-1.1.1
make
make DESTDIR=/opt/test install

this works. But the former command without the explicit make used to work on openssl 1.0.xx releases. Starting from OpenSSL 1.1.0, it is used there as install prefix. 
==> This may be an after effect of this On Tue, Sep 18, 2018 at 9:47 PM Sergey Kandaurov wrote: > > > On 18 Sep 2018, at 10:55, Anoop Alias wrote: > > > > > > Hi, > > > > I am trying to compile nginx 1.15.3 (mainline) with OpenSSL 1.1.1 > > > > # ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --with-openssl=./openssl-1.1.1 > > > > # make DESTDIR=/opt/test install > > > > But this error out with > > --------------- > > cc: error: ./openssl-1.1.1/.openssl/lib/libssl.a: No such file or > directory > > cc: error: ./openssl-1.1.1/.openssl/lib/libcrypto.a: No such file or > directory > > make[1]: *** [objs/nginx] Error 1 > > ------------------ > > > > I could find that the openssl-1.1.1/.openssl directory is not created > but instead > > > > /opt/test/$nginxsrcpath/openssl-1.1.1/.openssl > > > > That is if the nginx src is in /root/nginx-1.15.3/ > > > > The directory .openssl will be > /opt/test/root/nginx-1.15.3/openssl-1.1.1/.openssl/ > > > > The make DESTDIR=/opt/test install works fine in nginx 1.13.x with > OpenSSL 1.0.2p > > I am not sure the change is caused by nginx 1.15.3 or openssl-1.1.1 to > be honest > > What effect do you expect from DESTDIR? > Starting from OpenSSL 1.1.0, it is used there as install prefix. > > -- > Sergey Kandaurov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 19 06:19:53 2018 From: nginx-forum at forum.nginx.org (winger7) Date: Wed, 19 Sep 2018 02:19:53 -0400 Subject: identifying last request on a tcp connection. Message-ID: <0b8e5a96fc55c2f5fd759382b7d82965.NginxMailingListEnglish@forum.nginx.org> I've been trying to identify the last HTTP request from a buffer of 10 requests sent on a TCP connection in nginx. 
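The mechanics behind this: "make install" copies files to $(DESTDIR)$(prefix)/..., i.e. the staging root is prepended verbatim to every install path, and since OpenSSL 1.1.0 honors DESTDIR as well, passing it on nginx's combined build-and-install step leaks into the bundled OpenSSL build. The path arithmetic itself can be sketched in plain shell, with throwaway paths standing in for the real install:

```shell
prefix=/etc/nginx              # configured install prefix
DESTDIR=/tmp/nginx-stage       # staging root passed at install time

# "make install" effectively does this for every installed file:
mkdir -p "$DESTDIR$prefix"
: > "$DESTDIR$prefix/nginx.conf"

ls "$DESTDIR$prefix"           # the file lands under the staging root
```

Hence the fix above: run a plain "make" first so the bundled OpenSSL is built in place, then let only the final install step see DESTDIR.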
I've tried to use the header_in field in the ngx_http_request_t by checking if the pos field for the header_in is equal to the last field. This condition holds true twice, once on the first request and then on the last request. My program intends to break out once the last request is identified and hence, has been breaking after encountering the first request because of this check. Can someone help me identify the problem with my approach and point to the right solution? Thanks! Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281287,281287#msg-281287 From postmaster at palvelin.fi Wed Sep 19 08:25:49 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Wed, 19 Sep 2018 11:25:49 +0300 Subject: Deny access to hidden files and directories (and their content) Message-ID: <4EF2363E-CE33-4F91-9B49-857A037D789C@palvelin.fi> I believe my current rexexp match isn?t proper because it?s missing an anchor from the pattern: location ~ /\. { deny all; } What would be more appropriate? Would this work? location ~ /\..*$ -- Palvelin.fi Hostmaster postmaster at palvelin.fi From postmaster at palvelin.fi Wed Sep 19 09:00:43 2018 From: postmaster at palvelin.fi (Palvelin Postmaster) Date: Wed, 19 Sep 2018 12:00:43 +0300 Subject: Errors suggesting nginx isn't started as root Message-ID: <186455F3-05A5-4F82-94FB-C852E9582606@palvelin.fi> Why am I getting these log warn/emerg? Running Nginx 1.14.0 on Ubuntu 18.04. 
root at k2:~# whoami root root at k2:~# service nginx restart root at k2:~# tail /var/log/nginx/error.log 2018/09/19 11:38:47 [warn] 22399#22399: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:21 2018/09/19 11:38:47 [emerg] 22399#22399: SSL_CTX_use_PrivateKey_file("/etc/ssl/private/nginx-selfsigned.key") failed (SSL: error:0200100D:system library:fopen:Permission denied:fopen('/etc/ssl/private/nginx-selfsigned.key','r') error:20074002:BIO routines:file_ctrl:system lib error:140B0002:SSL routines:SSL_CTX_use_PrivateKey_file:system lib) root at k2:~# ls -lh /etc/ssl/private/ |grep nginx -rw-r----- 1 root ssl-cert 1.7K Jul 8 17:12 nginx-selfsigned.key root at k2:~# cat /etc/nginx/nginx.conf |grep ^user user www-data; root at k2:~# ps -auxw |grep nginx root 22317 0.0 0.2 359680 9300 ? Ss 11:38 0:00 nginx: master process /usr/sbin/nginx -g daemon on; master_process on; www-data 22322 0.0 0.3 361980 15356 ? S 11:38 0:00 nginx: worker process www-data 22323 0.2 0.4 362244 18984 ? S 11:38 0:00 nginx: worker process www-data 22326 0.0 0.3 361980 14760 ? S 11:38 0:00 nginx: cache manager process www-data 22327 0.0 0.3 361980 14760 ? S 11:38 0:00 nginx: cache loader process -- Palvelin.fi Hostmaster postmaster at palvelin.fi From thresh at nginx.com Wed Sep 19 09:39:11 2018 From: thresh at nginx.com (Konstantin Pavlov) Date: Wed, 19 Sep 2018 12:39:11 +0300 Subject: 404 error on yum nginx repository In-Reply-To: References: Message-ID: <6ee636e7-8095-5093-e5b4-d53c59babf4e@nginx.com> Hello Greg, 19.09.2018 04:35, Greg Barker wrote: > Seems like maybe something is broken on the nginx packages download. > > For example, if I curl this endpoint I get a set of results from 31-Jul-2018 > > $ curl https://nginx.org/packages/rhel/7/x86_64/repodata/ > ... 
> The result is that you download the repomd.xml from whatever is > returning the first set of results, but then you try to download > 5dd555d317e17e2612f394576dc02326d7c06d18a15c102b24070efe69a09d06-filelists.sqlite.bz2 > but it gets handled by by whatever is returning the second set of > results, you get a 404. Thanks for reporting the issue - I didnt notice the issue with mirrors we use to serve the packages when uploading the new njs release yesterday. The issue is now fixed. Have a good one, -- Konstantin Pavlov Join us at NGINX Conf 2018, Oct 8-11, Atlanta, GA, USA https://www.nginx.com/nginxconf/2018/ From rjtogy1966 at gmail.com Wed Sep 19 10:44:46 2018 From: rjtogy1966 at gmail.com (Labs Ocozzi) Date: Wed, 19 Sep 2018 07:44:46 -0300 Subject: Understood Diretive Location and Regex (concept question). Message-ID: <5ef4ee25-af33-77fc-2937-a50523c187cd@gmail.com> Dears, in me Lab i have nginx work fine, but i dont understood the diretive location with regex "~ /\. " "~*? \." and "~ \.php$" bellow examples in me enviroment. location ~ /\. { deny all; access_log off; log_not_found off; } location ~ \.php$ { try_files $uri =404; include /etc/nginx/fastcgi_params; fastcgi_pass 127.0.0.1:9000; fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; } location ~ /\. { deny all; access_log off; log_not_found off; } -- Att, BR-RJ. Togy Silva Ocozzy e-mail: rjtogy1966 at gmail.com LABS OCOZZI PE. --- Este email foi escaneado pelo Avast antiv?rus. https://www.avast.com/antivirus -------------- next part -------------- An HTML attachment was scrubbed... URL: From anoopalias01 at gmail.com Wed Sep 19 10:51:37 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 19 Sep 2018 16:21:37 +0530 Subject: Understood Diretive Location and Regex (concept question). In-Reply-To: <5ef4ee25-af33-77fc-2937-a50523c187cd@gmail.com> References: <5ef4ee25-af33-77fc-2937-a50523c187cd@gmail.com> Message-ID: location ~ /\. regex location for /. 
The back slash before dot is just an escape char as dot has special meaning in regex --------------------------------------------------- location ~ \.php$ regex location for anything ending in .php Here again the backslash before dot serve as an escape On Wed, Sep 19, 2018 at 4:15 PM Labs Ocozzi wrote: > Dears, in me Lab i have nginx work fine, but i dont understood the > diretive location with regex "~ /\. " "~* \." and > "~ \.php$" bellow examples in me enviroment. > > > > location ~ /\. { > deny all; > access_log off; > log_not_found off; > } > > location ~ \.php$ { > try_files $uri =404; > include /etc/nginx/fastcgi_params; > fastcgi_pass 127.0.0.1:9000; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > } > > > > location ~ /\. { > deny all; > access_log off; > log_not_found off; > } > > -- > Att, > BR-RJ. > Togy Silva Ocozzy > e-mail: rjtogy1966 at gmail.com > LABS OCOZZI PE. > > > > ------------------------------ > [image: Avast logo] > > Este email foi escaneado pelo Avast antiv?rus. > www.avast.com > > <#m_577209082744095808_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pluknet at nginx.com Wed Sep 19 10:54:12 2018 From: pluknet at nginx.com (Sergey Kandaurov) Date: Wed, 19 Sep 2018 13:54:12 +0300 Subject: Nginx compile with OpenSSL 1.1.1 and DESTDIR= In-Reply-To: References: <4FE7831A-F2F6-421A-8F0E-AC84B6F525E4@nginx.com> Message-ID: <518565C7-4C2D-4F8A-B098-4991C70AC7B2@nginx.com> > On 19 Sep 2018, at 05:41, Anoop Alias wrote: > > Hi, > > ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-openssl=./openssl-1.1.1 > make DESTDIR=/opt/test install > > Did not create the .openssl directory inside the openssl source , but instead, this created the .openssl directory in the DESTDIR > As expected. > I found out that if we use an explicit make command > ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx --with-openssl=./openssl-1.1.1 > make > make DESTDIR=/opt/test install > > This works > And this is expected too. Just use a separate ``make'' command to not mix nginx's DESTDIR and openssl's DESTDIR means. > But the former command without the explicit make used to work on openssl 1.0.xx releases > > Starting from OpenSSL 1.1.0, it is used there as install prefix. ==> This may be an after effect of this As previously noted. You can also find this note in CHANGES: *) The INSTALL_PREFIX Makefile variable has been renamed to DESTDIR. That makes for less confusion on what this variable is for. Also, the configuration option --install_prefix is removed. 
[Richard Levitte] -- Sergey Kandaurov From anoopalias01 at gmail.com Wed Sep 19 10:57:45 2018 From: anoopalias01 at gmail.com (Anoop Alias) Date: Wed, 19 Sep 2018 16:27:45 +0530 Subject: Nginx compile with OpenSSL 1.1.1 and DESTDIR= In-Reply-To: <518565C7-4C2D-4F8A-B098-4991C70AC7B2@nginx.com> References: <4FE7831A-F2F6-421A-8F0E-AC84B6F525E4@nginx.com> <518565C7-4C2D-4F8A-B098-4991C70AC7B2@nginx.com> Message-ID: Thanks Sergey On Wed, Sep 19, 2018 at 4:24 PM Sergey Kandaurov wrote: > > > On 19 Sep 2018, at 05:41, Anoop Alias wrote: > > > > Hi, > > > > ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --with-openssl=./openssl-1.1.1 > > make DESTDIR=/opt/test install > > > > Did not create the .openssl directory inside the openssl source , but > instead, this created the .openssl directory in the DESTDIR > > > > As expected. > > > I found out that if we use an explicit make command > > ./configure --prefix=/etc/nginx --sbin-path=/usr/sbin/nginx > --with-openssl=./openssl-1.1.1 > > make > > make DESTDIR=/opt/test install > > > > This works > > > > And this is expected too. > Just use a separate ``make'' command to not mix > nginx's DESTDIR and openssl's DESTDIR means. > > > But the former command without the explicit make used to work on openssl > 1.0.xx releases > > > > Starting from OpenSSL 1.1.0, it is used there as install prefix. ==> > This may be an after effect of this > > As previously noted. You can also find this note in CHANGES: > *) The INSTALL_PREFIX Makefile variable has been renamed to > DESTDIR. That makes for less confusion on what this variable > is for. Also, the configuration option --install_prefix is > removed. > [Richard Levitte] > > > -- > Sergey Kandaurov > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -- *Anoop P Alias* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rjtogy1966 at gmail.com Wed Sep 19 11:08:30 2018 From: rjtogy1966 at gmail.com (Labs Ocozzi) Date: Wed, 19 Sep 2018 08:08:30 -0300 Subject: Understood Diretive Location and Regex (concept question). In-Reply-To: References: <5ef4ee25-af33-77fc-2937-a50523c187cd@gmail.com> Message-ID: <4cbab76f-84c3-6f1b-17b5-1ff4b1682c3a@gmail.com> tks Anoop Em 19/09/2018 07:51, Anoop Alias escreveu: > location ~ /\. > > regex location for /. > The back slash before dot is just an escape char as dot has special > meaning in regex > --------------------------------------------------- > > location ~ \.php$ > > regex location for anything ending in .php > Here again the backslash before dot serve as an escape > > On Wed, Sep 19, 2018 at 4:15 PM Labs Ocozzi > wrote: > > Dears, in me Lab i have nginx work fine, but i dont understood the > diretive location with regex "~ /\. " "~* \." and > "~ \.php$" bellow examples in me enviroment. > > > > location ~ /\. { > deny all; > access_log off; > log_not_found off; > } > > location ~ \.php$ { > try_files $uri =404; > include /etc/nginx/fastcgi_params; > fastcgi_pass127.0.0.1:9000 ; > fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name; > } > > > > location ~ /\. { > deny all; > access_log off; > log_not_found off; > } > > -- > Att, > BR-RJ. > Togy Silva Ocozzy > e-mail:rjtogy1966 at gmail.com > LABS OCOZZI PE. > > > > ------------------------------------------------------------------------ > Avast logo > > Este email foi escaneado pelo Avast antiv?rus. > www.avast.com > > > <#m_577209082744095808_DAB4FAD8-2DD7-40BB-A1B8-4E2AA1F9FDF2> > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > > > > -- > *Anoop P Alias* > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Att, BR-RJ. 
Togy Silva Ocozzy e-mail: rjtogy1966 at gmail.com LABS OCOZZI PE. -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Wed Sep 19 12:22:48 2018 From: nginx-forum at forum.nginx.org (domleb) Date: Wed, 19 Sep 2018 08:22:48 -0400 Subject: No live upstreams with a single upstream In-Reply-To: <20180918154607.GP56558@mdounin.ru> References: <20180918154607.GP56558@mdounin.ru> Message-ID: Maxim Dounin Wrote: ------------------------------------------------------- > Hello! > > On Tue, Sep 18, 2018 at 06:02:46AM -0400, domleb wrote: > > > While running a load test that injects 10k TPS across 3 Nginx > instances, we > > are seeing spikes of errors where Nginx returns HTTP 502 and logs > the > > message 'no live upstreams while connecting to upstream'. There are > no > > other errors logged e.g. connection errors. > > > > Also, we have a single upstream virtual IP (we use iptables to > balance load > > across the backend) and according to the docs the upstream should > never be > > marked as down in this case: > > > > 'If there is only a single server in a group, max_fails, > fail_timeout and > > slow_start parameters are ignored, and such a server will never be > > considered unavailable' > > > > Testing locally with our config confirms this and I cannot reproduce > the 'no > > live upstreams while connecting to upstream' message when simulating > > connection and read errors with a single upstream. > > > > To debug I tried enabling debug logs but under load that degraded > > performance too much. I also traced the worker process with strace > and > > didn't find any socket or other other errors during the 502 spike. > > > > I was able to create this issue on Nginx 1.12.2 and 1.15.3. > > > > So given that we don't see any source error and we have a single > upstream, > > I'm interested to know what other scenarios could result in a 502 > with the > > log message 'no live upstreams while connecting to upstream'? 
> > Could you please show the upstream configuration you are using?
>
> With a single server in the upstream block, "no live upstreams"
> error may happen if:
>
> - the server is marked "down" in the configuration, or
> - the server reached the max_conns limit.
>
> Also note that "a single server" does not apply to cases when
> there is a single hostname which resolves to multiple IP addresses
> (this defines multiple servers at once).
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx

I removed our max_conns limit and that resolved the issue - thanks for the help. It might be worth changing the log message in this case, as I believe the upstream is still live and there are no other log messages to indicate what the problem is.

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281255,281298#msg-281298 From awasthi.dvk at gmail.com Wed Sep 19 13:10:11 2018 From: awasthi.dvk at gmail.com (Devika Awasthi) Date: Wed, 19 Sep 2018 18:40:11 +0530 Subject: Nginx urgent query In-Reply-To: References: Message-ID: Hi Team, I have a question on Nginx open source. So, we have an nginx web server being used as a kubernetes pod in production, as a reverse proxy and web layer. We wanted to leverage the upstream hash module for session stickiness. We tried a POC locally by recompiling Nginx with the additional module - http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash. and it worked well. However, for production and other higher environments, we wanted to know if there is any nginx docker image/container with the above http upstream module which can be pulled? I tried searching in docker hub, couldn't find any. Any pointers would be highly helpful! Thanks, Devika On Wed, Sep 19, 2018 at 6:29 PM Devika Awasthi wrote: > Hi Team, > > I have a question on Nginx open source.
> > So, we have a nginx web server being used as kubernetes pod in production > as reverse proxy and web layer. > > We wanted to leverage the upstream hash module for session stickiness. We > tried a POC locally by recompiling the Nginx with additional module - > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash. and it > worked well. > > > > However for production and other higher environments wanted to know if we > have any nginx docker image/container with above http upstream module > please which can pulled? > > I tried searching in docker hub, couldn?t find any. > > > > Any pointers would be highly helpful! > > > > Thanks, > > Devika > -------------- next part -------------- An HTML attachment was scrubbed... URL: From sibin.arsenijevic at gmail.com Wed Sep 19 14:50:45 2018 From: sibin.arsenijevic at gmail.com (Sibin Arsenijevic) Date: Wed, 19 Sep 2018 16:50:45 +0200 Subject: Custom nginx QoS plugin running before SSL handshake Message-ID: <3DA688CE-91E1-494E-9E20-D14598334E4D@gmail.com> Hello everyone, We are using a custom Nginx plugin to tag (setsockopt) response (per domain) traffic from Nginx with QoS DSCP flag and that is working fine, however we are seeing that SSL handshake responses are not getting tagged. Once handshake is done the rest of the responses are tagged as expected. If I am reading development documentation correctly I would somehow need to tag traffic before stage ngx_http_init_connection() which initiates SSL handshake? Is this at all possible from Nginx plugin? If so, can you, please, point me in the right direction? Thank you in advance, Sibin -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: Message signed with OpenPGP URL: From mdounin at mdounin.ru Wed Sep 19 18:08:28 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Sep 2018 21:08:28 +0300 Subject: identifying last request on a tcp connection. In-Reply-To: <0b8e5a96fc55c2f5fd759382b7d82965.NginxMailingListEnglish@forum.nginx.org> References: <0b8e5a96fc55c2f5fd759382b7d82965.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180919180828.GW56558@mdounin.ru> Hello! On Wed, Sep 19, 2018 at 02:19:53AM -0400, winger7 wrote: > I've been trying to identify the last HTTP request from a buffer of 10 > requests sent on a TCP connection in nginx. I've tried to use the header_in > field in the ngx_http_request_t by checking if the pos field for the > header_in is equal to the last field. This condition holds true twice, once > on the first request and then on the last request. My program intends to > break out once the last request is identified and hence, has been breaking > after encountering the first request because of this check. > > Can someone help me identify the problem with my approach and point to the > right solution? Thanks! Looking into r->header_in is not a right thing to do. It is an internal field used by nginx request parsing. Its content may be different depending on various factors, including buffer sizes configured, TCP connection timing details, and so on. Not to mention that contents of r->header_in will be mostly meaningless when using HTTP/2. If you want to identify 10th request on a connection, consider using the r->connection->requests field instead. It is also available in nginx configuration as the $connection_requests variable, see http://nginx.org/r/$connection_requests. 
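As a configuration-level sketch of the same idea (the listen port and response header name here are hypothetical, purely for illustration):

```nginx
server {
    listen 8080;

    location / {
        # $connection_requests counts the requests made over the current
        # TCP connection, starting at 1 for the first request, so the
        # 10th request on a keep-alive connection reports the value 10.
        add_header X-Connection-Requests $connection_requests always;
        return 204;
    }
}
```

Issuing several requests over one keep-alive connection (for example with curl's connection reuse) should show the header incrementing, which makes it easy to verify the counter before relying on r->connection->requests in module code.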
-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Sep 19 18:16:00 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Sep 2018 21:16:00 +0300 Subject: Deny access to hidden files and directories (and their content) In-Reply-To: <4EF2363E-CE33-4F91-9B49-857A037D789C@palvelin.fi> References: <4EF2363E-CE33-4F91-9B49-857A037D789C@palvelin.fi> Message-ID: <20180919181600.GX56558@mdounin.ru> Hello! On Wed, Sep 19, 2018 at 11:25:49AM +0300, Palvelin Postmaster via nginx wrote:

> I believe my current regexp match isn't proper because it's missing an anchor from the pattern:
>
> location ~ /\. {
> deny all;
> }
>
> What would be more appropriate? Would this work?
>
> location ~ /\..*$

There is no real difference between these two patterns, as ".*$" in the latter one matches any characters till the line end, and hence won't make any difference compared to "/\." matched anywhere in the string.

-- Maxim Dounin http://mdounin.ru/ From mdounin at mdounin.ru Wed Sep 19 18:23:13 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 19 Sep 2018 21:23:13 +0300 Subject: Nginx urgent query In-Reply-To: References: Message-ID: <20180919182312.GY56558@mdounin.ru> Hello! On Wed, Sep 19, 2018 at 06:40:11PM +0530, Devika Awasthi wrote:

> I have a question on Nginx open source.
>
> So, we have a nginx web server being used as kubernetes pod in production
> as reverse proxy and web layer.
>
> We wanted to leverage the upstream hash module for session stickiness. We
> tried a POC locally by recompiling the Nginx with additional module -
> http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash. and it
> worked well.
>
> However for production and other higher environments wanted to know if we
> have any nginx docker image/container with above http upstream module
> which can be pulled?
>
> I tried searching in docker hub, couldn't find any.
The ngx_http_upstream_module which provides the "hash" directive is compiled in by default, unless explicitly switched off with the "--without-http_upstream_hash_module" configure option. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Wed Sep 19 19:59:58 2018 From: nginx-forum at forum.nginx.org (kpuscas) Date: Wed, 19 Sep 2018 15:59:58 -0400 Subject: 400 errors after upgrading to 1.14.0 Message-ID: Our service uses 2-way ssl with our clients connecting to our systems. With each new client we add their intermediate and root CA chain to the concatenated certificates file used by ssl_client_certificate. We recently upgraded to 1.14.0 (and the included modules) and now some, but not all of our customers are unable to connect getting 400 errors. We've tried changing the order of the certificates in the concatenated file but that didn't help. It is happening across different certificate chains but not all. And all of them worked fine prior to the upgrade. Has anyone else encountered this or is there something we should be doing different in how we set up these certificates? Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281315,281315#msg-281315 From mdounin at mdounin.ru Wed Sep 19 22:12:25 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Sep 2018 01:12:25 +0300 Subject: 400 errors after upgrading to 1.14.0 In-Reply-To: References: Message-ID: <20180919221225.GB56558@mdounin.ru> Hello! On Wed, Sep 19, 2018 at 03:59:58PM -0400, kpuscas wrote: > Our service uses 2-way ssl with our clients connecting to our systems. With > each new client we add their intermediate and root CA chain to the > concatenated certificates file used by ssl_client_certificate. We recently > upgraded to 1.14.0 (and the included modules) and now some, but not all of > our customers are unable to connect getting 400 errors. We've tried changing > the order of the certificates in the concatenated file but that didn't help. 
> It is happening across different certificate chains but not all. And all of > them worked fine prior to the upgrade. > > Has anyone else encountered this or is there something we should be doing > different in how we set up these certificates? There were no recent changes in nginx related to client certificate validation. On the other hand, there were changes in OpenSSL - most notably, OpenSSL 1.1.0+ now by default rejects MD5-signed certificates and/or certificates with less than 1024-bit RSA keys. This might be the reason for problems you have with some certificates, assuming you've upgraded not only nginx but also switched to a newer OpenSSL library. You may also want to take a look at nginx error logs. When nginx returns a 400 error, it logs the reason to the error log at the "info" level. -- Maxim Dounin http://mdounin.ru/ From awasthi.dvk at gmail.com Thu Sep 20 02:12:31 2018 From: awasthi.dvk at gmail.com (Devika Awasthi) Date: Thu, 20 Sep 2018 07:42:31 +0530 Subject: Nginx urgent query In-Reply-To: <20180919182312.GY56558@mdounin.ru> References: <20180919182312.GY56558@mdounin.ru> Message-ID: I didn't get full reply, Maxim,.. Could you please reply again.. On Wed 19 Sep, 2018, 11:53 PM Maxim Dounin, wrote: > Hello! > > On Wed, Sep 19, 2018 at 06:40:11PM +0530, Devika Awasthi wrote: > > > I have a question on Nginx open source. > > > > So, we have a nginx web server being used as kubernetes pod in > production > > as reverse proxy and web layer. > > > > We wanted to leverage the upstream hash module for session stickiness. We > > tried a POC locally by recompiling the Nginx with additional module - > > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash. and it > > worked well. > > > > However for production and other higher environments wanted to know if we > > have any nginx docker image/container with above http upstream module > > please which can pulled? > > > > I tried searching in docker hub, couldn?t find any. 
> > The ngx_http_upstream_module which provides the "hash" directive
> is compiled in by default, unless explicitly switched off with the
> "--without-http_upstream_hash_module" configure option.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From awasthi.dvk at gmail.com Thu Sep 20 05:25:26 2018 From: awasthi.dvk at gmail.com (Devika Awasthi) Date: Thu, 20 Sep 2018 10:55:26 +0530 Subject: Nginx urgent query In-Reply-To: <20180919182312.GY56558@mdounin.ru> References: <20180919182312.GY56558@mdounin.ru> Message-ID: Many Thanks Maxim, But we did try to achieve the functionality by using - nginx=1.10.3-r1, we couldn't get request based session stickiness. Basically we want the requests with identical query params to hit the same instance everytime. By recompiling again with this module and using the config below it worked:

upstream load_balancer {
    hash $scheme$proxy_host$request_uri$is_args$args consistent;
    server 127.0.0.1:9003;
    server 127.0.0.1:9004;
}

server {
    listen 8081;
    server_name localhost;
    access_log /usr/local/var/log/nginx/access.log;

    location / {
        proxy_pass http://load_balancer;
    }
}

Do you think we are missing anything here? Thanks, Devika On Wed, Sep 19, 2018 at 11:53 PM Maxim Dounin wrote:

> Hello!
>
> On Wed, Sep 19, 2018 at 06:40:11PM +0530, Devika Awasthi wrote:
>
> > I have a question on Nginx open source.
> >
> > So, we have a nginx web server being used as kubernetes pod in production
> > as reverse proxy and web layer.
> >
> > We wanted to leverage the upstream hash module for session stickiness. We
> > tried a POC locally by recompiling the Nginx with additional module -
> > http://nginx.org/en/docs/http/ngx_http_upstream_module.html#hash. and it
> > worked well.
> > However for production and other higher environments wanted to know if we
> > have any nginx docker image/container with above http upstream module
> > which can be pulled?
> >
> > I tried searching in docker hub, couldn't find any.
>
> The ngx_http_upstream_module which provides the "hash" directive
> is compiled in by default, unless explicitly switched off with the
> "--without-http_upstream_hash_module" configure option.
>
> --
> Maxim Dounin
> http://mdounin.ru/
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From nginx-forum at forum.nginx.org Thu Sep 20 07:50:14 2018 From: nginx-forum at forum.nginx.org (suman mohanty) Date: Thu, 20 Sep 2018 03:50:14 -0400 Subject: Trouble using nginx tcp proxy In-Reply-To: <9c19ad6c-b8f2-f912-cc41-aaa77f1e4f12@mixmax.com> References: <9c19ad6c-b8f2-f912-cc41-aaa77f1e4f12@mixmax.com> Message-ID: <860b739602d903988b5740199dd6b99f.NginxMailingListEnglish@forum.nginx.org> Hi Swaraj, I also have the same setup and am getting the same error message. Could you please help me resolve this issue? Posted at Nginx Forum: https://forum.nginx.org/read.php?2,270754,281326#msg-281326 From mdounin at mdounin.ru Thu Sep 20 13:14:28 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 20 Sep 2018 16:14:28 +0300 Subject: Nginx urgent query In-Reply-To: References: <20180919182312.GY56558@mdounin.ru> Message-ID: <20180920131427.GF56558@mdounin.ru> Hello! On Thu, Sep 20, 2018 at 10:55:26AM +0530, Devika Awasthi wrote:

> Many Thanks Maxim,
>
> But we did try to achieve the functionality by using - nginx=1.10.3-r1, we
> couldn't get request based session stickiness.
> Basically we want the requests with identical query params to hit the same
> instance everytime.
> > By recompiling again with this module and using below config it worked:
>
> upstream load_balancer { hash $scheme$proxy_host$request_uri$is_args$args
> consistent; server 127.0.0.1:9003; server 127.0.0.1:9004; } server { listen
> 8081; server_name localhost; access_log /usr/local/var/log/nginx/access.log;
> location / { proxy_pass http://load_balancer; } }
> Do you think we are missing anything here?

First of all, you may want to define how you expect it to work and how you test whether it worked or not. It is also not clear what you mean by "recompiling again with this module". As I already wrote, the module is compiled in by default.

-- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Thu Sep 20 13:44:08 2018 From: nginx-forum at forum.nginx.org (itpp2012) Date: Thu, 20 Sep 2018 09:44:08 -0400 Subject: Nginx urgent query In-Reply-To: References: Message-ID: <553cf282c630a6f1a3acc9ef48378b93.NginxMailingListEnglish@forum.nginx.org> Devika Awasthi Wrote: ------------------------------------------------------- > Many Thanks Maxim, > > But we did try to achieve the functionality by using - > nginx=1.10.3-r1, we > couldn't get request based session stickiness. > Basically we want the requests with identical query params to hit the > same > instance everytime. Maybe this can help; https://bitbucket.org/nginx-goodies/nginx-sticky-module-ng Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281299,281329#msg-281329 From nginx-forum at forum.nginx.org Thu Sep 20 19:51:41 2018 From: nginx-forum at forum.nginx.org (kpuscas) Date: Thu, 20 Sep 2018 15:51:41 -0400 Subject: 400 errors after upgrading to 1.14.0 In-Reply-To: <20180919221225.GB56558@mdounin.ru> References: <20180919221225.GB56558@mdounin.ru> Message-ID: Thanks Maxim, Looks like the issue was a bad root cert in the chain. The CN was identical to what the intermediate called out but it wasn't the one that had issued the intermediate.
Also didn't know that setting error to info would give us the ssl error information. We had it set to debug but couldn't figure anything out from that. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281315,281332#msg-281332 From stefan.mueller.83 at gmail.com Fri Sep 21 06:34:46 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Fri, 21 Sep 2018 08:34:46 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN Message-ID: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> An HTML attachment was scrubbed... URL: From vbart at nginx.com Fri Sep 21 14:05:41 2018 From: vbart at nginx.com (Valentin V. Bartenev) Date: Fri, 21 Sep 2018 17:05:41 +0300 Subject: Unit 1.4 release Message-ID: <1957084.4ZI9o1DlG4@vbart-workstation> Hello, I'm glad to announce a new release of NGINX Unit. The key feature of the new version is dynamically configurable TLS support with certificate storage API that provides detailed information about your certificate chains, including common and alternative names as well as expiration dates. See the documentation for details: - https://unit.nginx.org/configuration/#ssl-tls-and-certificates This is just our first step in TLS support. More configuration options and various TLS-related features will be added in the future. Full-featured HTTP/2 support is also in our sights. Changes with Unit 1.4 20 Sep 2018 *) Change: the control API maps the configuration object only at "/config/". *) Feature: TLS support for client connections. *) Feature: TLS certificates storage control API. *) Feature: Unit library (libunit) to streamline language module integration. *) Feature: "408 Request Timeout" responses while closing HTTP keep-alive connections. *) Feature: improvements in OpenBSD support. Thanks to David Carlier. *) Bugfix: a segmentation fault might have occurred after reconfiguration. *) Bugfix: building on systems with non-default locale might be broken. 
*) Bugfix: "header_read_timeout" might not work properly.

*) Bugfix: header field values with non-ASCII bytes might be handled incorrectly in the Python 3 module.

In a few weeks, we are going to add preliminary Node.js support. It's almost ready; our QA engineers are already testing it. Now we are also working on a Java module, WebSockets support, flexible request routing, and serving of static media assets.

Please also welcome Artem Konev, who joined our team as a technical writer. He has already started improving documentation on the website and updated it with the configuration options currently available:

- https://hg.nginx.org/unit-docs/

Of course, the website still leaves much to be desired, so Artem will strive to provide industry-grade documentation for Unit. You are welcome to join this effort with your ideas, suggestions, and edits: just send a pull request or open an issue in our documentation repository on GitHub:

- https://github.com/nginx/unit-docs/

Stay tuned!

wbr, Valentin V. Bartenev From nginx-forum at forum.nginx.org Fri Sep 21 14:38:58 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Fri, 21 Sep 2018 10:38:58 -0400 Subject: GeoIP2 Maxmind Module Support for Nginx Message-ID: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> Hi,

As of now we are using "nginx-module-geoip-1.10.0-1.el7.ngx.x86_64.rpm" available at the repository https://nginx.org/packages/rhel/7/x86_64/RPMS/

Can't find an rpm for the geoip2 module.

Please suggest where to get the rpm package of the geoip2 module, as we are using the nginx-1.10.2 rpm.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281341,281341#msg-281341 From gfrankliu at gmail.com Fri Sep 21 15:00:53 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 21 Sep 2018 15:00:53 +0000 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> References: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: nginx doesn't officially support geoip2. You have to use third party modules like https://github.com/leev/ngx_http_geoip2_module On Fri, Sep 21, 2018 at 2:39 PM anish10dec wrote: > Hi , > > As of now we are using "nginx-module-geoip-1.10.0-1.el7.ngx.x86_64.rpm" > available at repository > https://nginx.org/packages/rhel/7/x86_64/RPMS/ > > Cant find rpm for geoip2 module . > > Please suggest from were to get the rpm package of geoip2 module as we are > using nginx-1-10.2 rpm. > > Posted at Nginx Forum: > https://forum.nginx.org/read.php?2,281341,281341#msg-281341 > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From rainer at ultra-secure.de Fri Sep 21 15:29:39 2018 From: rainer at ultra-secure.de (rainer at ultra-secure.de) Date: Fri, 21 Sep 2018 17:29:39 +0200 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: References: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <9d292a2a4623454a8c9fd70be9c4b134@ultra-secure.de> Am 2018-09-21 17:00, schrieb Frank Liu: > nginx doesn't officially support geoip2. You have to use third party > modules like https://github.com/leev/ngx_http_geoip2_module NGINX Plus does, though: https://www.nginx.com/products/nginx/modules/geoip2/ "Support details: Supported by NGINX, Inc. 
for active NGINX Plus subscribers" From gfrankliu at gmail.com Fri Sep 21 16:23:26 2018 From: gfrankliu at gmail.com (Frank Liu) Date: Fri, 21 Sep 2018 16:23:26 +0000 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: <9d292a2a4623454a8c9fd70be9c4b134@ultra-secure.de> References: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> <9d292a2a4623454a8c9fd70be9c4b134@ultra-secure.de> Message-ID: If you click the link in step 4 on the page you mentioned, it goes to the same site in my earlier email. On Fri, Sep 21, 2018 at 3:30 PM wrote: > Am 2018-09-21 17:00, schrieb Frank Liu: > > nginx doesn't officially support geoip2. You have to use third party > > modules like https://github.com/leev/ngx_http_geoip2_module > > > NGINX Plus does, though: > > https://www.nginx.com/products/nginx/modules/geoip2/ > > "Support details: Supported by NGINX, Inc. for active NGINX Plus > subscribers" > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From nginx-forum at forum.nginx.org Sat Sep 22 00:17:00 2018 From: nginx-forum at forum.nginx.org (hostcanada2020) Date: Fri, 21 Sep 2018 20:17:00 -0400 Subject: GeoIP2 Maxmind Module Support for Nginx In-Reply-To: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> References: <6f4b2e7bc618189784ac5561781375c0.NginxMailingListEnglish@forum.nginx.org> Message-ID: <3f55ca9764b07d02fa15ec2ecc629cf1.NginxMailingListEnglish@forum.nginx.org> If the GeoIP2 is not working, you can try to install the IP2Locatoin Nginx using the tutorial below. 
https://www.ip2location.com/tutorials/how-to-use-ip2location-geolocation-with-nginx Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281341,281352#msg-281352 From nginx-forum at forum.nginx.org Sun Sep 23 18:37:17 2018 From: nginx-forum at forum.nginx.org (ec2geek007) Date: Sun, 23 Sep 2018 14:37:17 -0400 Subject: 1.15.3 nginx core dumps -- with proxy_pass http://127.0.0.1:8080 Message-ID: <6ddbd683294c81833f86480897bfa198.NginxMailingListEnglish@forum.nginx.org> Any help much appreciated. This is my last closure step to move from apache to nginx (using tomcat with proxy_pass.) Thanks in advance! curl -v http://localhost:8080 or curl -v http://127.0.0.1:8080 work. If I remove proxy_pass statement, it works but goes to normal index.html /usr/local/nginx/sbin/nginx -V nginx version: nginx/1.15.3 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) built with OpenSSL 1.1.0i 14 Aug 2018 TLS SNI support enabled configure arguments: --add-module=../modsecurity-2.9.2/nginx/modsecurity --with-cc-opt=-fno-optimize-sibling-calls Here is the core dump [Thread debugging using libthread_db enabled] Using host libthread_db library "/lib64/libthread_db.so.1". Core was generated by `nginx: worker process '. Program terminated with signal 11, Segmentation fault. 
#0 ngx_http_upstream_copy_allow_ranges (r=0xad66d0, h=0x7ffc3003a970, offset=) at src/http/ngx_http_upstream.c:5176 5176 if (r->upstream->conf->force_ranges) { Missing separate debuginfos, use: debuginfo-install GeoIP-1.5.0-11.el7.x86_64 apr-1.4.8-3.el7_4.1.x86_64 apr-util-1.5.2-6.el7.x86_64 expat-2.1.0-10.el7_3.x86_64 glibc-2.17-222.el7.x86_64 libdb-5.3.21-24.el7.x86_64 libuuid-2.23.2-52.el7_5.1.x86_64 libxml2-2.9.1-6.el7_2.3.x86_64 nss-softokn-freebl-3.36.0-5.el7_5.x86_64 pcre-8.32-17.el7.x86_64 xz-libs-5.2.2-1.el7.x86_64 zlib-1.2.7-17.el7.x86_64 (gdb) backtrace #0 ngx_http_upstream_copy_allow_ranges (r=0xad66d0, h=0x7ffc3003a970, offset=) at src/http/ngx_http_upstream.c:5176 #1 0x00000000004c5f66 in ngx_http_modsecurity_save_headers_out_visitor (data=0xad66d0, key=, value=) at ../modsecurity-2.9.2/nginx/modsecurity/ngx_http_modsecurity.c:795 #2 0x00007fd2b216c1ad in apr_table_vdo () from /lib64/libapr-1.so.0 #3 0x00007fd2b216c26f in apr_table_do () from /lib64/libapr-1.so.0 #4 0x00000000004c7898 in ngx_http_modsecurity_save_headers_out (r=0xad66d0) at ../modsecurity-2.9.2/nginx/modsecurity/ngx_http_modsecurity.c:737 #5 ngx_http_modsecurity_body_filter (r=, in=) at ../modsecurity-2.9.2/nginx/modsecurity/ngx_http_modsecurity.c:1220 #6 0x000000000045843a in ngx_output_chain (ctx=ctx at entry=0xad8520, in=in at entry=0x7ffc3003ad00) at src/core/ngx_output_chain.c:74 #7 0x00000000004b15ad in ngx_http_copy_filter (r=0xad66d0, in=0x7ffc3003ad00) at src/http/ngx_http_copy_filter_module.c:152 #8 0x00000000004a8863 in ngx_http_range_body_filter (r=0xad66d0, in=) at src/http/modules/ngx_http_range_filter_module.c:635 #9 0x0000000000489360 in ngx_http_output_filter (r=r at entry=0xad66d0, in=in at entry=0x7ffc3003ad00) at src/http/ngx_http_core_module.c:1770 #10 0x000000000048ca94 in ngx_http_send_special (r=r at entry=0xad66d0, flags=flags at entry=1) at src/http/ngx_http_request.c:3386 #11 0x000000000049bf13 in ngx_http_upstream_finalize_request (r=r at entry=0xad66d0, 
u=u at entry=0xad7a70, rc=, rc at entry=0) at src/http/ngx_http_upstream.c:4436 #12 0x000000000049cc0d in ngx_http_upstream_process_request (r=r at entry=0xad66d0, u=u at entry=0xad7a70) at src/http/ngx_http_upstream.c:4007 #13 0x000000000049cdd1 in ngx_http_upstream_process_upstream (r=r at entry=0xad66d0, u=u at entry=0xad7a70) at src/http/ngx_http_upstream.c:3919 #14 0x000000000049e8cd in ngx_http_upstream_send_response (u=0xad7a70, r=0xad66d0) at src/http/ngx_http_upstream.c:3232 #15 ngx_http_upstream_process_header (r=0xad66d0, u=0xad7a70) at src/http/ngx_http_upstream.c:2429 #16 0x000000000049bf92 in ngx_http_upstream_handler (ev=) at src/http/ngx_http_upstream.c:1281 #17 0x000000000047a914 in ngx_epoll_process_events (cycle=, timer=, flags=) at src/event/modules/ngx_epoll_module.c:902 #18 0x0000000000471e39 in ngx_process_events_and_timers (cycle=cycle at entry=0xad26c0) at src/event/ngx_event.c:242 #19 0x0000000000478cf4 in ngx_worker_process_cycle (cycle=0xad26c0, data=) at src/os/unix/ngx_process_cycle.c:750 #20 0x0000000000477432 in ngx_spawn_process (cycle=cycle at entry=0xad26c0, proc=proc at entry=0x478c83 , data=data at entry=0x0, name=name at entry=0x6a9fe5 "worker process", respawn=respawn at entry=-3) at src/os/unix/ngx_process.c:199 #21 0x0000000000477fbb in ngx_start_worker_processes (cycle=cycle at entry=0xad26c0, n=1, type=type at entry=-3) at src/os/unix/ngx_process_cycle.c:359 #22 0x00000000004793f1 in ngx_master_process_cycle (cycle=cycle at entry=0xad26c0) at src/os/unix/ngx_process_cycle.c:131 #23 0x00000000004548e5 in main (argc=, argv=) at src/core/nginx.c:382 (gdb) Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281362,281362#msg-281362 From maxim at nginx.com Mon Sep 24 08:49:09 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 24 Sep 2018 11:49:09 +0300 Subject: 1.15.3 nginx core dumps -- with proxy_pass http://127.0.0.1:8080 In-Reply-To: <6ddbd683294c81833f86480897bfa198.NginxMailingListEnglish@forum.nginx.org> 
References: <6ddbd683294c81833f86480897bfa198.NginxMailingListEnglish@forum.nginx.org> Message-ID: Hello, On 23/09/2018 21:37, ec2geek007 wrote: > Any help much appreciated. This is my last closure step to move from apache > to nginx (using tomcat with proxy_pass.) Thanks in advance! > > curl -v http://localhost:8080 or curl -v http://127.0.0.1:8080 work. If I > remove proxy_pass statement, it works but goes to normal index.html > > /usr/local/nginx/sbin/nginx -V > nginx version: nginx/1.15.3 > built by gcc 4.8.5 20150623 (Red Hat 4.8.5-28) (GCC) > built with OpenSSL 1.1.0i 14 Aug 2018 > TLS SNI support enabled > configure arguments: --add-module=../modsecurity-2.9.2/nginx/modsecurity > --with-cc-opt=-fno-optimize-sibling-calls > > [...] modsecurity-2.9.2 is a culprit. Try to remove it from your build or use modsec-3 instead https://github.com/SpiderLabs/ModSecurity/tree/v3/master -- Maxim Konovalov From nginx-forum at forum.nginx.org Mon Sep 24 09:32:36 2018 From: nginx-forum at forum.nginx.org (ec2geek007) Date: Mon, 24 Sep 2018 05:32:36 -0400 Subject: 1.15.3 nginx core dumps -- with proxy_pass http://127.0.0.1:8080 In-Reply-To: References: Message-ID: Thanks Maxim. However I also see https://github.com/SpiderLabs/ModSecurity/issues/1697 Any one confirmed that a particular version in 3 modesecurity works? Seems 3.0.4 modsecurity (which is not yet there) is most likely address all known issues so far. Hope I am wrong and a good version is out there. Good day -EC Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281362,281367#msg-281367 From maxim at nginx.com Mon Sep 24 09:40:34 2018 From: maxim at nginx.com (Maxim Konovalov) Date: Mon, 24 Sep 2018 12:40:34 +0300 Subject: 1.15.3 nginx core dumps -- with proxy_pass http://127.0.0.1:8080 In-Reply-To: References: Message-ID: <3076becc-d8d8-2946-7b33-4f6937bd30a2@nginx.com> On 24/09/2018 12:32, ec2geek007 wrote: > Thanks Maxim. 
> However I also see > https://github.com/SpiderLabs/ModSecurity/issues/1697 > > Any one confirmed that a particular version in 3 modesecurity works? > Seems 3.0.4 modsecurity (which is not yet there) is most likely address all > known issues so far. There is a handful of outstanding bugs in modsec-3 and it is hard to imagine that they all will be fixed in 3.0.4. However, this is the only viable solution if you have to use modsecurity. Also, it is not clear whether your config is affected by the bug you mentioned. Hopefully not, check for this comment from Andrey https://github.com/SpiderLabs/ModSecurity/issues/1697#issuecomment-382741141 -- Maxim Konovalov From nginx-forum at forum.nginx.org Mon Sep 24 13:53:26 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Mon, 24 Sep 2018 09:53:26 -0400 Subject: Enabling "Transfer-Encoding : chunked" Message-ID: In order to support CMAF and Low latency for HLS streaming through Nginx, it is required change in content header. Instead of "Content-Length" in Header , expected value by player is "Transfer-Encoding : chunked" so that for a 6 sec chunk of media segment player will start streaming fetching data in 200 msec part wise and thus streaming will have low latency . This is supported by HTTP 1.1 Tried below parameter to enable same in Nginx Configuration chunked_transfer_encoding on; But its not adding the same in header. Please suggest better way to do it. https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88 Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281371,281371#msg-281371 From mdounin at mdounin.ru Mon Sep 24 15:04:39 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Sep 2018 18:04:39 +0300 Subject: Enabling "Transfer-Encoding : chunked" In-Reply-To: References: Message-ID: <20180924150439.GW56558@mdounin.ru> Hello! 
On Mon, Sep 24, 2018 at 09:53:26AM -0400, anish10dec wrote: > In order to support CMAF and Low latency for HLS streaming through Nginx, it > is required change in content header. > > Instead of "Content-Length" in Header , expected value by player is > "Transfer-Encoding : chunked" so that for a 6 sec chunk of media segment > player will start streaming fetching data in 200 msec part wise and thus > streaming will have low latency . This is supported by HTTP 1.1 > > Tried below parameter to enable same in Nginx Configuration > chunked_transfer_encoding on; > > But its not adding the same in header. > > Please suggest better way to do it. > https://gist.github.com/CMCDragonkai/6bfade6431e9ffb7fe88 The text you are referring to is misleading. There is no difference between "Content-Length" and "Transfer-Encoding: chunked" from the streaming point of view, except that with "Content-Length" the client knows expected full response size in advance. Nothing stops the client from rendering responses with "Content-Length" once data arrives. If your client for some reason requires "Transfer-Encoding: chunked", it looks like a bug and/or misfeature of the particular client. The only case when it makes sense to use "Transfer-Encoding: chunked" is when the full response length is not known in advance, and hence "Content-Length" cannot be used. As for low-latency HLS streaming, the key part is that "Transfer-Encoding: chunked" is used by the encoder to return already available parts of the currently-being-produced HLS segment. As the segment is not yet complete, its full length is not known and hence "Content-Length" cannot be used in the response. For this to work, you'll need appropriate support in your HLS encoder - that is, it needs to return the last segment via HTTP while the segment is being produced. 
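[The setup Maxim describes here - nginx passing through a segment that the encoder is still producing - can be sketched as a config fragment. The location name, hostname and port below are illustrative assumptions, not taken from this thread:

```nginx
# Sketch: serve in-progress HLS segments by proxying the encoder's own
# HTTP endpoint instead of reading finished files from disk.  While a
# segment is still being produced its length is unknown, so nginx
# automatically answers HTTP/1.1 clients with "Transfer-Encoding: chunked".
location /live/ {
    proxy_pass http://encoder.internal:8080;   # assumed encoder HTTP endpoint
    proxy_http_version 1.1;
    proxy_buffering off;   # forward bytes to the client as they arrive
}
```

Note that `chunked_transfer_encoding` is already on by default; what decides between "Content-Length" and chunked framing is simply whether the response length is known when nginx starts sending.]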
If nginx is used to proxy such requests, everything is expected to work out of the box - nginx will use "Transfer-Encoding: chunked" as the length of the response is not known. -- Maxim Dounin http://mdounin.ru/ From nginx-forum at forum.nginx.org Mon Sep 24 15:40:27 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Mon, 24 Sep 2018 11:40:27 -0400 Subject: Enabling "Transfer-Encoding : chunked" In-Reply-To: <20180924150439.GW56558@mdounin.ru> References: <20180924150439.GW56558@mdounin.ru> Message-ID: <87ae8f6ef21391fe60f26b6530c2fd3b.NginxMailingListEnglish@forum.nginx.org> Thanks Maxim For Streaming with Low Latency , Harmonic Encoder is pushing media files with "Transfer-Encoding: chunked" on the Nginx Origin Server. We are able to see the same in tcpdump between Encoder and Nginx Origin. However when we try to stream content through Origin Server , "Transfer-Encoding: chunked" is missing in the header part because of which player is not able to start stream with enabling low latency Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281371,281374#msg-281374 From mdounin at mdounin.ru Mon Sep 24 16:11:06 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Mon, 24 Sep 2018 19:11:06 +0300 Subject: Enabling "Transfer-Encoding : chunked" In-Reply-To: <87ae8f6ef21391fe60f26b6530c2fd3b.NginxMailingListEnglish@forum.nginx.org> References: <20180924150439.GW56558@mdounin.ru> <87ae8f6ef21391fe60f26b6530c2fd3b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180924161106.GX56558@mdounin.ru> Hello! On Mon, Sep 24, 2018 at 11:40:27AM -0400, anish10dec wrote: > Thanks Maxim > > For Streaming with Low Latency , Harmonic Encoder is pushing media files > with "Transfer-Encoding: chunked" on the Nginx Origin Server. > > We are able to see the same in tcpdump between Encoder and Nginx Origin. Ok, so everything works as intended when using proxying, right? 
> However when we try to stream content through Origin Server ,
> "Transfer-Encoding: chunked" is missing in the header part because of which
> player is not able to start stream with enabling low latency

From your description it is not clear what you are trying to do here.

If you are trying to save HLS encoding results to disk and then serve
them using nginx as static files, then it is not going to work with HLS
low-latency streaming - because nginx does not know if a particular
segment file is complete, or it is being written right now and anything
added to the file needs to be sent to the client till some unspecified
moment in the future.

If you want low latency live HLS streaming to work, you'll have to use
proxying at least for the last segment (the one which is being written
to).

If you observe problems with already completed segments (that is,
segments which are fully complete, and their length already known) once
they are served with Content-Length, this is probably something to be
addressed in the client.  As previously explained, there is no
difference between "Content-Length" and "Transfer-Encoding: chunked" if
full length of a response is known in advance.

--
Maxim Dounin
http://mdounin.ru/

From brian at brianwhalen.net  Mon Sep 24 16:36:10 2018
From: brian at brianwhalen.net (Brian W.)
Date: Mon, 24 Sep 2018 09:36:10 -0700
Subject: Nginx with windows auth
Message-ID:

Is this possible in the free version or only paid products?

Brian
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From brian at brianwhalen.net  Mon Sep 24 19:50:09 2018
From: brian at brianwhalen.net (Brian W.)
Date: Mon, 24 Sep 2018 12:50:09 -0700
Subject: Nginx with windows auth
In-Reply-To:
References:
Message-ID:

I saw a post this morning claiming that only paid versions supported it. I
did get it to work; it wasn't initially obvious that the ldap auth conf
file was to replace the nginx.conf file and not be an addition.
On Mon, Sep 24, 2018, 9:36 AM Brian W. wrote:

> Is this possible in the free version or only paid products?
>
> Brian
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Jason.Whittington at equifax.com  Mon Sep 24 20:55:46 2018
From: Jason.Whittington at equifax.com (Jason Whittington)
Date: Mon, 24 Sep 2018 20:55:46 +0000
Subject: [IE] Re: Nginx with windows auth
In-Reply-To:
References:
Message-ID: <995C5C9AD54A3C419AF1C20A8B6AB9A4341619E7@STLEISEXCMBX3.eis.equifax.com>

When you saw "Windows Auth" did you mean NTLM?

From: nginx [mailto:nginx-bounces at nginx.org] On Behalf Of Brian W.
Sent: Monday, September 24, 2018 2:50 PM
To: nginx at nginx.org
Subject: [IE] Re: Nginx with windows auth

I saw a post this morning claiming that only paid versions supported it. I
did get it to work; it wasn't initially obvious that the ldap auth conf
file was to replace the nginx.conf file and not be an addition.

On Mon, Sep 24, 2018, 9:36 AM Brian W. wrote:
Is this possible in the free version or only paid products?

Brian

This message contains proprietary information from Equifax which may be
confidential. If you are not an intended recipient, please refrain from any
disclosure, copying, distribution or use of this information and note that
such actions are prohibited. If you have received this transmission in
error, please notify by e-mail postmaster at equifax.com. Equifax® is a
registered trademark of Equifax Inc. All rights reserved.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From nginx-forum at forum.nginx.org  Tue Sep 25 00:14:42 2018
From: nginx-forum at forum.nginx.org (Frank_Mascarell)
Date: Mon, 24 Sep 2018 20:14:42 -0400
Subject: Configuration problem: request default 15.15.15.15/ not working
Message-ID:

I'm testing Nginx with a django application. The requests
https://15.15.15.15/admin/ and https://15.15.15.15/inicio/ work correctly,
but https://15.15.15.15/ throws the error "Not found: The requested URL /
was not found on this server.", and I can not find the error.

This is the configuration:

server {
    # Configuracion SSL
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name 15.15.15.15;
    include snippets/self-signed.conf;
    include snippets/ssl-params.conf;
    #root /home/gela/LibrosWeb;

    location = /favicon.ico { access_log off; log_not_found off; }
    location /robots.txt { alias /var/www/LibrosWeb/robots.txt; }

    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }

    location /static {
        alias /home/gela/LibrosWeb/static;
    }
}

server {
    # Configuracion http
    listen 80;
    listen [::]:80;
    server_name 15.15.15.15;
    return 301 https://$server_name$request_uri;
}

and urls.py django:

urlpatterns = [
    path('', RedirectView.as_view(url='/inicio/', permanent=True)),
    path('inicio/', include('inicio.urls')),
    path('admin/', admin.site.urls),
]

Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281393,281393#msg-281393

From r at roze.lv  Tue Sep 25 08:14:15 2018
From: r at roze.lv (Reinis Rozitis)
Date: Tue, 25 Sep 2018 11:14:15 +0300
Subject: Configuration problem: request default 15.15.15.15/ not working
In-Reply-To:
References:
Message-ID: <000a01d454a7$bffa98d0$3fefca70$@roze.lv>

> but https://15.15.15.15/ throws the error "Not found: The requested URL /
> was not found on this server.", and I can not find the error.
This is the
> configuration:
> and urls.py django:
>
> urlpatterns = [
>     path('', RedirectView.as_view(url='/inicio/', permanent=True)),
>     path('inicio/', include('inicio.urls')),
>     path('admin/', admin.site.urls),
> ]

The error is coming from your py script and it actually tells what is the
problem - the '/' url is not handled, so you probably need to add:

path('/', RedirectView.as_view(url='/inicio/', permanent=True)),

or a location = / {} redirect in nginx.

rr

From postmaster at palvelin.fi  Tue Sep 25 13:35:56 2018
From: postmaster at palvelin.fi (Palvelin Postmaster)
Date: Tue, 25 Sep 2018 16:35:56 +0300
Subject: Cache only one specific query string pattern
Message-ID:

I use Wordpress' REST API post feed to embed articles on an external site.
My articles are updated fairly seldom, so there's probably no need to
dynamically compile every request response. I'm thinking of using fastcgi
cache to cache the feed.

I'm currently skipping caching for all requests with a $query_string.
However, the REST API URL's also contain a query string and thus don't
currently get cached.

if ($query_string != "") {
    set $skip_cache 1;
}

Is it possible to cache the REST API URL's but skip cache for all other
URL's containing a $query_string?

--
Palvelin.fi Hostmaster
postmaster at palvelin.fi

From postmaster at palvelin.fi  Tue Sep 25 15:21:32 2018
From: postmaster at palvelin.fi (Palvelin Postmaster)
Date: Tue, 25 Sep 2018 18:21:32 +0300
Subject: Cache only one specific query string pattern
In-Reply-To:
References:
Message-ID:

> On 25 Sep 2018, at 16:35, Palvelin Postmaster via nginx wrote:
>
> I use Wordpress' REST API post feed to embed articles on an external site.
> My articles are updated fairly seldom, so there's probably no need to
> dynamically compile every request response. I'm thinking of using fastcgi
> cache to cache the feed.
>
> I'm currently skipping caching for all requests with a $query_string.
However, the REST API URL's also contain a query string and thus don't
currently get cached.
>
> if ($query_string != "") {
>     set $skip_cache 1;
> }
>
> Is it possible to cache the REST API URL's but skip cache for all other
> URL's containing a $query_string?

In the absence of better suggestions, this seems to work. :)

# Wordpress-specific: URLs with a query string shouldn't be cached except when REST API
if ($request_uri ~ "^/wp-json/wp/v2/posts/.*") {
    set $cache_restapi "CACHE";
}
if ($query_string != "") {
    set $cache_restapi "${cache_restapi}NOT";
}
if ($cache_restapi = "NOT") {
    set $skip_cache 1;
}

--
Palvelin.fi Hostmaster
postmaster at palvelin.fi

From mdounin at mdounin.ru  Tue Sep 25 15:25:16 2018
From: mdounin at mdounin.ru (Maxim Dounin)
Date: Tue, 25 Sep 2018 18:25:16 +0300
Subject: nginx-1.15.4
Message-ID: <20180925152515.GF56558@mdounin.ru>

Changes with nginx 1.15.4                                        25 Sep 2018

    *) Feature: now the "ssl_early_data" directive can be used with
       OpenSSL.

    *) Bugfix: in the ngx_http_uwsgi_module.
       Thanks to Chris Caputo.

    *) Bugfix: connections with some gRPC backends might not be cached when
       using the "keepalive" directive.

    *) Bugfix: a socket leak might occur when using the "error_page"
       directive to redirect early request processing errors, notably
       errors with code 400.

    *) Bugfix: the "return" directive did not change the response code when
       returning errors if the request was redirected by the "error_page"
       directive.

    *) Bugfix: standard error pages and responses of the
       ngx_http_autoindex_module module used the "bgcolor" attribute, and
       might be displayed incorrectly when using custom color settings in
       browsers.
       Thanks to Nova DasSarma.

    *) Change: the logging level of the "no suitable key share" and "no
       suitable signature algorithm" SSL errors has been lowered from
       "crit" to "info".
-- Maxim Dounin http://nginx.org/ From kworthington at gmail.com Tue Sep 25 15:57:29 2018 From: kworthington at gmail.com (Kevin Worthington) Date: Tue, 25 Sep 2018 11:57:29 -0400 Subject: [nginx-announce] nginx-1.15.4 In-Reply-To: <20180925152522.GG56558@mdounin.ru> References: <20180925152522.GG56558@mdounin.ru> Message-ID: Hello Nginx users, Now available: Nginx 1.15.4 for Windows https://kevinworthington.com/ nginxwin1154 (32-bit and 64-bit versions) These versions are to support legacy users who are already using Cygwin based builds of Nginx. Officially supported native Windows binaries are at nginx.org. Announcements are also available here: Twitter http://twitter.com/kworthington Google+ https://plus.google.com/+KevinWorthington/ Thank you, Kevin -- Kevin Worthington kworthington *@* (gmail] [dot} {com) https://kevinworthington.com/ https://twitter.com/kworthington https://plus.google.com/+KevinWorthington/ On Tue, Sep 25, 2018 at 11:25 AM, Maxim Dounin wrote: > Changes with nginx 1.15.4 25 Sep > 2018 > > *) Feature: now the "ssl_early_data" directive can be used with > OpenSSL. > > *) Bugfix: in the ngx_http_uwsgi_module. > Thanks to Chris Caputo. > > *) Bugfix: connections with some gRPC backends might not be cached when > using the "keepalive" directive. > > *) Bugfix: a socket leak might occur when using the "error_page" > directive to redirect early request processing errors, notably > errors > with code 400. > > *) Bugfix: the "return" directive did not change the response code when > returning errors if the request was redirected by the "error_page" > directive. > > *) Bugfix: standard error pages and responses of the > ngx_http_autoindex_module module used the "bgcolor" attribute, and > might be displayed incorrectly when using custom color settings in > browsers. > Thanks to Nova DasSarma. > > *) Change: the logging level of the "no suitable key share" and "no > suitable signature algorithm" SSL errors has been lowered from > "crit" > to "info". 
>
>
> --
> Maxim Dounin
> http://nginx.org/
> _______________________________________________
> nginx-announce mailing list
> nginx-announce at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx-announce
>
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From lahiruprasad at gmail.com  Tue Sep 25 18:32:48 2018
From: lahiruprasad at gmail.com (Lahiru Prasad)
Date: Wed, 26 Sep 2018 00:02:48 +0530
Subject: Cache POST requests
Message-ID:

Hi,

What is the best way to cache POST requests in Nginx. I'm familiar with
using redis module to cache GET requests. But is it possible to use the
same for POST ?

Search showed that Nginx POST caching possible via disk cache. I'm thinking
whether it would be a good idea to use a RAM disk for this.

Regards,
Lahiru Prasad.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From daniel at mostertman.org  Tue Sep 25 18:52:05 2018
From: daniel at mostertman.org (Daniël Mostertman)
Date: Tue, 25 Sep 2018 20:52:05 +0200
Subject: Cache POST requests
In-Reply-To:
References:
Message-ID: <9d62f512-d4e5-5fef-c37f-09a564f502b7@mostertman.org>

Hi,

On 2018-09-25 20:32, Lahiru Prasad wrote:
> What is the best way to cache POST requests in Nginx. I'm familiar
> with using redis module to cache GET requests. But is it possible to
> use the same for POST ?

It's possible to cache POST-requests, but it's generally not something
you want to do. POST-data is in most cases more sensitive than GET-data,
for, for instance, login-details. Use it with care. It needs to be
specifically enabled.

> Search showed that Nginx POST caching possible via disk cache. I'm
> thinking whether it would be a good idea to use a RAM disk for this.

This is certainly possible. I'm doing precisely this for my
DNS-over-HTTPS setup.
It required me, however, to be a bit inventive to get a populated
$request_body to use as part of the cache key, but this was the result
(open to suggestions as to avoid "mirror" here, but haven't really found
any other proper solution):

http {
    proxy_cache_path        /dev/shm/dns
                            levels=1:2
                            keys_zone=dns:5m
                            max_size=20m
                            inactive=1d
                            use_temp_path=off; # leave this to off for performance reasons!

    server {
        ...

        # DNS over HTTPS
        location = /dns-query {
            expires off;
            mirror =; # Don't touch, this is necessary to populate $request_body!

            ## Begin - FastCGI caching.
            proxy_cache             dns;
            proxy_cache_methods     GET HEAD POST;
            proxy_cache_key         "$scheme$request_method$host$request_uri$request_body";
            proxy_cache_valid       200 1m; # this is the only valid response of the proxy.
            proxy_ignore_headers    "Cache-Control"
                                    "Expires"
                                    "Set-Cookie";
            proxy_cache_use_stale   error
                                    timeout
                                    updating
                                    http_429
                                    http_500
                                    http_503;
            proxy_cache_background_update   on;
            ## End - FastCGI caching

            proxy_pass http://[::1]:8553;
        }
    }
}

Kind regards,

Daniël Mostertman
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s Type: application/pkcs7-signature Size: 3994 bytes Desc: S/MIME Cryptographic Signature URL: From nginx-forum at forum.nginx.org Wed Sep 26 00:20:21 2018 From: nginx-forum at forum.nginx.org (Frank_Mascarell) Date: Tue, 25 Sep 2018 20:20:21 -0400 Subject: Configuration problem: request default 15.15.15.15/ not working In-Reply-To: <000a01d454a7$bffa98d0$3fefca70$@roze.lv> References: <000a01d454a7$bffa98d0$3fefca70$@roze.lv> Message-ID: <60147c1602df06a5410a247cd0425339.NginxMailingListEnglish@forum.nginx.org> I've also tried adding "/" and throwing the same error. I have also added to the .conf file: location = / { include proxy_params; proxy_pass http://unix:/run/gunicorn.sock; } before the fragment location / {......} with the same error. This error is very strange. The configuration is very simple, but I can not find the problem. Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281393,281411#msg-281411 From nginx-forum at forum.nginx.org Wed Sep 26 08:49:42 2018 From: nginx-forum at forum.nginx.org (anish10dec) Date: Wed, 26 Sep 2018 04:49:42 -0400 Subject: Enabling "Transfer-Encoding : chunked" In-Reply-To: <87ae8f6ef21391fe60f26b6530c2fd3b.NginxMailingListEnglish@forum.nginx.org> References: <20180924150439.GW56558@mdounin.ru> <87ae8f6ef21391fe60f26b6530c2fd3b.NginxMailingListEnglish@forum.nginx.org> Message-ID: <1e90960745a231b0b0523640a7221207.NginxMailingListEnglish@forum.nginx.org> We are using Nginx with DAV Module , where encoder is pushing the content. These content when being accessed is not coming with header "Transfer-Encoding : chunked" though these header is being added by Encoder. 
Below is version details : nginx version: nginx/1.10.2 built by gcc 4.8.5 20150623 (Red Hat 4.8.5-16) (GCC) built with OpenSSL 1.0.2k-fips 26 Jan 2017 TLS SNI support enabled configure arguments: --with-http_ssl_module --with-http_realip_module --with-http_addition_module --with-http_sub_module --with-http_dav_module --add-module=/opt/nginx-dav-ext-module-master --with-http_flv_module --with-http_mp4_module --with-http_gunzip_module --with-http_gzip_static_module --with-http_random_index_module --with-http_secure_link_module --with-http_stub_status_module --with-http_auth_request_module --with-mail --with-mail_ssl_module --with-file-aio --with-ipv6 Below is the nginx configuration where encoder is pushing the content on Nginx running on Port 81 location /packagerx { root /ram/streams_live/packagerx; dav_methods PUT DELETE MKCOL COPY MOVE; dav_ext_methods PROPFIND OPTIONS; create_full_put_path on; dav_access user:rw group:rw all:r; autoindex on; client_max_body_size 100m; } Below is the configuration from which Nginx running on Port 80 is used for accessing the content location / { root /ram/streams_live/packagerx; expires 1h; access_log /usr/local/nginx/logs/access_client.log lt-custom; proxy_buffering off; chunked_transfer_encoding on; types { application/dash+xml mpd; application/vnd.apple.mpegurl m3u8; video/mp2t ts; video/x-m4v m4v; audio/x-m4a m4a; text/html html htm shtml; text/css css; text/xml xml; image/gif gif; image/jpeg jpeg jpg; application/javascript js; application/atom+xml atom; application/rss+xml rss; text/mathml mml; text/plain txt; } } Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281371,281413#msg-281413 From stefan.mueller.83 at gmail.com Wed Sep 26 10:01:25 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Wed, 26 Sep 2018 12:01:25 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: 
<1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> Message-ID: <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> An HTML attachment was scrubbed... URL: From r at roze.lv Wed Sep 26 10:32:01 2018 From: r at roze.lv (Reinis Rozitis) Date: Wed, 26 Sep 2018 13:32:01 +0300 Subject: Configuration problem: request default 15.15.15.15/ not working In-Reply-To: <60147c1602df06a5410a247cd0425339.NginxMailingListEnglish@forum.nginx.org> References: <000a01d454a7$bffa98d0$3fefca70$@roze.lv> <60147c1602df06a5410a247cd0425339.NginxMailingListEnglish@forum.nginx.org> Message-ID: <000001d45584$28fa66d0$7aef3470$@roze.lv> > I've also tried adding "/" and throwing the same error. I have also added to the > .conf file: > > location = / { > include proxy_params; > proxy_pass http://unix:/run/gunicorn.sock; } > > before the fragment location / {......} with the same error. > This error is very strange. The configuration is very simple, but I can not find the > problem. Again the error is not coming from nginx but your backend (you either didn't add the "/" handling in the right place or maybe didn't restart the gunicorn workers afterwards). The configuration change you made doesn't make any difference in the way nginx operated before, what I meant was something like this: location = / { return 301 /inicio/; } Obviously this is just a workaround and if you can manage to fix the backend this location block is not needed. 
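[Spelled out against the server block from the original post, the nginx-side workaround Reinis suggests would look like this (a sketch, not a tested drop-in):

```nginx
server {
    listen 443 ssl http2;
    server_name 15.15.15.15;

    # Exact-match block: only the bare "/" request is redirected, before it
    # ever reaches gunicorn, since the Django urlconf has no handler for it.
    location = / {
        return 301 /inicio/;
    }

    # Everything else is still proxied to the application as before.
    location / {
        include proxy_params;
        proxy_pass http://unix:/run/gunicorn.sock;
    }
}
```

Because `location = /` is an exact match, it takes precedence over the prefix `location /` for the root URL only.]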
rr

From r at roze.lv  Wed Sep 26 10:52:45 2018
From: r at roze.lv (Reinis Rozitis)
Date: Wed, 26 Sep 2018 13:52:45 +0300
Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN
In-Reply-To: <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com>
References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com>
Message-ID: <000101d45587$0e87be30$2b973a90$@roze.lv>

> I added include for the location config files, maybe it makes it better
> readable, but still no clue how to reach the UNIX socket proxied
> webserver in LAN.

It's a bit unclear what is the problem or what you want to achieve?

The nginx can't connect/proxy_pass to the socket files (what's the error)?

Also I'm not sure how LAN goes together with unix socket files which are
meant for local process communication (IPC) inside a single server instance.
Is there a single server just with nginx and some other services
(node/python etc) which create those socket files (/home/app1; /home/app2
..) or you are trying to proxy some other applications which reside on
other devices/servers inside LAN (to expose to WAN)?

rr

From stefan.mueller.83 at gmail.com  Wed Sep 26 11:03:54 2018
From: stefan.mueller.83 at gmail.com (Stefan Mueller)
Date: Wed, 26 Sep 2018 13:03:54 +0200
Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN
In-Reply-To: <000101d45587$0e87be30$2b973a90$@roze.lv>
References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv>
Message-ID:

I've just entered the office :(. I will try to give you more details later
this day.

On Wed, Sep 26, 2018 at 12:52, Reinis Rozitis wrote:
> > It's a bit unclear what is the problem or what you want to achieve? > > The nginx can't connect/proxy_pass to the socket files (what's the error)? > > > Also I'm not sure how LAN goes together with unix socket files which are > ment for local process communication (IPC) inside a single server instance. > Is there a single server just with nginx and some other services > (node/python etc) which create those socket files (/home/app1; /home/app2 > ..) or you are trying to proxy some other applications which reside on > other devices/servers inside LAN (to expose to WAN)? > > > rr > > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx > -------------- next part -------------- An HTML attachment was scrubbed... URL: From mdounin at mdounin.ru Wed Sep 26 12:14:09 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Wed, 26 Sep 2018 15:14:09 +0300 Subject: Enabling "Transfer-Encoding : chunked" In-Reply-To: <1e90960745a231b0b0523640a7221207.NginxMailingListEnglish@forum.nginx.org> References: <20180924150439.GW56558@mdounin.ru> <87ae8f6ef21391fe60f26b6530c2fd3b.NginxMailingListEnglish@forum.nginx.org> <1e90960745a231b0b0523640a7221207.NginxMailingListEnglish@forum.nginx.org> Message-ID: <20180926121408.GK56558@mdounin.ru> Hello! On Wed, Sep 26, 2018 at 04:49:42AM -0400, anish10dec wrote: > We are using Nginx with DAV Module , where encoder is pushing the content. > These content when being accessed is not coming with header > "Transfer-Encoding : chunked" though these header is being added by > Encoder. This is not going to work. The DAV module only makes files available once they are fully uploaded, while for the low-latency live HLS streaming the last segment needs to be sent to the clients while it is being produced. 
As previously suggested, if you want low latency live HLS streaming to
work, you'll have to use proxying for the last segment (the one which is
being written to).

--
Maxim Dounin
http://mdounin.ru/

From rob at cow-frenzy.co.uk  Wed Sep 26 15:53:44 2018
From: rob at cow-frenzy.co.uk (Rob Fulton)
Date: Wed, 26 Sep 2018 16:53:44 +0100
Subject: Nginx caching proxy dns name even when using variables
Message-ID: <8ba83b8a-4604-53cd-29f2-a966b3a20037@cow-frenzy.co.uk>

Hi,

I'm using nginx to proxy to a host with a rapidly changing dns entry but
I can't seem to get the proxy command to re-query dns using the variable
method suggested. The following is an excerpt from my config:

server {
    listen 443 ssl;
    resolver 127.0.0.1 valid=20s;
    set $proxy_server somehostname.com;

    location / {
        proxy_pass https://$proxy_server/blue/content$request_uri;

I'm using nginx 1.14. Watching my dns logs I see no requests following
nginx starting up. The upstream_addr value in my nginx logs also doesn't
change.

Any suggestions of why this isn't working as expected?

Regards

Rob

From nginx-forum at forum.nginx.org  Wed Sep 26 19:36:05 2018
From: nginx-forum at forum.nginx.org (Frank_Mascarell)
Date: Wed, 26 Sep 2018 15:36:05 -0400
Subject: Configuration problem: request default 15.15.15.15/ not working
In-Reply-To:
References:
Message-ID: <2767752265e4171d3cac2708068ec1e5.NginxMailingListEnglish@forum.nginx.org>

Effectively it was necessary to restart gunicorn every time I modify the
file urls.py so that the changes take effect. I did not know that. I did it
like that:

$ systemctl daemon-reload
$ systemctl restart gunicorn

Thanks for the help.
Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281393,281423#msg-281423 From stefan.mueller.83 at gmail.com Wed Sep 26 20:42:43 2018 From: stefan.mueller.83 at gmail.com (=?UTF-8?Q?Stefan_M=c3=bcller?=) Date: Wed, 26 Sep 2018 22:42:43 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> Message-ID: An HTML attachment was scrubbed... URL: From r at roze.lv Thu Sep 27 10:14:29 2018 From: r at roze.lv (Reinis Rozitis) Date: Thu, 27 Sep 2018 13:14:29 +0300 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> Message-ID: <000001d4564a$e0981960$a1c84c20$@roze.lv> > I have a Synology NAS what runs a nginx as default web server to run all their apps. I would like to extend it to meet the following. > > The purposes is that if the useraccount webapp1 is compromised, it will only affect webaoos1's web server.. and repeat this for all accounts/websites/whatever you want to keep separated. this approach use some more ram than having a single nginx instance do everything directly. > > Besides the question for the optimal setup to realize this While technically you could run per-user nginx listening on an unix socket and then make a proxy on top of those while doable it feels a bit cumbersome (at least to me). Usually what gets compromised is the (dynamic) backend application (php/python/perl/lua etc) not the nginx/webserver itself, also nginx by default doesn't run under root but 'nobody'. 
root is only needed on startup for the master process to open 80/443 (ports below 1024) then all the workers switch to an unprivileged user. One way of doing this would be instead of launching several nginxes just run the backend processes (like php-fpm, gunicorns etc) under particular users and let nginx communicate to those via sockets. I'm not familiar how Synology NAS internally separates different user processes but it has Docker support ( https://www.synology.com/en-global/dsm/feature/docker ) and even Virtual Machine Manager which technically would be a better user / application isolation. > I'm wondering how I can call the web server locally, within my LAN if I call them by the NAS's IP. It depends on your network topology. Does the Synology box has only LAN interface? Then you either need to configure portforwarding on your router or make a server/device which has both lan/wan interfaces (DMZ) and then can expose either on tcp level (for example via iptables) or via http proxy the internal websites/resources. If you make a virtual machine for each user you can then assign a separate LAN or WAN ip for each instance. But this kind of gets out of the scope of this mailing list. rr From marcin.wanat at gmail.com Thu Sep 27 12:51:25 2018 From: marcin.wanat at gmail.com (Marcin Wanat) Date: Thu, 27 Sep 2018 14:51:25 +0200 Subject: Multiple upstream backup directives in stream module Message-ID: Hi, i am using latest (1.15.4) nginx with stream module. I am trying to create config with one primary server that will accept no more than 10 connections and multiple backups that will also receive up to 10 connections only when previous backup server is already full. As exact behavior of multiple backup directives is not well explained in documentation i would like to ask if i am doing right. 
My current configuration is:

stream {
    upstream backend {
        zone upstream_backend 64k;
        server 10.0.1.1:9306 max_conns=10;
        server 10.0.1.2:9306 max_conns=10 backup;
        server 10.0.1.3:9306 max_conns=10 backup;
        server 10.0.1.4:9306 backup;
    }
    server {
        listen 127.0.0.1:9306;
        proxy_connect_timeout 1s;
        proxy_timeout 3s;
        proxy_pass backend;
    }
}

I would like it to work like this:

When we have up to 10 concurrent connections, all should always go to primary 10.0.1.1

When we have 25 concurrent connections it should work like this:
-First 10 connections go to primary 10.0.1.1
-Next 10 connections go to backup 10.0.1.2
-Next 5 connections go to backup 10.0.1.3

When we have 45 concurrent connections it should work like this:
-First 10 connections go to primary 10.0.1.1
-Next 10 connections go to backup 10.0.1.2
-Next 10 connections go to backup 10.0.1.3
-Next 15 connections go to backup 10.0.1.4

Will multiple backup directives work as I expected or will they just round robin between each of them up to the max_conns limit?

Regards, Marcin Wanat -------------- next part -------------- An HTML attachment was scrubbed... URL: From rob at cow-frenzy.co.uk Thu Sep 27 14:27:03 2018 From: rob at cow-frenzy.co.uk (Rob Fulton) Date: Thu, 27 Sep 2018 15:27:03 +0100 Subject: Nginx caching proxy dns name even when using variables In-Reply-To: <8ba83b8a-4604-53cd-29f2-a966b3a20037@cow-frenzy.co.uk> References: <8ba83b8a-4604-53cd-29f2-a966b3a20037@cow-frenzy.co.uk> Message-ID: Hi, I've done some further testing on this today and discovered that the configuration works correctly when the proxy_pass url is accessed via http; I can see dns queries for the proxy_server url every minute as per the ttl. The moment I change the url to https, this stops. Is this a known limitation?
Regards Rob > On 26 Sep 2018, at 16:53, Rob Fulton wrote: > > Hi, > > I'm using nginx to proxy to a host with a rapidly changing dns entry but I can't seem to get the proxy command the re-query dns using the vairable method suggested, the following is a excerpt from my config : > > server { > > listen 443 ssl; > > resolver 127.0.0.1 valid=20s; > set $proxy_server somehostname.com; > > location / { > > proxy_pass https://$proxy_server/blue/content$request_uri; > > > I'm using nginx 1.14, watching my dns logs I see no requests following nginx starting up. The upstream_addr value in my nginx logs also doesn't change. > > Any suggestions of why this isn't working as expected? > > Regards > > Rob > > > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx From mdounin at mdounin.ru Thu Sep 27 14:53:22 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Thu, 27 Sep 2018 17:53:22 +0300 Subject: Nginx caching proxy dns name even when using variables In-Reply-To: References: <8ba83b8a-4604-53cd-29f2-a966b3a20037@cow-frenzy.co.uk> Message-ID: <20180927145322.GO56558@mdounin.ru> Hello! On Thu, Sep 27, 2018 at 03:27:03PM +0100, Rob Fulton wrote: > I?ve done some further testing on this today and discovered that > the configuration works correctly when the proxy_pass url is > accessed via http, I can see dns queries for the proxy_server > url every minute as per the ttl. The moment I change the url to > https, this stops. Is this a known limitation? Most likely, the problem is that you have proxy_pass https://somehostname.com; somewhere in the configuration, without variables - so nginx resolves the name during configuration parsing. As a result, your construct set $proxy_server somehostname.com; proxy_pass https://$proxy_server; does not try to resolve the name, but rather ends up using the existing upstream for somehostname.com. 
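To make Maxim's point above concrete, a working variable-based setup might be shaped as follows (an illustrative sketch only, not a configuration from the thread; the hostname and resolver address are placeholders):

```nginx
server {
    listen 443 ssl;

    resolver 127.0.0.1 valid=20s;

    # A variable in proxy_pass forces runtime resolution through "resolver";
    # a literal hostname would be resolved once, at configuration load.
    set $proxy_server somehostname.com;

    location / {
        proxy_pass https://$proxy_server/blue/content$request_uri;
    }

    # Make sure no other server/location block still contains a
    # non-variable proxy_pass to the same host, e.g.:
    #   proxy_pass https://somehostname.com;   # resolved at startup only
}
```

The key point is that one leftover non-variable proxy_pass for the same host creates an implicit upstream that shadows the variable form.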
If you want the name to be always resolved, comment out the proxy_pass without variables and/or use the variables there as well. -- Maxim Dounin http://mdounin.ru/ From arut at nginx.com Thu Sep 27 16:55:40 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Thu, 27 Sep 2018 19:55:40 +0300 Subject: Multiple upstream backup directives in stream module In-Reply-To: References: Message-ID: <20180927165540.GB92594@Romans-MacBook-Air.local> Hi, On Thu, Sep 27, 2018 at 02:51:25PM +0200, Marcin Wanat wrote: > Hi, > > i am using latest (1.15.4) nginx with stream module. > > I am trying to create config with one primary server that will accept no > more than 10 connections and multiple backups that will also receive up to > 10 connections only when previous backup server is already full. > > As exact behavior of multiple backup directives is not well explained in > documentation i would like to ask if i am doing right. There's only one level of upstream backup in nginx. Once all primary servers fail, the balancer switches to backup servers. All backup servers are equal. One of them is chosen based on the balancing algorithm. If it fails, another one is chosen and so on, just like it happens with primary servers. If you want several levels of backup, you still can do it with 'error_page 502'. Once all servers in your initial location fail (primary + 1st level backup), error 502 is generated. Then you continue in another location specified in error_page and repeat proxying with other servers (2nd level backup). 
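For the HTTP case, the error_page fallback described above might be sketched like this (a hypothetical configuration, not from the thread; addresses and ports are placeholders):

```nginx
upstream primary_and_backup1 {
    server 10.0.1.1:8080 max_conns=10;
    server 10.0.1.2:8080 max_conns=10 backup;   # 1st level backup
}

upstream backup2 {
    server 10.0.1.3:8080 max_conns=10;          # 2nd level backup
}

server {
    listen 80;

    location / {
        proxy_pass http://primary_and_backup1;
        # When the primary and the first-level backup both fail,
        # nginx generates a 502, handled by the named location below.
        error_page 502 = @second_level;
    }

    location @second_level {
        proxy_pass http://backup2;
    }
}
```

The named location repeats proxying against the next tier, which is how several levels of backup can be chained in the http module.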
> My current configuration is: > stream { > upstream backend { > zone upstream_backend 64k; > server 10.0.1.1:9306 max_conns=10; > server 10.0.1.2:9306 max_conns=10 backup; > server 10.0.1.3:9306 max_conns=10 backup; > server 10.0.1.4:9306 backup; > } > server { > listen 127.0.0.1:9306; > proxy_connect_timeout 1s; > proxy_timeout 3s; > proxy_pass backend; > } > } > > I would like it to work like this: > > When we have up to 10 concurrent connections, all should go always to > primary 10.0.1.1 > > When we have 25 concurrent connections it should work like this: > -First 10 connections go to primary 10.0.1.1 > -Next 10 connections go to backup 10.0.1.2 > -Next 5 connections go to backup 10.0.1.3 > > When we have 45 concurrent connections it should work like this: > -First 10 connections go to primary 10.0.1.1 > -Next 10 connections go to backup 10.0.1.2 > -Next 10 connections go to backup 10.0.1.3 > -Next 15 connections go to backup 10.0.1.4 > > Will multiple backup directives work as i expected or will they just round > robin between each of them up to max_conns limit ? > > Regards, > Marcin Wanat > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From zchao1995 at gmail.com Fri Sep 28 08:56:10 2018 From: zchao1995 at gmail.com (Alex Zhang) Date: Fri, 28 Sep 2018 04:56:10 -0400 Subject: TLS1.3 ciphersuites configuration way Support Message-ID: Hello! It seems that OpenSSL has changed the way TLSv1.3 cipher suites are configured. According to the document https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_cipher_list.html, the function SSL_CTX_set_cipher_list isn't suitable for TLSv1.3; instead, SSL_CTX_set_ciphersuites should be used. While nginx still uses SSL_CTX_set_cipher_list to configure the SSL/TLS ciphers, which means the default TLS 1.3 cipher suites are used all the time. Is there any support plan for this?
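The split Alex describes is visible from any client library linked against OpenSSL 1.1.1+. For instance, with Python's ssl module (an illustrative sketch, not part of the thread, and assuming an OpenSSL 1.1.1+ build), the TLS 1.3 suites survive a cipher-list filter that names none of them:

```python
import ssl

# Build a TLS context and restrict the <= TLS 1.2 cipher list
# to a single suite via the classic cipher-list API.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.set_ciphers("ECDHE-RSA-AES128-GCM-SHA256")

names = [c["name"] for c in ctx.get_ciphers()]

# With OpenSSL 1.1.1+ the TLS 1.3 suites (TLS_AES_..., TLS_CHACHA20_...)
# are configured through a separate API and therefore remain enabled.
tls13 = [n for n in names if n.startswith("TLS_")]
print(tls13)
```

This mirrors what happens in nginx: ssl_ciphers feeds SSL_CTX_set_cipher_list, which leaves the TLS 1.3 suite selection untouched.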
Best Regards Alex Zhang -------------- next part -------------- An HTML attachment was scrubbed... URL: From bee.lists at gmail.com Fri Sep 28 12:43:23 2018 From: bee.lists at gmail.com (Bee.Lists) Date: Fri, 28 Sep 2018 08:43:23 -0400 Subject: Bouncing to Default Server Block Message-ID: I have a test server up with 3 domains. First domain redirects port 80 to ssl 443. Second domain is just port 80. Third domain is just port 80. Second domain isn?t showing up, pointing to first domain. Third domain is working. Why would this happen? nginx.conf: # user nginx; worker_processes auto; error_log /var/log/nginx/error.log; pid /run/nginx.pid; events { worker_connections 1024; } http { include /etc/nginx/mime.types; keepalive_timeout 65s; gzip on; index index.html; passenger_app_env development; passenger_friendly_error_pages on; include /etc/nginx/conf.d/*.conf; server { listen 80; server_name domain1.ca www.domain1.ca; return 301 https://$server_name$request_uri; } server { listen 443 default_server; server_name domain1.ca www.domain1.ca; access_log /var/log/nginx/access_domain1.log; error_log /var/log/nginx/error_domain1.log warn; error_page 404 /404.html; client_max_body_size 3M; root /var/www/domain1/public; passenger_enabled on; passenger_base_uri /; location / { autoindex off; } location = /img/favicon.ico { access_log off;} ssl on; ssl_certificate /etc/nginx/ssl/domain1_ca.crt; ssl_certificate_key /etc/nginx/ssl/domain1.key; ssl_protocols TLSv1 TLSv1.1 TLSv1.2; ssl_ciphers ALL:!aNULL:!EXPORT56:RC4+RSA:+HIGH:+MEDIUM:+LOW:+SSLv2:+EXP; } server { listen 80; server_name domain2 www.domain2; access_log /var/log/nginx/access_vp.log; error_log /var/log/nginx/error_vp.log debug; error_page 404 /404.html; root /var/www/domain2/public; passenger_enabled on; passenger_base_uri /; location / { autoindex off; # index /; } location = /favicon.ico { access_log off; log_not_found off; } location = /robots.txt { access_log off; log_not_found off; } sendfile off; } server { listen 
80; server_name domain3.com www.domain3.com; access_log /var/log/nginx/access_domain3.log; error_log /var/log/nginx/error_domain3.log error; error_page 404 /404.html; root /var/www/domain3/public; passenger_enabled on; passenger_base_uri /; location / { autoindex off; index /; } location = /img/favicon.ico { access_log off; log_not_found off; } location = /robots.txt { access_log off; log_not_found off; } sendfile off; } } Cheers, Bee From nginx-forum at forum.nginx.org Fri Sep 28 15:17:05 2018 From: nginx-forum at forum.nginx.org (wesdev) Date: Fri, 28 Sep 2018 11:17:05 -0400 Subject: Log DNS request Message-ID: I currently have a DNS proxy that I would like to sinkhole certain domains. For various reasons, I don't want to do this within a DNS server itself. Is there any way within the stream module to retrieve the domain from the question portion of the DNS request? I don't see any existing variables that can help me achieve this. Any help appreciated. Thanks Posted at Nginx Forum: https://forum.nginx.org/read.php?2,281443,281443#msg-281443 From r at roze.lv Fri Sep 28 15:19:30 2018 From: r at roze.lv (Reinis Rozitis) Date: Fri, 28 Sep 2018 18:19:30 +0300 Subject: Bouncing to Default Server Block In-Reply-To: References: Message-ID: <001d01d4573e$a729f280$f57dd780$@roze.lv> > First domain redirects port 80 to ssl 443. > Second domain is just port 80. > Third domain is just port 80. > > Second domain isn't showing up, pointing to first domain. Third domain is working. Why would this happen? If you are testing just with a browser, make sure you've cleaned the cache (or disable it, or use some other tools which don't have a cache, like wget for example).
If by chance you opened the http://domain2 before it was added to the nginx configuration (or before nginx got reloaded) your browser cached the domain1 301 redirect and isn't actually doing a real request to the webserver anymore rr From arut at nginx.com Fri Sep 28 15:22:31 2018 From: arut at nginx.com (Roman Arutyunyan) Date: Fri, 28 Sep 2018 18:22:31 +0300 Subject: Multiple upstream backup directives in stream module In-Reply-To: <20180927165540.GB92594@Romans-MacBook-Air.local> References: <20180927165540.GB92594@Romans-MacBook-Air.local> Message-ID: <20180928152231.GE92594@Romans-MacBook-Air.local> Hi, On Thu, Sep 27, 2018 at 07:55:40PM +0300, Roman Arutyunyan wrote: > Hi, > > On Thu, Sep 27, 2018 at 02:51:25PM +0200, Marcin Wanat wrote: > > Hi, > > > > i am using latest (1.15.4) nginx with stream module. > > > > I am trying to create config with one primary server that will accept no > > more than 10 connections and multiple backups that will also receive up to > > 10 connections only when previous backup server is already full. > > > > As exact behavior of multiple backup directives is not well explained in > > documentation i would like to ask if i am doing right. > > There's only one level of upstream backup in nginx. Once all primary servers > fail, the balancer switches to backup servers. All backup servers are equal. > One of them is chosen based on the balancing algorithm. If it fails, another > one is chosen and so on, just like it happens with primary servers. > > If you want several levels of backup, you still can do it with 'error_page 502'. > Once all servers in your initial location fail (primary + 1st level backup), > error 502 is generated. Then you continue in another location specified in > error_page and repeat proxying with other servers (2nd level backup). Sorry, that was the answer for HTTP. In the stream module there's nothing like error_page, so this approach is not possible. 
> > My current configuration is: > > stream { > > upstream backend { > > zone upstream_backend 64k; > > server 10.0.1.1:9306 max_conns=10; > > server 10.0.1.2:9306 max_conns=10 backup; > > server 10.0.1.3:9306 max_conns=10 backup; > > server 10.0.1.4:9306 backup; > > } > > server { > > listen 127.0.0.1:9306; > > proxy_connect_timeout 1s; > > proxy_timeout 3s; > > proxy_pass backend; > > } > > } > > > > > > I would like it to work like this: > > > > When we have up to 10 concurrent connections, all should go always to > > primary 10.0.1.1 > > > > When we have 25 concurrent connections it should work like this: > > -First 10 connections go to primary 10.0.1.1 > > -Next 10 connections go to backup 10.0.1.2 > > -Next 5 connections go to backup 10.0.1.3 > > > > When we have 45 concurrent connections it should work like this: > > -First 10 connections go to primary 10.0.1.1 > > -Next 10 connections go to backup 10.0.1.2 > > -Next 10 connections go to backup 10.0.1.3 > > -Next 15 connections go to backup 10.0.1.4 > > > > > > Will multiple backup directives work as i expected or will they just round > > robin between each of them up to max_conns limit ? > > > > > > Regards, > > Marcin Wanat > > > _______________________________________________ > > nginx mailing list > > nginx at nginx.org > > http://mailman.nginx.org/mailman/listinfo/nginx > > > -- > Roman Arutyunyan > _______________________________________________ > nginx mailing list > nginx at nginx.org > http://mailman.nginx.org/mailman/listinfo/nginx -- Roman Arutyunyan From bee.lists at gmail.com Fri Sep 28 15:48:45 2018 From: bee.lists at gmail.com (Bee.Lists) Date: Fri, 28 Sep 2018 11:48:45 -0400 Subject: Bouncing to Default Server Block In-Reply-To: <001d01d4573e$a729f280$f57dd780$@roze.lv> References: <001d01d4573e$a729f280$f57dd780$@roze.lv> Message-ID: <163D0B76-2EE2-420F-95FF-0162B176F1DD@gmail.com> I often clear the cache and restart nginx. 
> On Sep 28, 2018, at 11:19 AM, Reinis Rozitis wrote: > > If you are testing just with a browser make sure you've cleaned the cache (or disable it, or use some other tools which don't have cache (like wget for example)). > > If by chance you opened the http://domain2 before it was added to the nginx configuration (or before nginx got reloaded) your browser cached the domain1 301 redirect and isn't actually doing a real request to the webserver anymore > > rr Cheers, Bee From sca at andreasschulze.de Fri Sep 28 16:51:23 2018 From: sca at andreasschulze.de (A. Schulze) Date: Fri, 28 Sep 2018 18:51:23 +0200 Subject: TLS1.3 ciphersuites configuration way Support In-Reply-To: References: Message-ID: <8ce6077b-4f1f-4993-ec4b-0d877dda2219@andreasschulze.de> Am 28.09.18 um 10:56 schrieb Alex Zhang: > It seems that OpenSSL has changed the way TLSv1.3 cipher suites are configured. > According to the document https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_cipher_list.html, the function SSL_CTX_set_cipher_list isn't suitable for TLSv1.3; instead, > SSL_CTX_set_ciphersuites should be used. While nginx still uses SSL_CTX_set_cipher_list > to configure the SSL/TLS ciphers, which means the default cipher suites are used all the time. Hello, as far as I understand TLS1.3, there was a major redesign regarding cipher suites. It's no longer possible to combine key exchange, hash and cipher in so many ways. There are only 5 fixed cipher suites available (1), so the need to change them is significantly lower. Andreas (1) https://tools.ietf.org/html/rfc8446#appendix-B.4 From bee.lists at gmail.com Fri Sep 28 16:56:47 2018 From: bee.lists at gmail.com (Bee.Lists) Date: Fri, 28 Sep 2018 12:56:47 -0400 Subject: Bouncing to Default Server Block In-Reply-To: <163D0B76-2EE2-420F-95FF-0162B176F1DD@gmail.com> References: <001d01d4573e$a729f280$f57dd780$@roze.lv> <163D0B76-2EE2-420F-95FF-0162B176F1DD@gmail.com> Message-ID: It's fixed.
Thank you > On Sep 28, 2018, at 11:48 AM, Bee.Lists wrote: > > I often clear the cache and restart nginx. Cheers, Bee From stefan.mueller.83 at gmail.com Fri Sep 28 17:01:04 2018 From: stefan.mueller.83 at gmail.com (Stefan Mueller) Date: Fri, 28 Sep 2018 19:01:04 +0200 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: <000001d4564a$e0981960$a1c84c20$@roze.lv> References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> Message-ID: Hoi Reinis, I answered inline and applied colors for my (#6633ff) and your (#cc9933) text for better readability. Thanks a lot for your input. > I have a Synology NAS what runs a nginx as default web server to run all their apps. I would like to extend it to meet the following. > > The purposes is that if the useraccount webapp1 is compromised, it will only affect webapp1's web server.. and repeat this for all accounts/websites/whatever you want to keep separated. this approach use some more ram than having a single nginx instance do everything directly. > > Besides the question for the optimal setup to realize this While technically you could run per-user nginx listening on an unix socket and then make a proxy on top of those while doable it feels a bit cumbersome (at least to me). How do I do it exactly, regardless of whether it is cumbersome? Be it only for informational purposes, but it makes the entire conversation a bit easier. Combined with the outcome of the section it could outline all possible options (incl. pros and cons). Usually what gets compromised is the (dynamic) backend application (php/python/perl/lua etc) not the nginx/webserver itself, also nginx by default doesn't run under root but 'nobody'.
root is only needed on startup for the master process to open 80/443 (ports below 1024), then all the workers switch to an unprivileged user. So far I assumed that the workers start the backend application; the access to php is configured in the server block (my references are "What is the easiest way to enable PHP on nginx?" and "Serve PHP with PHP-FPM and NGINX"). My googling tells me that the PHP process usually runs with the permissions of the webserver. So I need to find a way that each web application (webapp1, webapp2, etc.) calls its PHPs using a unique user account. When I read "nginx + php run with different user id" and "changing php user to run as nginx user" it must be somehow possible. Could you share more information on how to achieve that? One way of doing this would be instead of launching several nginxes just run the backend processes (like php-fpm, gunicorns etc) under particular users and let nginx communicate to those via sockets. I'm not familiar with how the Synology NAS internally separates different user processes but it has Docker support ( https://www.synology.com/en-global/dsm/feature/docker ) and even Virtual Machine Manager, which technically would be a better user / application isolation. Unfortunately, my NAS does not support it. > I'm wondering how I can call the web server locally, within my LAN if I call them by the NAS's IP. It depends on your network topology. Does the Synology box have only a LAN interface? Then you either need to configure port forwarding on your router or make a server/device which has both lan/wan interfaces (DMZ) and then can expose either on tcp level (for example via iptables) or via http proxy the internal websites/resources. The NAS has only one LAN interface. You suggest a more complex solution than just simple NAT port forwarding, as explained in "Using router and internal LAN port forwarding device - Advice please :)". I have a simple router, the Zyxel NBG6616.
It seems that it supports DMZ, and if you refer to a static DHCP table by IP table then it is supported as well, but it doesn't look good for the http proxy. I still don't understand how to forward to UNIX sockets. Do I need custom port entries in the proxy part, like NASIP:80001 -> Webapp1ViaUNIXSocket NASIP:80002 -> Webapp2ViaUNIXSocket I could run a DNS server on the NAS if that simplifies it. If you make a virtual machine for each user you can then assign a separate LAN or WAN ip for each instance. VMs aren't supported, so it isn't an option. But this kind of gets out of the scope of this mailing list. rr -------------- next part -------------- An HTML attachment was scrubbed... URL: From r at roze.lv Fri Sep 28 18:49:16 2018 From: r at roze.lv (Reinis Rozitis) Date: Fri, 28 Sep 2018 21:49:16 +0300 Subject: Nginx as Reverse Proxy for multiple servers binded to proxy using UNIX sockets - how to reached in LAN In-Reply-To: References: <1abfde26-8aef-5e22-5c35-671d81826b6a@gmail.com> <1b6f8646-9b10-5cc8-993b-68ed92275ed9@gmail.com> <000101d45587$0e87be30$2b973a90$@roze.lv> <000001d4564a$e0981960$a1c84c20$@roze.lv> Message-ID: <000001d4575b$f4fc8fa0$def5aee0$@roze.lv> > how do I do it exactly, regardless of whether it is cumbersome? Well, you configure each individual nginx to listen ( https://nginx.org/en/docs/http/ngx_http_core_module.html#listen ) on a unix socket: Config on nginx1: .. events { } http { server { listen unix:/some/path/user1.sock; .. } } Config on nginx2: .. server { listen unix:/some/path/user2.sock; ...
} And then on the main server you configure the per-user virtualhosts to be proxied to the particular socket: server { listen 80; server_name user1.domain; location / { proxy_pass http://unix:/some/path/user1.sock; } } server { listen 80; server_name user2.domain; location / { proxy_pass http://unix:/some/path/user2.sock; } } (obviously it's just a mockup and you need to add everything else like http {} blocks, root paths, SSL certificates (if available) etc) > So far I assumed that the workers start the backend application; the access to php is configured in the server block (my references are What is the easiest way to enable PHP on nginx? and Serve PHP with PHP-FPM and NGINX). My googling tells me that the PHP process usually runs with the permissions of the webserver. Not exactly. nginx and php-fpm (the typical way of running php under nginx) are different processes/daemons, each with its own configuration; they communicate via FastCGI (http://nginx.org/en/docs/http/ngx_http_fastcgi_module.html ) over tcp or a unix socket, and both can run under different system users (php-fpm can even manage multiple pools, each under its own user and with different settings). The guide you linked on linode.com isn't fully correct: "The listen.owner and listen.group variables are set to www-data by default, but they need to match the user and group NGINX is running as." The users don't need to match, but the nginx user needs read/write permissions on the socket file (setting the same user just makes the guide simpler and less error prone). You can always put the nginx and php-fpm users in a group and make the socket file group writable (via listen.mode = 0660 in php-fpm.conf). > Unfortunately, my NAS does not support it While the Synologies are Linux-based, running somewhat complicated setups (user/app isolation) and exposing them to the WAN are maybe not the best option. It also beats the whole idea of DSM being a userfriendly centralized GUI tool.
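The per-user php-fpm pool arrangement described above could look roughly like this (a hypothetical pool file for illustration only; the user name, group name and paths are placeholders):

```ini
; /etc/php-fpm.d/user1.conf -- one pool per system user
[user1]
user = user1
group = user1

; Socket the pool listens on; nginx needs read/write access,
; granted here via a shared group and mode 0660.
listen = /var/run/php-fpm-user1.sock
listen.owner = user1
listen.group = www-shared
listen.mode = 0660

pm = ondemand
pm.max_children = 5
```

On the nginx side, that user's server block would then point fastcgi_pass at unix:/var/run/php-fpm-user1.sock, so each site's PHP runs under its own account while a single nginx serves them all.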
A regular pc/server with some native linux distribution (Ubuntu, Debian, Fedora, Opensuse etc) might be a better choice (and imho easier to experiment on) and you can always attach the NAS to the linux box (via NFS, samba/cifs, webdav etc). rr From mdounin at mdounin.ru Fri Sep 28 22:57:50 2018 From: mdounin at mdounin.ru (Maxim Dounin) Date: Sat, 29 Sep 2018 01:57:50 +0300 Subject: TLS1.3 ciphersuites configuration way Support In-Reply-To: References: Message-ID: <20180928225750.GR56558@mdounin.ru> Hello! On Fri, Sep 28, 2018 at 04:56:10AM -0400, Alex Zhang wrote: > It seems that OpenSSL has changed the way TLSv1.3 cipher suites are > configured. > According to the document > https://www.openssl.org/docs/man1.1.1/man3/SSL_CTX_set_cipher_list.html, the > function SSL_CTX_set_cipher_list isn?t suitable for TLSv1.3, instead, > SSL_CTX_set_ciphersuits should be used. While Nginx?s now still use > SSL_CTX_set_cipher_list > to configure the SSL/TLS ciphers, which leads to the default cipher suits > are used all the time. > > Is there any support plan for this? Yes, as long as the long-term approach to configure ciphers for different TLS versions will be clear. Right now the only thing which is clear is that the SSL_CTX_set_ciphersuits() interface is a band-aid which leads to various surprising and inconsistent results. See https://trac.nginx.org/nginx/ticket/1529 for details. -- Maxim Dounin http://mdounin.ru/ From zchao1995 at gmail.com Sat Sep 29 02:58:52 2018 From: zchao1995 at gmail.com (Alex Zhang) Date: Fri, 28 Sep 2018 22:58:52 -0400 Subject: TLS1.3 ciphersuites configuration way Support In-Reply-To: <20180928225750.GR56558@mdounin.ru> References: <20180928225750.GR56558@mdounin.ru> Message-ID: Hello Maxim! Thanks for the replay. I will patch our service to support this temporary way. -------------- next part -------------- An HTML attachment was scrubbed... URL: